[openstack-dev] [Savanna] Spark plugin status

2014-01-08 Thread Daniele Venzano

Hello,

We are finishing up development of the Spark plugin for Savanna.
In the next few days we will deploy it on an OpenStack cluster with real 
users to iron out the last few issues. Hopefully next week we will put 
the code in a public GitHub repository in beta status.


You can find the blueprint here:
https://blueprints.launchpad.net/savanna/+spec/spark-plugin

There are two things we need to release: the VM image and the code itself.
We created the image ourselves, and for the code we used the Vanilla 
plugin as a base.


We feel that our work could be interesting for others and we would like 
to see it integrated in Savanna. What is the best way to proceed?


We did not follow the Gerrit workflow until now because development 
happened internally.
I will prepare the GitHub repo with git-review and reference the 
blueprint in the commit. After that, would you prefer that I send the 
code for review immediately, or should I post a link here on the 
mailing list first for some feedback/discussion?


Thank you,
Daniele Venzano, Hoang Do and Vo Thanh Phuc

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Discuss the option delete_on_termination

2014-01-08 Thread Christopher Yeoh
On Thu, Jan 9, 2014 at 2:35 PM, 黎林果  wrote:

> Hi Chris,
> Thanks for your reply.
>
> It's not only hard-coded for swap volumes. In the '_create_instance'
> function in nova/compute/api.py, which creates instances, the
> '_prepare_image_mapping' function is called, and it hard-codes the
> value to True, too.
>
> values = block_device.BlockDeviceDict({
>     'device_name': bdm['device'],
>     'source_type': 'blank',
>     'destination_type': 'local',
>     'device_type': 'disk',
>     'guest_format': guest_format,
>     'delete_on_termination': True,
>     'boot_index': -1})
>
>
Just before that in _prepare_image_mapping is:

if virtual_name == 'ami' or virtual_name == 'root':
    continue

if not block_device.is_swap_or_ephemeral(virtual_name):
    continue


Chris
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack][Nova][Docker] Devstack with docker driver

2014-01-08 Thread Swapnil Kulkarni
Hi Eric,

I tried running the 'docker run' command without -d and it gives the
following error:

$ sudo docker run -d=false -p 5042:5000 -e SETTINGS_FLAVOR=openstack -e
OS_USERNAME=admin -e OS_PASSWORD=password -e OS_TENANT_NAME=admin -e
OS_GLANCE_URL=http://127.0.0.1:9292 -e OS_AUTH_URL=
http://127.0.0.1:35357/v2.0 docker-registry ./docker-registry/run.sh
lxc-start: No such file or directory - stat(/proc/16438/root/dev//console)
2014/01/09 06:36:15 Unable to locate ./docker-registry/run.sh

On the other hand, if I run the failing command with -d just after
stack.sh fails, it works fine:

sudo docker run -d -p 5042:5000 -e SETTINGS_FLAVOR=openstack -e
OS_USERNAME=admin -e OS_PASSWORD=password -e OS_TENANT_NAME=admin -e
OS_GLANCE_URL=http://127.0.0.1:9292 -e OS_AUTH_URL=
http://127.0.0.1:35357/v2.0 docker-registry ./docker-registry/run.sh
5b737f8d2282114c1a0cfc4f25bc7c9ef8c5da7e0d8fa7ed9ccee0be81cddafc

Best Regards,
Swapnil


On Wed, Jan 8, 2014 at 8:29 PM, Eric Windisch  wrote:

> On Tue, Jan 7, 2014 at 11:13 PM, Swapnil Kulkarni <
> swapnilkulkarni2...@gmail.com> wrote:
>
>> Let me know in case I can be of any help getting this resolved.
>>
>
> Please try running the failing 'docker run' command manually and without
> the '-d' argument. I've been able to reproduce an error myself, but wish
> to confirm that this matches the error you're seeing.
>
> Regards,
> Eric Windisch
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Discuss the option delete_on_termination

2014-01-08 Thread 黎林果
Hi Chris,
Thanks for your reply.

It's not only hard-coded for swap volumes. In the '_create_instance'
function in nova/compute/api.py, which creates instances, the
'_prepare_image_mapping' function is called, and it hard-codes the
value to True, too.

values = block_device.BlockDeviceDict({
    'device_name': bdm['device'],
    'source_type': 'blank',
    'destination_type': 'local',
    'device_type': 'disk',
    'guest_format': guest_format,
    'delete_on_termination': True,
    'boot_index': -1})

I found it is set to true when creating a new instance and set to false
for an existing instance.

Regards,
Lee

2014/1/9 Christopher Yeoh :
> On Thu, Jan 9, 2014 at 9:25 AM, 黎林果  wrote:
>>
>> Hi All,
>>
>> When attaching a volume while creating a server, the API request
>> contains 'block_device_mapping', such as:
>> "block_device_mapping": [
>> {
>> "volume_id": "",
>> "device_name": "/dev/vdc",
>> "delete_on_termination": "true"
>> }
>> ]
>>
>> It allows the option 'delete_on_termination', but in the code it's
>> hardcoded to True. Why?
>
>
> I don't think it does hardcode it to true. The API appears to be passing
> it down correctly. I can see one case of delete_on_termination being set
> to true, though that's for ephemeral or swap volumes.
>
>>
>> Another situation: when attaching a volume to an existing server, there
>> is no 'delete_on_termination' option.
>>
>>   Should we add 'delete_on_termination' when attaching a volume to an
>> existing server, or modify the value from the params?
>>
>
> I think adding a delete_on_termination option when attaching a volume to
> an existing server is reasonable. Perhaps add it to the v3 API?
>
> Regards,
>
> Chris
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [wsme] Undefined attributes in WSME

2014-01-08 Thread Jamie Lennox
Is there any way to have WSME pass through arbitrary attributes to the created 
object? There is nothing that I can see in the documentation or code that would 
seem to support this.

In Keystone we have a situation where arbitrary data could be attached to our 
resources. For example, there are a certain number of predefined attributes for 
a user, including name and email, but if you want to include an address you 
just add 'address': 'value' to the resource creation request and it will be 
saved and returned to you when you request the resource.
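
For context, this is roughly how attributes are normally declared on a WSME
type; a minimal, illustrative sketch (the User type and its fields are made
up for the example, they are not Keystone's actual classes):

# Illustrative sketch only: a WSME type declares its attributes up front,
# so an undeclared key such as 'address' in the request body has no
# corresponding attribute on the parsed object.
from wsme import types as wtypes


class User(wtypes.Base):
    # Predefined attributes, similar to the keystone example above.
    name = wtypes.text
    email = wtypes.text
    # Nothing here would capture arbitrary extra keys, which is exactly
    # what the question below is asking about.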

Ignoring whether this is a good idea or not (it's done), is there an option I 
missed, or are there any plans/ways to support something like this?

Thanks, 

Jamie

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][qa] Intermittent failure of tempest test test_network_basic_ops

2014-01-08 Thread Jay Pipes
On Wed, 2014-01-08 at 18:46 -0800, Sukhdev Kapur wrote:
> Dear fellow developers, 

> I am running a few Neutron tempest tests and noticing an intermittent
> failure of tempest.scenario.test_network_basic_ops.

> I ran this test 50+ times and am getting intermittent failures. The
> pass rate is approx. 70%. The other 30% of the time it fails, mostly in
> _check_public_network_connectivity.

> Has anybody seen this? 
> If there is a fix or work around for this, please share your wisdom. 

Unfortunately, I believe you are running into this bug:

https://bugs.launchpad.net/nova/+bug/1254890

The bug is Triaged in Nova (meaning, there is a suggested fix in the bug
report). It's currently affecting the gate negatively and is certainly
on the radar of the various PTLs affected.

Best,
-jay



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Solum][Pecan][Security] Pecan SecureController vs. Nova policy

2014-01-08 Thread Adam Young
We are working on cleaning up the Keystone code with an eye to Oslo and 
reuse:


https://review.openstack.org/#/c/56333/

On 01/08/2014 02:47 PM, Georgy Okrokvertskhov wrote:

Hi,

Keeping policy control in one place is a good idea. We can use the standard 
policy approach and keep the access control configuration in a JSON file, as 
is done in Nova and other projects.
Keystone uses a wrapper function for methods. Here is the wrapper code: 
https://github.com/openstack/keystone/blob/master/keystone/common/controller.py#L111. 
Each controller method has a @protected() wrapper, so the method 
information is available through Python's f.__name__ instead of URL 
parsing. It means that some RBAC parts are scattered throughout the code anyway.


If we want to avoid RBAC being scattered throughout the code, we can use the 
URL parsing approach and have all the logic inside a hook. In a Pecan hook the 
WSGI environment is already created and there is full access to the request 
parameters/content. We can map the URL to a policy key.


So we have two options:
1. Add a wrapper to each API method, as the other projects did.
2. Add a hook that parses the URL and maps the path to a policy key (a rough
sketch of this option follows below).
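
A bare-bones sketch of option 2, purely for illustration (the policy_enforce()
helper and the rule naming scheme are assumptions for the example, not existing
Solum or Keystone code; only the Pecan hook interface itself is real):

# Illustrative sketch only: enforce policy in a Pecan hook by mapping the
# request path to a policy key before the controller runs.
import pecan
from pecan import hooks


def policy_enforce(rule, request):
    # Placeholder for a real check against the policy rules (e.g. a
    # policy.json lookup); here it simply allows everything.
    return True


class PolicyHook(hooks.PecanHook):
    # Reject requests that fail the policy check before the controller runs.

    def before(self, state):
        request = state.request
        # Map e.g. "POST /v1/plans" to a policy key like "plans:post".
        parts = [p for p in request.path.split('/') if p]
        resource = parts[-1] if parts else 'root'
        rule = '%s:%s' % (resource, request.method.lower())
        if not policy_enforce(rule, request):
            pecan.abort(403, 'Not authorized: %s' % rule)

The hook would then be registered when the Pecan app is created (e.g. via the
hooks argument to pecan.make_app), similar to how the Keystone context hook is
added in the Solum review linked later in this thread.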


Thanks
Georgy



On Wed, Jan 8, 2014 at 9:05 AM, Kurt Griffiths
<kurt.griffi...@rackspace.com> wrote:


Yeah, that could work. The main thing is to try and keep policy
control in one place if you can rather than sprinkling it all over
the place.

From: Georgy Okrokvertskhov <gokrokvertsk...@mirantis.com>
Reply-To: OpenStack Dev <openstack-dev@lists.openstack.org>
Date: Wednesday, January 8, 2014 at 10:41 AM
To: OpenStack Dev <openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [Solum][Pecan][Security] Pecan
SecureController vs. Nova policy

Hi Kurt,

As for WSGI middleware, I am thinking about Pecan hooks, which can be
added before the actual controller call. Here is an example of how we
added a hook for Keystone information collection:
https://review.openstack.org/#/c/64458/4/solum/api/auth.py

What do you think, will this approach with Pecan hooks work?

Thanks
Georgy


On Tue, Jan 7, 2014 at 2:25 PM, Kurt Griffiths
<kurt.griffi...@rackspace.com> wrote:

You might also consider doing this in WSGI middleware:

Pros:

  * Consolidates policy code in one place, making it easier to audit
    and maintain
  * Simple to turn policy on/off -- just don't insert the middleware
    when off!
  * Does not preclude the use of oslo.policy for rule checking
  * Blocks unauthorized requests before they have a chance to touch
    the web framework or app. This reduces your attack surface and can
    improve performance (since the web framework has yet to parse the
    request).

Cons:

  * Doesn't work for policies that require knowledge that isn't
    available this early in the pipeline (without having to duplicate
    a lot of code)
  * You have to parse the WSGI environ dict yourself (this may not be
    a big deal, depending on how much knowledge you need to glean in
    order to enforce the policy).
  * You have to keep your HTTP path matching in sync with your route
    definitions in the code. If you have full test coverage, you will
    know when you get out of sync. That being said, API routes tend to
    be quite stable in relation to other parts of the code
    implementation once you have settled on your API spec.

I'm sure there are other pros and cons I missed, but you can
make your own best judgement whether this option makes sense
in Solum's case.
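
For what it's worth, a bare-bones sketch of the middleware option (purely
illustrative; the check_policy callable is a stand-in for whatever rule
checking the service already uses, e.g. the oslo policy module):

# Illustrative sketch only: plain WSGI middleware that rejects
# unauthorized requests before they reach the web framework.
class PolicyMiddleware(object):

    def __init__(self, app, check_policy):
        self.app = app
        # check_policy(method, path, environ) -> bool, supplied by the
        # deployer; not defined here.
        self.check_policy = check_policy

    def __call__(self, environ, start_response):
        method = environ.get('REQUEST_METHOD', 'GET')
        path = environ.get('PATH_INFO', '/')
        if not self.check_policy(method, path, environ):
            start_response('403 Forbidden',
                           [('Content-Type', 'text/plain')])
            return [b'Forbidden']
        return self.app(environ, start_response)

Leaving the middleware out of the pipeline turns the policy layer off, which
is the on/off point from the pros list above.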

From: Doug Hellmann <doug.hellm...@dreamhost.com>
Reply-To: OpenStack Dev <openstack-dev@lists.openstack.org>
Date: Tuesday, January 7, 2014 at 6:54 AM
To: OpenStack Dev <openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [Solum][Pecan][Security] Pecan
SecureController vs. Nova policy




On Mon, Jan 6, 2014 at 6:26 PM, Georgy Okrokvertskhov
<gokrokvertsk...@mirantis.com> wrote:

Hi Doug,

Thank you for pointing to this code. As I see it, you use the
OpenStack policy framework but not the Pecan security
features. How do you implement fine-grained access control,
like users allowed to read only, writers and admins? Can
you block part of the API methods for a specific user, like
access to create methods for a specific user role?


The policy enforcement isn't simple on/off switching in
ceilometer, so we're using the policy framework calls in a
couple of places within our API code (look through v2.py for
examples). As a result, we didn't need to build much on top of
the existing policy module to interface with pecan.

   

Re: [openstack-dev] [Neutron][LBaaS] Weekly meeting Thursday 09.01.2014

2014-01-08 Thread Vijay Venkatachalam
Can you include the following in the agenda?

1. External/3rd party testing

2. Common code for collecting status/statistics


From: Eugene Nikanorov [mailto:enikano...@mirantis.com]
Sent: Wednesday, January 08, 2014 7:58 PM
To: OpenStack Development Mailing List
Subject: [openstack-dev] [Neutron][LBaaS] Weekly meeting Thursday 09.01.2014

Hi neutrons,

Let's continue our regular LBaaS meetings. Let's gather in
#openstack-meeting at 14:00 UTC this Thursday, 09.01.2014.

We'll discuss our progress and future plans.

Thanks,
Eugene.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer] Sharing the load test result

2014-01-08 Thread Deok-June Yi
Hi, guys.

> jay wrote:
> > So you are saying that the Synaps server is storing 14,400,000 samples
> > in memory (2 days of 5000 samples per minute)? Or are you saying that
> > Synaps is storing just the 5000 alarm records in memory and then
> > processing (determining if the alarm condition was met) the samples as
> > they pass through to a backend data store? I think it is the latter but
> > I just want to make sure :)
> 
> Swann wrote:
> > @jay : the first case seems to be impossible, no scalable .. I bet for 
> > the last :)

Jay and Swann, your guess is right.

Synaps holds samples in memory, rolled up at 1-minute resolution, in its 
per-stream sliding windows. The size of the sliding window is 5 minutes 
by default. This allows rolling samples up without a DB read operation.

So, if there was no alarm, Synaps would hold 25,000 samples (5 
minutes of 5,000 samples per minute) in memory.

When a stream has alarms, its sliding window grows according to the 
longest 'periods * evaluation periods + default window size' of its
alarms.

In the load test case, Synaps held 5,000 alarms and 70,000 samples 
(the most recent 14 minutes of 5,000 samples per minute) in memory as 
they passed through to a backend data store, because the alarms had 
3-minute periods, 3 evaluation periods, and the default window size is 
5 minutes (3 * 3 + 5 = 14).
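
As a quick sanity check of those numbers, the window arithmetic looks like
this (a purely illustrative calculation using the figures quoted above):

# Purely illustrative: reproduce the in-memory sample count quoted above.
samples_per_minute = 5000
default_window_min = 5      # default sliding window size, in minutes
period_min = 3              # alarm period, in minutes
evaluation_periods = 3

# The window grows to periods * evaluation periods + default window size.
window_min = period_min * evaluation_periods + default_window_min  # 14

samples_in_memory = window_min * samples_per_minute                # 70,000
print(window_min, samples_in_memory)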

Swann wrote:
> The Ceilo team will work on the improvements IIUC.
> I found two relevant links [1] [2]
> [1] https://wiki.openstack.org/wiki/Ceilometer/AlarmImprovements
> [2] 
> https://etherpad.openstack.org/p/icehouse-summit-ceilometer-future-of-alarming

Thank you for the useful links. But I just want to point out that Synaps 
has already implemented some important things in the blueprint.

Swann wrote:
> @June Yi
> I am curious to know how have you generate load to Ceilometer with 
> Ganglia ?
> 
> what was the system usage of your servers during the 2 tests  ? cpu,
> mem, io..

Ganglia was just for collecting performance data. I used my own load 
generator script. Here I attach the performance data collected by Ganglia. 
Please keep in mind that the evaluation throughput of Ceilometer was lower 
than that of Synaps.

> what are response time for alarm evaluations for Ceilometer, 50 seconds 
> in mean  ?

The mean (or average) is important, but from the perspective of real-time 
constraints, I think predictability is also important. I think there are too 
many variable factors in alarm evaluation in current Ceilometer to adopt it 
as a solution for 'monitoring as a service'.

Best regards,
June Yi



loadtest_result.png
Description: Binary data
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Discuss the option delete_on_termination

2014-01-08 Thread Christopher Yeoh
On Thu, Jan 9, 2014 at 9:25 AM, 黎林果  wrote:

> Hi All,
>
> When attaching a volume while creating a server, the API request
> contains 'block_device_mapping', such as:
> "block_device_mapping": [
> {
> "volume_id": "",
> "device_name": "/dev/vdc",
> "delete_on_termination": "true"
> }
> ]
>
> It allows the option 'delete_on_termination', but in the code it's
> hardcoded to True. Why?
>

I don't think it does hardcode it to true. The API appears to be passing it
down correctly. I can see one case of delete_on_termination being set to
true, though that's for ephemeral or swap volumes.


> Another situation: when attaching a volume to an existing server, there
> is no 'delete_on_termination' option.
>
>   Should we add 'delete_on_termination' when attaching a volume to an
> existing server, or modify the value from the params?
>
>
I think adding a delete_on_termination option when attaching a volume to
an existing server is reasonable. Perhaps add it to the v3 API?

Regards,

Chris
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][qa] Intermittent failure of tempest test test_network_basic_ops

2014-01-08 Thread Sukhdev Kapur
Dear fellow developers,

I am running a few Neutron tempest tests and noticing an intermittent failure
of tempest.scenario.test_network_basic_ops.

I ran this test 50+ times and am getting intermittent failures. The pass
rate is approx. 70%. The other 30% of the time it fails, mostly in
_check_public_network_connectivity.

Has anybody seen this?
If there is a fix or work around for this, please share your wisdom.

Thanks
-Sukhdev



Here is the Traceback:

Traceback (most recent call last):
  File "tempest/scenario/test_network_basic_ops.py", line 300, in
test_network_basic_ops
self._check_public_network_connectivity(should_connect=True)
  File "tempest/scenario/test_network_basic_ops.py", line 269, in
_check_public_network_connectivity
raise exc
AssertionError: Timed out waiting for 172.24.4.5 to become reachable
==
FAIL: process-returncode
tags: worker-0
--
Binary content:
  traceback (test/plain; charset="utf8")
Ran 2 tests in 185.446s (-142.730s)
FAILED (id=38, failures=2)


Full Log of tempest.log is here :

http://paste.openstack.org/show/60847/
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [State-Management] Agenda for tomorrow's meeting at 2000 UTC

2014-01-08 Thread Joshua Harlow
Hi all,


The [state-management] project team holds a weekly meeting in 
#openstack-meeting on Thursdays at 2000 UTC. The next meeting is tomorrow, 
2014-01-09!!!


As usual, everyone is welcome :-)


Link: https://wiki.openstack.org/wiki/Meetings/StateManagement

Taskflow: https://wiki.openstack.org/TaskFlow


## Agenda (30-60 mins):


- Discuss any action items from last meeting.

- Discuss 0.1.2 release and reviews (and sqlalchemy issue/adjustments/testing).

- Discuss 0.2.0 release and timeline and reviews.

- Discuss joining oslo? (yah, nah?).

- Discuss integration progress, help needed, other...

- Discuss ongoing checkpointing, and where checkpoints should live (flow, 
engine, elsewhere?).

- Discuss scoping review/idea.

- Discuss about any other potential new use-cases for said library.

- Discuss about any other ideas, questions and answers (and more!).


Any other topics are welcome :-)


See you all soon!


--


Joshua Harlow


It's openstack, relax... | harlo...@yahoo-inc.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] Sofware Config progress

2014-01-08 Thread Prasad Vellanki
Clint & Steve
One scenario we are trying to explore is whether and how Heat software-config
enables deployment of images available from third parties as virtual
appliances, providing network, security or acceleration capabilities. The
vendor in some cases might not allow rebuilding and/or may not have the
cloud-init capability. Sometimes changes to the image could run into issues
with licensing. Bootstrapping in such situations is generally done via REST
API or SSH once the appliance boots up, after which one can configure it
further.

We are looking at how to automate deployment of such service functions
using the new configuration and deployment model in Heat, which we really like.

One option is for software-config to provide a way in Heat to trigger
bootstrapping from outside the instance rather than inside (as done by
cloud-init), bootstrapping appliances over SSH and/or REST.

Another option is an external agent that recognizes this kind of service
coming up and then informs Heat to move to the next state to configure
the deployed resource. This is more like a proxy model.

thanks
prasadv



On Tue, Jan 7, 2014 at 11:40 AM, Clint Byrum  wrote:

> I'd say it isn't so much cloud-init that you need, but "some kind
> of bootstrapper". The point of hot-software-config is to help with
> in-instance orchestration. That's not going to happen without some way
> to push the desired configuration into the instance.
>
> Excerpts from Susaant Kondapaneni's message of 2014-01-07 11:16:16 -0800:
> > We work with images provided by vendors over which we do not always have
> > control. So we are considering the cases where vendor image does not come
> > installed with cloud-init. Is there a way to support heat software config
> > in such scenarios?
> >
> > Thanks
> > Susaant
> >
> > On Mon, Jan 6, 2014 at 4:47 PM, Steve Baker  wrote:
> >
> > >  On 07/01/14 06:25, Susaant Kondapaneni wrote:
> > >
> > >  Hi Steve,
> > >
> > >  I am trying to understand the software config implementation. Can you
> > > clarify the following:
> > >
> > >  i. To use Software config and deploy in a template, instance resource
> > > MUST always be accompanied by user_data. User_data should specify how
> to
> > > bootstrap CM tool and signal it. Is that correct?
> > >
> > >   Yes, currently the user_data contains cfn-init formatted metadata
> which
> > > tells os-collect-config how to poll for config changes. What happens
> when
> > > new config is fetched depends on the os-apply-config templates and
> > > os-refresh-config scripts which are already on that image (or set up
> with
> > > cloud-init).
> > >
> > >  ii. Supposing we were to use images which do not have cloud-init
> > > packaged in them, (and a custom CM tool that won't require
> bootstrapping on
> > > the instance itself), can we still use software config and deploy
> resources
> > > to deploy software on such instances?
> > >
> > >   Currently os-collect-config is more of a requirement than cloud-init,
> > > but as Clint said cloud-init does a good job of boot config so you'll
> need
> > > to elaborate on why you don't want to use it.
> > >
> > >  iii. If ii. were possible who would signal the deployment resource to
> > > indicate that the instance is ready for the deployment?
> > >
> > > os-collect-config polls for the deployment data, and triggers the
> > > resulting deployment/config changes. One day this may be performed by a
> > > different agent like the unified agent that has been discussed.
> Currently
> > > os-collect-collect polls via a heat-api-cfn metadata call. This too
> may be
> > > done in any number of ways in the future such as messaging or
> long-polling.
> > >
> > > So you *could* consume the supplied user_data to know what to poll for
> > > subsequent config changes without cloud-init or os-collect-config, but
> you
> > > would have to describe what you're doing in detail for us to know if
> that
> > > sounds like a good idea.
> > >
> > >
> > >
> > >  Thanks
> > > Susaant
> > >
> > >
> > > On Fri, Dec 13, 2013 at 3:46 PM, Steve Baker 
> wrote:
> > >
> > >>  I've been working on a POC in heat for resources which perform
> software
> > >> configuration, with the aim of implementing this spec
> > >>
> https://wiki.openstack.org/wiki/Heat/Blueprints/hot-software-config-spec
> > >>
> > >> The code to date is here:
> > >> https://review.openstack.org/#/q/topic:bp/hot-software-config,n,z
> > >>
> > >> What would be helpful now is reviews which give the architectural
> > >> approach enough of a blessing to justify fleshing this POC out into a
> ready
> > >> to merge changeset.
> > >>
> > >> Currently it is possible to:
> > >> - create templates containing OS::Heat::SoftwareConfig and
> > >> OS::Heat::SoftwareDeployment resources
> > >> - deploy configs to OS::Nova::Server, where the deployment resource
> > >> remains in an IN_PROGRESS state until it is signalled with the output
> values
> > >> - write configs which execute shell scripts and report

[openstack-dev] Discuss the option delete_on_termination

2014-01-08 Thread 黎林果
Hi All,

   When attaching a volume while creating a server, the API request contains
'block_device_mapping', such as:
"block_device_mapping": [
{
"volume_id": "",
"device_name": "/dev/vdc",
"delete_on_termination": "true"
}
]

It allows the option 'delete_on_termination', but in the code it's
hardcoded to True. Why?

Another situation: when attaching a volume to an existing server, there is
no 'delete_on_termination' option.

  Should we add 'delete_on_termination' when attaching a volume to an
existing server, or modify the value from the params?


  See also:
https://blueprints.launchpad.net/nova/+spec/add-delete-on-termination-option


Best regards!

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [oslo.config] Centralized config management

2014-01-08 Thread Nachi Ueno
Hi folks

OpenStack processes tend to have many config options, and many hosts.
It is a pain to manage these tons of config options.
Centralizing this management helps operations.

We can use Chef- or Puppet-like tools; however,
sometimes each process depends on another process's configuration.
For example, nova depends on neutron configuration, etc.

My idea is to have a config server in oslo.config, and let cfg.CONF get
its config from the server (a rough sketch is shown below).
This approach has several benefits:

- We can get centralized management without modifications to each
project (nova, neutron, etc.)
- We can provide a Horizon panel for configuration
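
The client side could look roughly like this; a hand-wavy sketch only (the
config service, its endpoint and the JSON payload format are hypothetical,
while CONF.set_override() is an existing oslo.config call):

# Hand-wavy sketch: fetch overrides from a (hypothetical) central config
# service and apply them on top of the locally loaded configuration.
# Each option must already be registered for set_override() to accept it.
import json
import urllib2

from oslo.config import cfg

CONF = cfg.CONF


def apply_central_overrides(endpoint):
    # Hypothetical payload:
    #   {"DEFAULT": {"debug": true}, "database": {"connection": "..."}}
    data = json.load(urllib2.urlopen(endpoint))
    for group, options in data.items():
        for name, value in options.items():
            group_name = None if group == 'DEFAULT' else group
            CONF.set_override(name, value, group=group_name)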

This is bp for this proposal.
https://blueprints.launchpad.net/oslo/+spec/oslo-config-centralized

I would very much appreciate any comments on this.

Best
Nachi

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Vmware]Bad Performance when creating a new VM

2014-01-08 Thread Ray Sun
Rick,
Thanks for your response.

I made another test uploading my ISO to my ESXi host directly; the speed
is much faster now, avg more than 40 MB/s. By the way, 200.21.0.99 is my
vCenter server.

I will keep you updated if I find something new. Thanks a lot.

Best Regards
-- Ray


On Thu, Jan 9, 2014 at 1:09 AM, Rick Jones  wrote:

> On 01/07/2014 06:30 PM, Ray Sun wrote:
>
>> Stackers,
>> I tried to create a new VM using the VMwareVCDriver driver, but I found
>> it's very slow when I try to create a new VM; for example, a 7GB Windows
>> image took 3 hours.
>>
>> Then I tried to use curl to upload a iso to vcenter directly.
>>
>> curl -H "Expect:" -v --insecure --upload-file
>> windows2012_server_cn_x64.iso
>> "https://administrator:root123.@200.21.0.99/folder/
>> iso/windows2012_server_cn_x64.iso?dcPath=dataCenter&dsName=datastore2"
>>
>> The average speed is 0.8 MB/s.
>>
>> Finally, I tried to use vSpere web client to upload it, it's only 250
>> KB/s.
>>
>> I am not sure if there any special configurations for web interface for
>> vcenter. Please help.
>>
>
> I'm not fully versed in the plumbing, but while you are pushing via curl
> to 200.21.0.99 you might check the netstat statistics at the sending side,
> say once a minute, and see what the TCP retransmission rate happens to be.
>  If 200.21.0.99 has to push the bits to somewhere else you should follow
> that trail back to the point of origin, checking statistics on each node as
> you go.
>
> You could, additionally, try running the likes of netperf (or iperf, but I
> have a natural inclination to suggest netperf...) between the same pairs of
> systems.  If netperf gets significantly better performance then you
> (probably) have an issue at the application layer rather than in the
> networking.
>
> Depending on how things go with those, it may be desirable to get a packet
> trace of the upload via the likes of tcpdump.  It will be very much
> desirable to start the packet trace before the upload so you can capture
> the TCP connection establishment packets (aka the TCP SYNchronize segments)
> as those contain some important pieces of information about the
> capabilities of the connection.
>
> rick jones
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer] Add a filter between auth_token and v2

2014-01-08 Thread Georgy Okrokvertskhov
Hi,


Here is how we are doing this for Solum:
Keystone auth:
https://github.com/stackforge/solum/blob/master/solum/api/auth.py
Additional Hook: https://review.openstack.org/#/c/64458/ (auth.py for hook
code and config.py for hooks)

Here is an e-mail thread with discussion:
http://lists.openstack.org/pipermail/openstack-dev/2014-January/023524.html

Hope this will help,
Georgy



On Wed, Jan 8, 2014 at 3:02 PM, Pendergrass, Eric
wrote:

> I need to add an additional layer of authorization between auth_token and
> the reporting API.
>
>
>
> I know it’s as simple as creating a WSGI element and adding it to the
> pipeline.  Examining the code I haven’t figured out where to begin doing
> this.
>
>
>
> I’m not using Apache and mod_wsgi, just the reporting API and Pecan.
>
>
>
> Any pointers on where to start and what files control the pipeline would
> be a big help.
>
>
>
> Thanks
>
> Eric
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack][Nova][cold migration] Why we need confirm resize after cold migration

2014-01-08 Thread Jay Lau
2014/1/9 Russell Bryant 

> On 01/08/2014 09:53 AM, John Garbutt wrote:
> > On 8 January 2014 10:02, David Xie  wrote:
> >> In nova/compute/api.py#2289, function resize, there's a parameter named
> >> flavor_id, if it is None, it is considered as cold migration. Thus, nova
> >> should skip resize verifying. However, it doesn't.
> >>
> >> Like Jay said, we should skip this step during cold migration, does it
> make
> >> sense?
> >
> > Not sure.
> >
> >> On Wed, Jan 8, 2014 at 5:52 PM, Jay Lau  wrote:
> >>>
> >>> Greetings,
> >>>
> >>> I have a question related to cold migration.
> >>>
> >>> Now in OpenStack nova, we support live migration, cold migration and
> >>> resize.
> >>>
> >>> For live migration, we do not need to confirm after live migration
> >>> finished.
> >>>
> >>> For resize, we need to confirm, as we want to give end user an
> opportunity
> >>> to rollback.
> >>>
> >>> The problem is cold migration: because cold migration and resize share
> >>> the same code path, once I submit a cold migration request and the cold
> >>> migration finishes, the VM goes to the verify_resize state, and I need
> >>> to confirm the resize. I felt a bit confused by this: why do I need to
> >>> verify resize for a cold migration operation? Why not reset the VM to
> >>> its original state directly after cold migration?
> >
> > I think the idea was to allow users/admins to check everything went OK,
> > and only delete the original VM when they have confirmed the move went
> > OK.
> >
> > I thought there was an auto_confirm setting. Maybe you want
> > auto_confirm cold migrate, but not auto_confirm resize?
>
> I suppose we could add an API parameter to auto-confirm these things.
> That's probably a good compromise.
>
OK, will use auto-confirm to handle this.


>
> >>> Also, I think that probably we need to split compute.api.resize() into
> >>> two APIs: one for resize and the other for cold migration.
> >>>
> >>> 1) The VM state can be either ACTIVE and STOPPED for a resize operation
> >>> 2) The VM state must be STOPPED for a cold migrate operation.
> >
> > We just stop the VM then perform the migration.
> > I don't think we need to require it be stopped first.
> > Am I missing something?
>
> Don't think so ... I think we should leave it as is.
>
OK, will leave this as it is for now.

>
> --
> Russell Bryant
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Ceilometer] Add a filter between auth_token and v2

2014-01-08 Thread Pendergrass, Eric
I need to add an additional layer of authorization between auth_token and
the reporting API.  

 

I know it's as simple as creating a WSGI element and adding it to the
pipeline.  Examining the code I haven't figured out where to begin doing
this.

 

I'm not using Apache and mod_wsgi, just the reporting API and Pecan.

 

Any pointers on where to start and what files control the pipeline would be
a big help.

 

Thanks

Eric



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova][neutron] top gate bugs: a plea for help

2014-01-08 Thread Joe Gordon
Hi All,

As you know the gate has been in particularly bad shape (gate queue over
100!) this week due to a number of factors. One factor is how many major
outstanding bugs we have in the gate.  Below is a list of the top 4 open
gate bugs.

Here are some fun facts about this list:
* All bugs have been open for over a month
* All are nova bugs
* These 4 bugs alone were hit 588 times which averages to 42 hits per day
(data is over two weeks)!

If we want the gate queue to drop and not have to continuously run 'recheck
bug x' we need to fix these bugs.  So I'm looking for volunteers to help
debug and fix these bugs.


best,
Joe

Bug: https://bugs.launchpad.net/bugs/1253896 => message:"SSHTimeout:
Connection to the" AND message:"via SSH timed out." AND
filename:"console.html"
Filed: 2013-11-21
Title: Attempts to verify guests are running via SSH fails. SSH connection
to guest does not work.
Project: Status
  neutron: In Progress
  nova: Triaged
  tempest: Confirmed
Hits
  FAILURE: 243
Percentage of Gate Queue Job failures triggered by this bug
  gate-tempest-dsvm-postgres-full: 0.35%
  gate-grenade-dsvm: 0.68%
  gate-tempest-dsvm-neutron: 0.39%
  gate-tempest-dsvm-neutron-isolated: 4.76%
  gate-tempest-dsvm-full: 0.19%

Bug: https://bugs.launchpad.net/bugs/1254890
Fingerprint: message:"Details: Timed out waiting for thing" AND message:"to
become" AND  (message:"ACTIVE" OR message:"in-use" OR message:"available")
Filed: 2013-11-25
Title: "Timed out waiting for thing" causes tempest-dsvm-neutron-* failures
Project: Status
  neutron: Invalid
  nova: Triaged
  tempest: Confirmed
Hits
  FAILURE: 173
Percentage of Gate Queue Job failures triggered by this bug
  gate-tempest-dsvm-neutron-isolated: 4.76%
  gate-tempest-dsvm-postgres-full: 0.35%
  gate-tempest-dsvm-large-ops: 0.68%
  gate-tempest-dsvm-neutron-large-ops: 0.70%
  gate-tempest-dsvm-full: 0.19%
  gate-tempest-dsvm-neutron-pg: 3.57%

Bug: https://bugs.launchpad.net/bugs/1257626
Fingerprint: message:"nova.compute.manager Timeout: Timeout while waiting
on RPC response - topic: \"network\", RPC method:
\"allocate_for_instance\"" AND filename:"logs/screen-n-cpu.txt"
Filed: 2013-12-04
Title: Timeout while waiting on RPC response - topic: "network", RPC
method: "allocate_for_instance" info: ""
Project: Status
  nova: Triaged
Hits
  FAILURE: 118
Percentage of Gate Queue Job failures triggered by this bug
  gate-tempest-dsvm-large-ops: 0.68%

Bug: https://bugs.launchpad.net/bugs/1254872
Fingerprint: message:"libvirtError: Timed out during operation: cannot
acquire state change lock" AND filename:"logs/screen-n-cpu.txt"
Filed: 2013-11-25
Title: libvirtError: Timed out during operation: cannot acquire state
change lock
Project: Status
  nova: Triaged
Hits
  FAILURE: 54
  SUCCESS: 3
Percentage of Gate Queue Job failures triggered by this bug
  gate-tempest-dsvm-postgres-full: 0.35%
  gate-tempest-dsvm-full: 0.19%


Generated with: elastic-recheck-success
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] new (docs) requirement for third party CI

2014-01-08 Thread Matt Riedemann



On 1/8/2014 12:40 PM, Joe Gordon wrote:


On Jan 8, 2014 7:12 AM, "Matt Riedemann" <mrie...@linux.vnet.ibm.com> wrote:
 >
 > I'd like to propose that we add another item to the list here [1]
that is basically related to what happens when the 3rd party CI job
votes a -1 on your patch.  This would include:
 >
 > 1. Documentation on how to analyze the results and a good overview of
what the job does (like the docs we have for check/gate testing now).
 > 2. How to recheck the specific job if needed, i.e. 'recheck migrations'.
 > 3. Who to contact if you can't figure out what's going on with the job.
 >
 > Ideally this information would be in the comments when the job scores
a -1 on your patch, or at least it would leave a comment with a link to
a wiki for that job like we have with Jenkins today.
 >
 > I'm all for more test coverage but we need some solid documentation
around that when it's not owned by the community so we know what to do
with the results if they seem like false negatives.
 >
 > If no one is against this or has something to add, I'll update the wiki.

-1 to putting this in the wiki. This isn't a nova only issue. We are
trying to collect the requirements here:

https://review.openstack.org/#/c/63478/


Cool, didn't know about that, thanks.  Good discussion going on in 
there, I left my thoughts as well. :)




 >
 > [1]
https://wiki.openstack.org/wiki/HypervisorSupportMatrix/DeprecationPlan#Specific_Requirements
 >
 > --
 >
 > Thanks,
 >
 > Matt Riedemann
 >
 >
 > ___
 > OpenStack-dev mailing list
 > OpenStack-dev@lists.openstack.org

 > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--

Thanks,

Matt Riedemann


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova][vmware] VMwareAPI sub-team status update 2014-01-08

2014-01-08 Thread Shawn Hartsock
Greetings Stackers!

The VMwareAPI subteam had a two week break from meetings. So happy new
year to all! I hope everyone had a nice break. The Icehouse-2
milestone is coming up January 23rd! That means if you have a patch in
flight right now we need to get you ready for core reviewers in the
next 2 weeks, so if you have feedback on a patch you've posted, try to
get right back to it. If you have an open patch or blueprint,
*please* review at least *two* other blueprints besides your own!

Our icehouse-2 list turns out to be rather ambitious. Let's stay on
top of these.

== Blueprint priorities ==

Icehouse-2
Nova
* https://blueprints.launchpad.net/nova/+spec/vmware-image-cache-management
* https://blueprints.launchpad.net/nova/+spec/vmware-vsan-support
* https://blueprints.launchpad.net/nova/+spec/autowsdl-repair
* https://blueprints.launchpad.net/nova/+spec/improve-vmware-disk-usage
* https://blueprints.launchpad.net/nova/+spec/vmware-iso-boot
* https://blueprints.launchpad.net/nova/+spec/vmware-hot-plug

Glance
* https://blueprints.launchpad.net/glance/+spec/vmware-datastore-storage-backend

Cinder
* https://blueprints.launchpad.net/cinder/+spec/vmdk-storage-policy-volume-type


== Bugs ==

Ordered by bug priority:

* High/Critical, needs review : 'vmware driver does not work with more
than one datacenter in vC'
https://review.openstack.org/62587

* High/Critical, needs review : 'VMware: unnecesary session termination'
https://review.openstack.org/64598

* High/Critical, needs review : 'nova failures when vCenter has
multiple datacenters'
https://review.openstack.org/62587

* High/High, needs review : 'VMware: spawning large amounts of VMs
concurrently sometimes causes "VMDK lock" error'
https://review.openstack.org/63933

* High/High, needs review : 'VMWare: AssertionError: Trying to
re-send() an already-triggered event.'
https://review.openstack.org/54808

* High/High, needs review : 'VMware: timeouts due to nova-compute
stuck at 100% when using deploying 100 VMs'
https://review.openstack.org/60259

* High/High, needs review : 'VMware: possible collision of VNC ports'
https://review.openstack.org/58994

* Medium/High, ready for core : 'VMware: instance names can be edited,
breaks nova-driver lookup'
https://review.openstack.org/59571


== Reviews! ==

Ordered by fitness for review:

== needs one more +2/approval ==

* https://review.openstack.org/53990
title: 'VMware ESX: Boot from volume must not relocate vol'
votes: +2:1, +1:4, -1:0, -2:0. +74 days in progress, revision: 5 is 37 days old


== ready for core ==

* https://review.openstack.org/59571
title: 'VMware: fix instance lookup against vSphere'
votes: +2:0, +1:5, -1:0, -2:0. +37 days in progress, revision: 12 is 6 days old

* https://review.openstack.org/49692
title: 'VMware: iscsi target discovery fails while attaching volumes'
votes: +2:0, +1:5, -1:0, -2:0. +96 days in progress, revision: 13 is 13 days old

* https://review.openstack.org/57519
title: 'VMware: use .get() to access 'summary.accessible''
votes: +2:0, +1:6, -1:0, -2:0. +49 days in progress, revision: 1 is 44 days old

* https://review.openstack.org/57376
title: 'VMware: delete vm snapshot after nova snapshot'
votes: +2:0, +1:6, -1:0, -2:0. +49 days in progress, revision: 4 is 44 days old

* https://review.openstack.org/55070
title: 'VMware: fix rescue with disks are not hot-addable'
votes: +2:0, +1:6, -1:0, -2:0. +66 days in progress, revision: 3 is 27 days old

* https://review.openstack.org/55038
title: 'VMware: bug fix for VM rescue when config drive is config...'
votes: +2:0, +1:5, -1:0, -2:0. +67 days in progress, revision: 5 is 27 days old


[ omitted ... bunch-o-reviews needing vmware people attention ... ]

As an experiment, here's a full listing... for those who care to see it:
https://etherpad.openstack.org/p/vmwareapi-subteam-reviews
... this might also afford people the ability to commentate in interesting ways.

BTW we collaborate as a team on our blueprint priority orders & bug
priorities here:
https://etherpad.openstack.org/p/vmware-subteam-icehouse-2

== Meeting info: ==
* https://wiki.openstack.org/wiki/Meetings/VMwareAPI
** discussion is always: Blueprints then Bugs that need attention.

Happy stacking!

# Shawn.Hartsock

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Multiple config files for neutron server

2014-01-08 Thread Dan Prince


- Original Message -
> From: "Jay Pipes" 
> To: openstack-dev@lists.openstack.org
> Sent: Wednesday, January 8, 2014 2:29:22 PM
> Subject: Re: [openstack-dev] [Neutron] Multiple config files for neutron 
> server
> 
> On Wed, 2014-01-08 at 07:21 -0500, Sean Dague wrote:
> > On 01/06/2014 02:58 PM, Jay Pipes wrote:
> > > On Mon, 2014-01-06 at 23:45 +0400, Eugene Nikanorov wrote:
> > >> Hi folks,
> > >>
> > >>
> > >> Recently we had a discussion with Sean Dague on the matter.
> > >> Currently Neutron server has a number of configuration files used for
> > >> different purposes:
> > >>  - neutron.conf - main configuration parameters, plugins, db and mq
> > >> connections
> > >>  - plugin.ini - plugin-specific networking settings
> > >>  - conf files for ml2 mechanisms drivers (AFAIK to be able to use
> > >> several mechanism drivers we need to pass all of these conf files to
> > >> neutron server)
> > >>  - services.conf - recently introduced conf-file to gather
> > >> vendor-specific parameters for advanced services drivers.
> > >> Particularly, services.conf was introduced to avoid polluting
> > >> 'generic' neutron.conf with vendor parameters and sections.
> > >>
> > >>
> > >> The discussion with Sean was about whether to add services.conf to
> > >> neutron-server launching command in devstack
> > >> (https://review.openstack.org/#/c/64377/ ). services.conf would be 3rd
> > >> config file that is passed to neutron-server along with neutron.conf
> > >> and plugin.ini.
> > >>
> > >>
> > >> Sean has an argument that providing many conf files in a command line
> > >> is not a good practice, suggesting setting up configuration directory
> > >> instead. There is no such capability in neutron right now so I'd like
> > >> to hear opinions on this before putting more efforts in resolving this
> > >> in with other approach than used in the patch on review.
> > > 
> > > I'd say just put the additional conf file on the command line for now.
> > > Adding in support to oslo.cfg for a config directory can come later.
> > > 
> > > Just my 2 cents,
> > 
> > So the net of that is that in a production environment, in order to
> > change some services, you'd be expected to change the init scripts to
> > list the right config files.
> 
> Good point.
> 
> > That seems *really* weird, and also really different from the rest of
> > OpenStack services. It also means you can't use the oslo config
> > generator to generate documented samples.
> > 
> > If neutron had been running a grenade job, it would have blocked this
> > attempted change, because it would require adding config files between
> > releases.
> > 
> > So this all smells pretty bad to me. Especially in the context of
> > migration paths from nova (which handles this very differently) => neutron.
> 
> So, I was under the impression that the Neutron changes to require a
> services.conf had *already* been merged into master, and therefore the
> problem domain here was not whether the services.conf addition was the
> right approach, but rather *how to deal with it in devstack*, and that's
> why I wrote to just add it to the command line in the devstack builder.
> 
> A better (upstream in Neutron) solution would have been to use something
> like an include.d/ directive in the nova.conf. But I thought that we
> were past the implementation point in Neutron?

Doesn't neutron already support what we need here:

./neutron-server --help | grep config-dir
usage: neutron-server [-h] [--config-dir DIR] [--config-file PATH] [--debug]
  --config-dir DIR  Path to a config directory to pull *.conf files from.

It would seem that with proper organization devstack could take advantage of 
this already, no?

> 
> Best,
> -jay
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] Windows Support

2014-01-08 Thread Peter Pouliot
Currently, I know Alessandro Pilotti has done work and has Heat templates for 
Windows instances, including deploying AD nodes, Exchange and SharePoint.

P

Sent from my Verizon Wireless 4G LTE Smartphone


 Original message 
From: "Chan, Winson C"
Date:01/08/2014 3:47 PM (GMT-05:00)
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [Heat] Windows Support

Does anybody know if this blueprint is being actively worked on?  
https://blueprints.launchpad.net/heat/+spec/windows-instances  If this is not 
active, can I take ownership of this blueprint?  My team wants to add support 
for Windows in Heat for our internal deployment.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Installing from packages in tripleo-image-elements

2014-01-08 Thread Clint Byrum
We're in agreement. What little entry there might be in a system of such
a small size would be entirely manageable by a single administrator...

I care about that deployment, deeply, as that is how things like OpenStack
take root in IT departments.. with somebody playing around. However, what
I care more about is that when that deployment goes from POC to reality,
it can scale up to tens of admins and thousands of machines. If it cannot,
if the user finds themselves doing things manually and handling problems
by poking packages out to small classes of machines, then we have failed
and OpenStack will be very costly for any org to scale out.

Excerpts from Fox, Kevin M's message of 2014-01-08 09:22:15 -0800:
> Let me give you a more concrete example, since you still think one size fits 
> all here.
> 
> I am using OpenStack on my home server now. In the past, I had one machine 
> with lots of services on it. At times, I would update one service and during 
> the update process, a different service would break.
> 
> Last round of hardware purchasing got me an 8 core desktop processor with 16 
> gigs of ram. Enough to give every service I have its own processor and 2 gigs 
> of ram. So, I decided to run OpenStack on the server to manage the service 
> vm's.
> 
> The base server  shares out my shared data with nfs, the vm's then re-export 
> it in various ways like samba, dlna to my ps3, etc.
> 
> Now, I could create a golden image for each service type with everything all 
> setup and good to go. And infrastructure to constantly build updated ones.
> 
> But in this case, grabbing Fedora cloud image or Ubuntu cloud image, and 
> starting up the service with heat and a couple of line cloud init telling it 
> to install just the package for the one service I need saves a ton of effort 
> and space. The complexity is totally on the distro folks and not me. Very 
> simple to maintain.
> 
> I can get almost the stability of the golden image simply by pausing the 
> working service vm, spawning a new one, and only if its sane, switch to it 
> and delete the old. In fact, Heat is working towards (if not already done) 
> having Heat itself do this process for you.
> 
> I'm all for golden images as a tool. We use them a lot. Like all tools 
> though, there isn't one "works for all cases best" tool.
> 
> I hope this use case helps.
> 
> Thanks,
> Kevin

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [savanna] team meeting Jan 9 1800 UTC

2014-01-08 Thread Sergey Lukjanov
Hi folks,

We'll be having the Savanna team meeting as usual in #openstack-meeting-alt
channel.

Agenda:
https://wiki.openstack.org/wiki/Meetings/SavannaAgenda#Agenda_for_January.2C_9

http://www.timeanddate.com/worldclock/fixedtime.html?msg=Savanna+Meeting&iso=20140109T18

-- 
Sincerely yours,
Sergey Lukjanov
Savanna Technical Lead
Mirantis Inc.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer] Dynamic Meters in Ceilometer

2014-01-08 Thread Ildikó Váncsa
Hi Doug,
OK, so like I said, we did not design the system with the idea that a user of 
the cloud (rather than the deployer of the cloud) would have any control over 
what data was collected. They can ask questions about only some of the data, 
but they can't tell ceilometer what to collect.

There's a certain amount of danger in giving the cloud user (no matter their 
role) an "off switch" for the data collection. As Julien pointed out, it can 
have a negative effect on billing -- if they tell the cloud not to collect data 
about what instances are created, then the deployer can't bill for those 
instances. Differentiating between the values that always must be collected and 
the ones the user can control makes providing an API to manage data collection 
more complex.

Is there some underlying use case behind all of this that someone could 
describe in more detail, so we might be able to find an alternative, or explain 
how to use the existing features to achieve the goal? For example, it is 
already possible to change the pipeline config file to control which data is 
collected and stored. If we make the pipeline code in ceilometer watch for 
changes to that file, and rebuild the pipelines when the config is updated, 
would that satisfy the requirements?

ildikov: Thanks for the clarification. The basic idea was to provide the 
possibility of different data collection configurations per project. Regarding 
the dynamic meter configuration and the possible API-based solution, it seemed 
possible to offer this configuration option to the users of the cloud as well. 
At that point I had not considered the billing aspect, which would be affected 
by this extra option, as you mentioned above, so it was definitely a wrong 
direction. Finally we reached a consensus with Julien on making the required 
changes in pipeline.yaml and the related codebase.

ildikov: Thanks to both of you for the effort in clarifying this.

Best Regards,
Ildiko
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Heat] Windows Support

2014-01-08 Thread Chan, Winson C
Does anybody know if this blueprint is being actively worked on?  
https://blueprints.launchpad.net/heat/+spec/windows-instances  If this is not 
active, can I take ownership of this blueprint?  My team wants to add support 
for Windows in Heat for our internal deployment.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer] Dynamic Meters in Ceilometer

2014-01-08 Thread Doug Hellmann
On Wed, Jan 8, 2014 at 3:09 PM, Kodam, Vijayakumar (EXT-Tata Consultancy
Ser - FI/Espoo)  wrote:

>>
>
>  
> >
>  >From: ext Doug Hellmann [doug.hellm...@dreamhost.com]
>  >Sent: Wednesday, January 08, 2014 8:26 PM
>
>  >To: OpenStack Development Mailing List (not for usage questions)
>  >Subject: Re: [openstack-dev] [Ceilometer] Dynamic Meters in Ceilometer
>  >
>  >
>  >On Wed, Jan 8, 2014 at 12:35 PM, Ildikó Váncsa <
> ildiko.van...@ericsson.com> wrote:
>  >
>  >Hi Doug,
>  >
>  >Answers inline again.
>  >
>  >Best Regards,
>  >
>  >Ildiko
>  >
>  >
>  >On Wed, Jan 8, 2014 at 3:16 AM, Ildikó Váncsa <
> ildiko.van...@ericsson.com> wrote:
>  >
>  >Hi,
>  >
>  >I've started to work on the idea of supporting a kind of tenant/project
>  > based configuration for Ceilometer. Unfortunately I haven't reached
>  > the point of having a blueprint that could be registered until now.
>  > I do not have a deep knowledge about the collector and compute agent
>  > services, but this feature would require some deep changes for sure.
>  > Currently there are pipelines for data collection and transformation,
>  > where the counters can be specified, about which data should be
>  > collected and also the time interval for data collection and so on.
>  > These pipelines can be configured now globally in the pipeline.yaml
> file,
>  > which is stored right next to the Ceilometer configuration files.
>  >
>  >Yes, the data collection was designed to be configured and controlled by
>  > the deployer, not the tenant. What benefits do we gain by giving that
>  > control to the tenant?
>  >
>  >ildikov: Sorry, my explanation was not clear. I meant there the
> configuration
>  > of data collection for projects, what was mentioned by Tim Bell in a
>  > previous email. This would mean that the project administrator is able
> to
>  > create a data collection configuration for his/her own project, which
> will
>  > not affect the other project’s configuration. The tenant would be able
> to
>  > specify meters (enabled/disable based on which ones are needed) for the
> given
>  > project also with project specific time intervals, etc.
>  >
>  >OK, I think some of the confusion is terminology.
>  >Who is a "project administrator"? Is that someone with access to change
>  > ceilometer's configuration file directly? Someone with a particular role
>  > using the API? Or something else?
>  >
>  >ildikov: As project administrator I meant a user with particular role,
>  > a user assigned to a tenant.
>  >
>  >
>  >OK, so like I said, we did not design the system with the idea that a
>  > user of the cloud (rather than the deployer of the cloud) would have
>  > any control over what data was collected. They can ask questions about
>  > only some of the data, but they can't tell ceilometer what to collect.
>  >There's a certain amount of danger in giving the cloud user
>  > (no matter their role) an "off switch" for the data collection.
>  >
>  > As Julien pointed out, it can have a negative effect on billing
>  > -- if they tell the cloud not to collect data about what instances
>  > are created, then the deployer can't bill for those instances.
>  > Differentiating between the values that always must be collected and
>  > the ones the user can control makes providing an API to manage data
>  > collection more complex.
>  >
>  >Is there some underlying use case behind all of this that someone could
>  > describe in more detail, so we might be able to find an alternative, or
>  > explain how to use the existing features to achieve the goal?
>  >
>  > For example, it is already possible to change the pipeline config file
>  > to control which data is collected and stored.
>  > If we make the pipeline code in ceilometer watch for changes to that
> file,
>  > and rebuild the pipelines when the config is updated,
>  > would that satisfy the requirements?
>  >
>
>  Yes. That's exactly the requirement for our blueprint: to avoid restarting
> ceilometer for changes to take effect when the config file changes.
> API support was added later based on a request in this mail thread; we
> don't actually need the APIs and they can be removed.
>
> So, as you mentioned above, whenever the config file is changed, ceilometer
> should update the meters accordingly.
>

OK, I think that's something reasonable to implement, although I would
have to look at the collector to make sure we could rebuild the pipelines
safely without losing any data as more messages come in. But it should be
possible, if not easy. :-)

The blueprint should be updated to reflect this approach.

Doug
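
For illustration, a minimal sketch of the reload idea discussed above. It
polls the pipeline file's mtime rather than using inotify, and
setup_pipelines_from_yaml() is only a placeholder for however the service
would actually rebuild its pipelines -- it is not ceilometer's real API:

    import os
    import time

    import yaml  # PyYAML, already used to parse pipeline.yaml

    PIPELINE_FILE = '/etc/ceilometer/pipeline.yaml'

    def setup_pipelines_from_yaml(pipeline_cfg):
        # placeholder: rebuild the pipeline objects from the parsed YAML
        print('rebuilt %d pipeline definitions' % len(pipeline_cfg))

    def watch_pipelines(poll_interval=10):
        last_mtime = None
        while True:
            mtime = os.stat(PIPELINE_FILE).st_mtime
            if mtime != last_mtime:
                with open(PIPELINE_FILE) as f:
                    setup_pipelines_from_yaml(yaml.safe_load(f))
                last_mtime = mtime
            time.sleep(poll_interval)

An inotify-based watcher would avoid the polling loop, but the concern Doug
raises -- swapping the pipelines in safely while messages keep arriving -- is
the same either way.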



>
>
>  >
>  >
>  >In my view, we could keep the dynamic meter configuration bp with
> considering
>  > to extend it to dynamic configuration of Ceilometer, not just the
> meters and
>  > we could have a separate bp for the project based configuration of
> meters.
>  >Ceilometer uses oslo.config, just like all of the r

Re: [openstack-dev] [Ceilometer] Dynamic Meters in Ceilometer

2014-01-08 Thread Doug Hellmann
On Wed, Jan 8, 2014 at 2:08 PM, Tim Bell  wrote:

>
>
> Thanks for the clarifications. Given the role descriptions as provided, I
> no longer think there is a need for an API call or per project meter
> enable/disable. Thus, the inotify approach would seem to be viable (and
> much simpler to implement since the state is clearly defined across daemon
> restarts)
>
Good, thanks, Tim.

Doug




>
>
> Tim
>
>
>
>
>
> *From:* Doug Hellmann [mailto:doug.hellm...@dreamhost.com]
> *Sent:* 08 January 2014 19:27
>
> *To:* OpenStack Development Mailing List (not for usage questions)
> *Subject:* Re: [openstack-dev] [Ceilometer] Dynamic Meters in Ceilometer
>
>
>
>
>
>
>
> On Wed, Jan 8, 2014 at 12:35 PM, Ildikó Váncsa 
> wrote:
>
>  Hi Doug,
>
>
>
> Answers inline again.
>
>
>
> Best Regards,
>
> Ildiko
>
>
>
> On Wed, Jan 8, 2014 at 3:16 AM, Ildikó Váncsa 
> wrote:
>
> Hi,
>
> I've started to work on the idea of supporting a kind of tenant/project
> based configuration for Ceilometer. Unfortunately I haven't reached the
> point of having a blueprint that could be registered until now. I do not
> have a deep knowledge about the collector and compute agent services, but
> this feature would require some deep changes for sure. Currently there are
> pipelines for data collection and transformation, where the counters can be
> specified, about which data should be collected and also the time interval
> for data collection and so on. These pipelines can be configured now
> globally in the pipeline.yaml file, which is stored right next to the
> Ceilometer configuration files.
>
>
>
> Yes, the data collection was designed to be configured and controlled by
> the deployer, not the tenant. What benefits do we gain by giving that
> control to the tenant?
>
>
>
> ildikov: Sorry, my explanation was not clear. I meant there the
> configuration of data collection for projects, what was mentioned by Tim
> Bell in a previous email. This would mean that the project administrator is
> able to create a data collection configuration for his/her own project,
> which will not affect the other project’s configuration. The tenant would
> be able to specify meters (enabled/disable based on which ones are needed)
> for the given project also with project specific time intervals, etc.
>
>
>
> OK, I think some of the confusion is terminology. Who is a "project
> administrator"? Is that someone with access to change ceilometer's
> configuration file directly? Someone with a particular role using the API?
> Or something else?
>
>
>
> ildikov: As project administrator I meant a user with particular role, a
> user assigned to a tenant.
>
>
>
> OK, so like I said, we did not design the system with the idea that a user
> of the cloud (rather than the deployer of the cloud) would have any control
> over what data was collected. They can ask questions about only some of the
> data, but they can't tell ceilometer what to collect.
>
>
>
> There's a certain amount of danger in giving the cloud user (no matter
> their role) an "off switch" for the data collection. As Julien pointed out,
> it can have a negative effect on billing -- if they tell the cloud not to
> collect data about what instances are created, then the deployer can't bill
> for those instances. Differentiating between the values that always must be
> collected and the ones the user can control makes providing an API to
> manage data collection more complex.
>
>
>
> Is there some underlying use case behind all of this that someone could
> describe in more detail, so we might be able to find an alternative, or
> explain how to use the existing features to achieve the goal? For example,
> it is already possible to change the pipeline config file to control which
> data is collected and stored. If we make the pipeline code in ceilometer
> watch for changes to that file, and rebuild the pipelines when the config
> is updated, would that satisfy the requirements?
>
>
>
>  In my view, we could keep the dynamic meter configuration bp with
> considering to extend it to dynamic configuration of Ceilometer, not just
> the meters and we could have a separate bp for the project based
> configuration of meters.
>
>
>
> Ceilometer uses oslo.config, just like all of the rest of OpenStack. How
> are the needs for dynamic configuration updates in ceilometer different
> from the other services?
>
>
>
> ildikov: There are some parameters in the configuration file of
> Ceilometer, like log options and notification types, which would be good to
> be able to configure them dynamically. I just wanted to reflect to that
> need. As I see, there are two options here. The first one is to identify
> the group of the dynamically modifiable parameters and move them to the API
> level. The other option could be to make some modifications in oslo.config
> too, so other services also could use the benefits of dynamic
> configuration. For example the log settings could be a good candidate, as
> for example the change of 

Re: [openstack-dev] [nova][infra] nova py27 unit test failures in libvirt

2014-01-08 Thread Jeremy Stanley
On 2014-01-07 07:17:58 -0500 (-0500), Sean Dague wrote:
> This looks like it's a 100% failure bug at this point. I expect that
> because of timing it's based on a change in the base image due to
> nodepool rebuilding.

Actually not... Nova's Python 2.7 unit tests don't run on
nodepool-managed workers, just static (manually built, long-running)
Ubuntu VMs.

But the bug has the details at this point. In short, lurking
misconfiguration triggered by an update to install libvirt-dev
caused latest libvirt from Ubuntu Cloud Archive to be installed and
Nova doesn't support newer libvirt versions.
-- 
Jeremy Stanley

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer] Dynamic Meters in Ceilometer

2014-01-08 Thread Kodam, Vijayakumar (EXT-Tata Consultancy Ser - FI/Espoo)
 >
 >
 >From: ext Doug Hellmann [doug.hellm...@dreamhost.com]
 >Sent: Wednesday, January 08, 2014 8:26 PM
 >To: OpenStack Development Mailing List (not for usage questions)
 >Subject: Re: [openstack-dev] [Ceilometer] Dynamic Meters in Ceilometer
 >
 >
 >On Wed, Jan 8, 2014 at 12:35 PM, Ildikó Váncsa 
 >mailto:ildiko.van...@ericsson.com>> wrote:
 >
 >Hi Doug,
 >
 >Answers inline again.
 >
 >Best Regards,
 >
 >Ildiko
 >
 >
 >On Wed, Jan 8, 2014 at 3:16 AM, Ildikó Váncsa 
 >mailto:ildiko.van...@ericsson.com>> wrote:
 >
 >Hi,
 >
 >I've started to work on the idea of supporting a kind of tenant/project
 > based configuration for Ceilometer. Unfortunately I haven't reached
 > the point of having a blueprint that could be registered until now.
 > I do not have a deep knowledge about the collector and compute agent
 > services, but this feature would require some deep changes for sure.
 > Currently there are pipelines for data collection and transformation,
 > where the counters can be specified, about which data should be
 > collected and also the time interval for data collection and so on.
 > These pipelines can be configured now globally in the pipeline.yaml file,
 > which is stored right next to the Ceilometer configuration files.
 >
 >Yes, the data collection was designed to be configured and controlled by
 > the deployer, not the tenant. What benefits do we gain by giving that
 > control to the tenant?
 >
 >ildikov: Sorry, my explanation was not clear. I meant there the configuration
 > of data collection for projects, what was mentioned by Tim Bell in a
 > previous email. This would mean that the project administrator is able to
 > create a data collection configuration for his/her own project, which will
 > not affect the other project’s configuration. The tenant would be able to
 > specify meters (enabled/disable based on which ones are needed) for the given
 > project also with project specific time intervals, etc.
 >
 >OK, I think some of the confusion is terminology.
 >Who is a "project administrator"? Is that someone with access to change
 > ceilometer's configuration file directly? Someone with a particular role
 > using the API? Or something else?
 >
 >ildikov: As project administrator I meant a user with particular role,
 > a user assigned to a tenant.
 >
 >
 >OK, so like I said, we did not design the system with the idea that a
 > user of the cloud (rather than the deployer of the cloud) would have
 > any control over what data was collected. They can ask questions about
 > only some of the data, but they can't tell ceilometer what to collect.
 >There's a certain amount of danger in giving the cloud user
 > (no matter their role) an "off switch" for the data collection.
 >
 > As Julien pointed out, it can have a negative effect on billing
 > -- if they tell the cloud not to collect data about what instances
 > are created, then the deployer can't bill for those instances.
 > Differentiating between the values that always must be collected and
 > the ones the user can control makes providing an API to manage data
 > collection more complex.
 >
 >Is there some underlying use case behind all of this that someone could
 > describe in more detail, so we might be able to find an alternative, or
 > explain how to use the existing features to achieve the goal?
 >
 > For example, it is already possible to change the pipeline config file
 > to control which data is collected and stored.
 > If we make the pipeline code in ceilometer watch for changes to that file,
 > and rebuild the pipelines when the config is updated,
 > would that satisfy the requirements?
 >

Yes. That's exactly the requirement for our blueprint: to avoid restarting 
ceilometer for changes to take effect when the config file changes.
API support was added later based on a request in this mail thread; we 
actually don't need the APIs and they can be removed.

So, as you mentioned above, whenever the config file is changed, ceilometer 
should update the meters accordingly.



 >
 >
 >In my view, we could keep the dynamic meter configuration bp with considering
 > to extend it to dynamic configuration of Ceilometer, not just the meters and
 > we could have a separate bp for the project based configuration of meters.
 >Ceilometer uses oslo.config, just like all of the rest of OpenStack. How are
 > the needs for dynamic configuration updates in ceilometer different from
 > the other services?
 >
 >
 >ildikov: There are some parameters in the configuration file of Ceilometer,
 > like log options and notification types, which would be good to be able to
 > configure them dynamically. I just wanted to reflect to that need. As I see,
 > there are two options here. The first one is to identify the group of the
 > dynamically modifiable parameters and move them to the API level. The other
 > option could be to make some modifications in oslo.config too, so other
 > services also 

Re: [openstack-dev] [nova] Change I005e752c: Whitelist external netaddr requirement, for bug 1266513, ineffective for me

2014-01-08 Thread Jeremy Stanley
Note that, per the most recent updates in the bug, netaddr has
started uploading their releases to PyPI again so we should
hopefully be able to revert any workarounds we added for it. This
unfortunately does not hold true for other requirements of some
projects (netifaces in swift, lazr.restful in reviewday and
elastic-recheck, et cetera), so we need to keep plugging the hole
with workarounds there in the meantime.
-- 
Jeremy Stanley

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Solum][Pecan][Security] Pecan SecureController vs. Nova policy

2014-01-08 Thread Georgy Okrokvertskhov
Hi,

Keeping policy control in one place is a good idea. We can use the standard
policy approach and keep the access control configuration in a JSON file, as
is done in Nova and other projects.
Keystone uses a wrapper function for its methods. Here is the wrapper code:
https://github.com/openstack/keystone/blob/master/keystone/common/controller.py#L111.
Each controller method has a @protected() wrapper, so the method information is
available through Python's f.__name__ instead of URL parsing. It means that
some RBAC parts are still scattered throughout the code.
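
For illustration, a minimal sketch of the wrapper approach just described, in
the spirit of the keystone decorator linked above; enforce() and the
"solum:..." rule names are stand-ins, not existing code:

    import functools

    def enforce(rule, context):
        # stand-in for the real policy check (e.g. the oslo policy engine)
        print('checking rule %r for user %s' % (rule, context.get('user')))

    def protected():
        def decorator(f):
            @functools.wraps(f)
            def wrapper(self, context, *args, **kwargs):
                # the policy key is derived from the method name via f.__name__
                enforce('solum:%s' % f.__name__, context)
                return f(self, context, *args, **kwargs)
            return wrapper
        return decorator

    class AssembliesController(object):
        @protected()
        def create(self, context, data):
            return 'created'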

If we want to avoid RBAC being scattered throughout the code, we can use the
URL parsing approach and keep all the logic inside a hook. In a Pecan hook the
WSGI environment is already created and there is full access to the request
parameters and content. We can map the URL to a policy key.

So we have two options:
1. Add a wrapper to each API method, as all the other projects did.
2. Add a hook with URL parsing which maps the path to a policy key.
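
And a correspondingly minimal sketch of the second option: a Pecan hook that
derives a policy key from the request path and method before the controller
runs. The path-to-rule mapping and enforce() are again placeholders:

    from pecan import hooks

    def enforce(rule, headers):
        # stand-in for the real policy check, using keystone auth headers
        print('checking rule %r for user %s' % (rule, headers.get('X-User-Id')))

    class PolicyHook(hooks.PecanHook):
        priority = 100  # run after the auth hook has populated the headers

        def before(self, state):
            # e.g. POST /v1/assemblies -> "solum:assemblies:create"
            resource = state.request.path.strip('/').split('/')[-1] or 'root'
            action = {'GET': 'get', 'POST': 'create', 'PUT': 'update',
                      'DELETE': 'delete'}.get(state.request.method, 'get')
            enforce('solum:%s:%s' % (resource, action), state.request.headers)

The hook would then be passed to pecan.make_app(..., hooks=[PolicyHook()])
alongside the existing auth hook.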


Thanks
Georgy



On Wed, Jan 8, 2014 at 9:05 AM, Kurt Griffiths  wrote:

>  Yeah, that could work. The main thing is to try and keep policy control
> in one place if you can rather than sprinkling it all over the place.
>
>   From: Georgy Okrokvertskhov 
> Reply-To: OpenStack Dev 
> Date: Wednesday, January 8, 2014 at 10:41 AM
>
> To: OpenStack Dev 
> Subject: Re: [openstack-dev] [Solum][Pecan][Security] Pecan
> SecureController vs. Nova policy
>
>   Hi Kurt,
>
>  As for WSGI middleware I think about Pecan hooks which can be added
> before actual controller call. Here is an example how we added a hook for
> keystone information collection:
> https://review.openstack.org/#/c/64458/4/solum/api/auth.py
>
>  What do you think, will this approach with Pecan hooks work?
>
>  Thanks
> Georgy
>
>
> On Tue, Jan 7, 2014 at 2:25 PM, Kurt Griffiths <
> kurt.griffi...@rackspace.com> wrote:
>
>>  You might also consider doing this in WSGI middleware:
>>
>>  Pros:
>>
>>- Consolidates policy code in once place, making it easier to audit
>>and maintain
>>- Simple to turn policy on/off – just don’t insert the middleware
>>when off!
>>- Does not preclude the use of oslo.policy for rule checking
>>- Blocks unauthorized requests before they have a chance to touch the
>>web framework or app. This reduces your attack surface and can improve
>>performance   (since the web framework has yet to parse the request).
>>
>> Cons:
>>
>>- Doesn't work for policies that require knowledge that isn’t
>>available this early in the pipeline (without having to duplicate a lot of
>>code)
>>- You have to parse the WSGI environ dict yourself (this may not be a
>>big deal, depending on how much knowledge you need to glean in order to
>>enforce the policy).
>>- You have to keep your HTTP path matching in sync with with your
>>route definitions in the code. If you have full test coverage, you will
>>know when you get out of sync. That being said, API routes tend to be 
>> quite
>>stable in relation to to other parts of the code implementation once you
>>have settled on your API spec.
>>
>> I’m sure there are other pros and cons I missed, but you can make your
>> own best judgement whether this option makes sense in Solum’s case.
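
For illustration, a bare-bones sketch of the middleware approach outlined in
the quoted message above; the rule table and the X-Roles header handling are
placeholders, not an existing Solum interface:

    class PolicyMiddleware(object):
        # (method, path) -> roles allowed; purely illustrative
        RULES = {('POST', '/v1/assemblies'): {'admin', 'developer'}}

        def __init__(self, app):
            self.app = app

        def __call__(self, environ, start_response):
            key = (environ['REQUEST_METHOD'], environ.get('PATH_INFO', '/'))
            allowed = self.RULES.get(key)
            roles = set(environ.get('HTTP_X_ROLES', '').split(','))
            if allowed is not None and not (allowed & roles):
                start_response('403 Forbidden',
                               [('Content-Type', 'text/plain')])
                return ['policy does not allow this request']
            return self.app(environ, start_response)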
>>
>>   From: Doug Hellmann 
>> Reply-To: OpenStack Dev 
>> Date: Tuesday, January 7, 2014 at 6:54 AM
>> To: OpenStack Dev 
>> Subject: Re: [openstack-dev] [Solum][Pecan][Security] Pecan
>> SecureController vs. Nova policy
>>
>>
>>
>>
>> On Mon, Jan 6, 2014 at 6:26 PM, Georgy Okrokvertskhov <
>> gokrokvertsk...@mirantis.com> wrote:
>>
>>> Hi Dough,
>>>
>>>  Thank you for pointing to this code. As I see you use OpenStack policy
>>> framework but not Pecan security features. How do you implement fine grain
>>> access control like user allowed to read only, writers and admins. Can you
>>> block part of API methods for specific user like access to create methods
>>> for specific user role?
>>>
>>
>>  The policy enforcement isn't simple on/off switching in ceilometer, so
>> we're using the policy framework calls in a couple of places within our API
>> code (look through v2.py for examples). As a result, we didn't need to
>> build much on top of the existing policy module to interface with pecan.
>>
>>  For your needs, it shouldn't be difficult to create a couple of
>> decorators to combine with pecan's hook framework to enforce the policy,
>> which might be less complex than trying to match the operating model of the
>> policy system to pecan's security framework.
>>
>>  This is the sort of thing that should probably go through Oslo and be
>> shared, so please consider contributing to the incubator when you have
>> something working.
>>
>>  Doug
>>
>>
>>
>>>
>>>  Thanks
>>> Georgy
>>>
>>>
>>> On Mon, Jan 6, 2014 at 2:45 PM, Doug Hellmann <
>>> doug.hellm...@dreamhost.com> wrote:
>>>



  On Mon, Jan 6, 2014 at 2:56 PM, Georgy Okrokvertskhov <
 gokrokvertsk...@mirantis.com> wrote:

>  Hi,
>
>  In Solu

Re: [openstack-dev] [oslo] Common SSH

2014-01-08 Thread Sergey Skripnick






On Wed, Jan 8, 2014 at 10:43 AM, Eric Windisch wrote:

About spur: spur looks ok, but it is a bit complicated inside (it uses
separate threads for non-blocking stdin/stderr reading [1]) and I don't
know how it would work with eventlet.

That does sound like it might cause issues. What would we need to do to
test it?

Looking at the code, I don't expect it to be an issue. The monkey-patching
will cause eventlet.spawn to be called for threading.Thread. The code looks
eventlet-friendly enough on the surface. Error handling around file
read/write could be affected, but it also looks fine.

Thanks for that analysis Eric.

Is there any reason for us to prefer one approach over the other, then?

Doug

So, there is only one reason left -- the oslo lib is simpler and more
lightweight (it does not use threads). Anyway, this class is used by
stackforge/rally and may be used by other projects instead of the buggy
oslo.processutils.ssh.
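
A quick way to see the behaviour Eric describes (assuming eventlet is
installed): after monkey-patching, threading.Thread is replaced by eventlet's
green thread class, so spur's background reader threads would become
greenlets rather than OS threads:

    import eventlet
    eventlet.monkey_patch()  # patches thread/threading (among other modules)

    import threading

    def reader():
        print('running in %s' % threading.current_thread())

    t = threading.Thread(target=reader)  # actually a green thread now
    t.start()
    t.join()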



--
Regards,
Sergey Skripnick

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Multiple config files for neutron server

2014-01-08 Thread Jay Pipes
On Wed, 2014-01-08 at 07:21 -0500, Sean Dague wrote:
> On 01/06/2014 02:58 PM, Jay Pipes wrote:
> > On Mon, 2014-01-06 at 23:45 +0400, Eugene Nikanorov wrote:
> >> Hi folks,
> >>
> >>
> >> Recently we had a discussion with Sean Dague on the matter.
> >> Currently Neutron server has a number of configuration files used for
> >> different purposes:
> >>  - neutron.conf - main configuration parameters, plugins, db and mq
> >> connections
> >>  - plugin.ini - plugin-specific networking settings
> >>  - conf files for ml2 mechanisms drivers (AFAIK to be able to use
> >> several mechanism drivers we need to pass all of these conf files to
> >> neutron server)
> >>  - services.conf - recently introduced conf-file to gather
> >> vendor-specific parameters for advanced services drivers.
> >> Particularly, services.conf was introduced to avoid polluting
> >> 'generic' neutron.conf with vendor parameters and sections.
> >>
> >>
> >> The discussion with Sean was about whether to add services.conf to
> >> neutron-server launching command in devstack
> >> (https://review.openstack.org/#/c/64377/ ). services.conf would be 3rd
> >> config file that is passed to neutron-server along with neutron.conf
> >> and plugin.ini.
> >>
> >>
> >> Sean has an argument that providing many conf files in a command line
> >> is not a good practice, suggesting setting up configuration directory
> >> instead. There is no such capability in neutron right now so I'd like
> >> to hear opinions on this before putting more efforts in resolving this
> >> in with other approach than used in the patch on review.
> > 
> > I'd say just put the additional conf file on the command line for now.
> > Adding in support to oslo.cfg for a config directory can come later.
> > 
> > Just my 2 cents,
> 
> So the net of that is that in a production environment, in order to
> change some services, you'd be expected to change the init scripts to
> list the right config files.

Good point.

> That seems *really* weird, and also really different from the rest of
> OpenStack services. It also means you can't use the oslo config
> generator to generate documented samples.
> 
> If neutron had been running a grenade job, it would have blocked this
> attempted change, because it would require adding config files between
> releases.
> 
> So this all smells pretty bad to me. Especially in the context of
> migration paths from nova (which handles this very differently) => neutron.

So, I was under the impression that the Neutron changes to require a
services.conf had *already* been merged into master, and therefore the
problem domain here was not whether the services.conf addition was the
right approach, but rather *how to deal with it in devstack*, and that's
why I wrote to just add it to the command line in the devstack builder.

A better (upstream in Neutron) solution would have been to use something
> like an include.d/ directive in the neutron.conf. But I thought that we
were past the implementation point in Neutron?
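
For reference, the invocation being debated looks roughly like this today
(paths are the usual defaults and vary per deployment). The second form shows
the config-directory style Sean is suggesting; the --config-dir flag and
conf.d layout are shown only to illustrate the idea, since whether and how
oslo.config exposes this is exactly what is being discussed:

    # current style: every file listed explicitly on the command line
    neutron-server --config-file /etc/neutron/neutron.conf \
                   --config-file /etc/neutron/plugin.ini \
                   --config-file /etc/neutron/services.conf

    # directory style: drop additional .conf files into a directory instead
    neutron-server --config-file /etc/neutron/neutron.conf \
                   --config-dir /etc/neutron/conf.d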

Best,
-jay


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] minimum review period for functional changes that break backwards compatibility

2014-01-08 Thread Jay Pipes
On Wed, 2014-01-08 at 14:26 +0100, Thierry Carrez wrote:
> Tim Bell wrote:
> > +1 from me too UpgradeImpact is a much better term.
> 
> So this one is already documented[1], but I don't know if it actually
> triggers anything yet.
> 
> Should we configure it to post to openstack-operators, the same way as
> SecurityImpact posts to openstack-security ?

Huge +1 from me here.

-jay


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer] Dynamic Meters in Ceilometer

2014-01-08 Thread Tim Bell


Thanks for the clarifications. Given the role descriptions as provided, I no 
longer think there is a need for an API call or per project meter 
enable/disable. Thus, the inotify approach would seem to be viable (and much 
simpler to implement since the state is clearly defined across daemon restarts)



Tim


From: Doug Hellmann [mailto:doug.hellm...@dreamhost.com]
Sent: 08 January 2014 19:27
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Ceilometer] Dynamic Meters in Ceilometer



On Wed, Jan 8, 2014 at 12:35 PM, Ildikó Váncsa 
mailto:ildiko.van...@ericsson.com>> wrote:
Hi Doug,

Answers inline again.

Best Regards,
Ildiko

On Wed, Jan 8, 2014 at 3:16 AM, Ildikó Váncsa 
mailto:ildiko.van...@ericsson.com>> wrote:
Hi,

I've started to work on the idea of supporting a kind of tenant/project based 
configuration for Ceilometer. Unfortunately I haven't reached the point of 
having a blueprint that could be registered until now. I do not have a deep 
knowledge about the collector and compute agent services, but this feature 
would require some deep changes for sure. Currently there are pipelines for 
data collection and transformation, where the counters can be specified, about 
which data should be collected and also the time interval for data collection 
and so on. These pipelines can be configured now globally in the pipeline.yaml 
file, which is stored right next to the Ceilometer configuration files.

Yes, the data collection was designed to be configured and controlled by the 
deployer, not the tenant. What benefits do we gain by giving that control to 
the tenant?

ildikov: Sorry, my explanation was not clear. I meant there the configuration 
of data collection for projects, what was mentioned by Tim Bell in a previous 
email. This would mean that the project administrator is able to create a data 
collection configuration for his/her own project, which will not affect the 
other project's configuration. The tenant would be able to specify meters 
(enabled/disable based on which ones are needed) for the given project also 
with project specific time intervals, etc.

OK, I think some of the confusion is terminology. Who is a "project 
administrator"? Is that someone with access to change ceilometer's 
configuration file directly? Someone with a particular role using the API? Or 
something else?

ildikov: As project administrator I meant a user with particular role, a user 
assigned to a tenant.

OK, so like I said, we did not design the system with the idea that a user of 
the cloud (rather than the deployer of the cloud) would have any control over 
what data was collected. They can ask questions about only some of the data, 
but they can't tell ceilometer what to collect.

There's a certain amount of danger in giving the cloud user (no matter their 
role) an "off switch" for the data collection. As Julien pointed out, it can 
have a negative effect on billing -- if they tell the cloud not to collect data 
about what instances are created, then the deployer can't bill for those 
instances. Differentiating between the values that always must be collected and 
the ones the user can control makes providing an API to manage data collection 
more complex.

Is there some underlying use case behind all of this that someone could 
describe in more detail, so we might be able to find an alternative, or explain 
how to use the existing features to achieve the goal? For example, it is 
already possible to change the pipeline config file to control which data is 
collected and stored. If we make the pipeline code in ceilometer watch for 
changes to that file, and rebuild the pipelines when the config is updated, 
would that satisfy the requirements?

In my view, we could keep the dynamic meter configuration bp with considering 
to extend it to dynamic configuration of Ceilometer, not just the meters and we 
could have a separate bp for the project based configuration of meters.

Ceilometer uses oslo.config, just like all of the rest of OpenStack. How are 
the needs for dynamic configuration updates in ceilometer different from the 
other services?

ildikov: There are some parameters in the configuration file of Ceilometer, 
like log options and notification types, which would be good to be able to 
configure them dynamically. I just wanted to reflect to that need. As I see, 
there are two options here. The first one is to identify the group of the 
dynamically modifiable parameters and move them to the API level. The other 
option could be to make some modifications in oslo.config too, so other 
services also could use the benefits of dynamic configuration. For example the 
log settings could be a good candidate, as for example the change of log 
levels, without service restart, in case debugging the system can be a useful 
feature for all of the OpenStack services.

I "misspoke" earlier. If we're talking about meters, those are actually defined 
by the pipeline fil

Re: [openstack-dev] [nova] new (docs) requirement for third party CI

2014-01-08 Thread Joe Gordon
On Jan 8, 2014 7:12 AM, "Matt Riedemann"  wrote:
>
> I'd like to propose that we add another item to the list here [1] that is
basically related to what happens when the 3rd party CI job votes a -1 on
your patch.  This would include:
>
> 1. Documentation on how to analyze the results and a good overview of
what the job does (like the docs we have for check/gate testing now).
> 2. How to recheck the specific job if needed, i.e. 'recheck migrations'.
> 3. Who to contact if you can't figure out what's going on with the job.
>
> Ideally this information would be in the comments when the job scores a
-1 on your patch, or at least it would leave a comment with a link to a
wiki for that job like we have with Jenkins today.
>
> I'm all for more test coverage but we need some solid documentation
around that when it's not owned by the community so we know what to do with
the results if they seem like false negatives.
>
> If no one is against this or has something to add, I'll update the wiki.

-1 to putting this in the wiki. This isn't a nova only issue. We are trying
to collect the requirements here:

https://review.openstack.org/#/c/63478/

>
> [1]
https://wiki.openstack.org/wiki/HypervisorSupportMatrix/DeprecationPlan#Specific_Requirements
>
> --
>
> Thanks,
>
> Matt Riedemann
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Bogus -1 scores from turbo hipster

2014-01-08 Thread Samuel Merritt

On 1/7/14 2:53 PM, Michael Still wrote:

Hi. Thanks for reaching out about this.

It seems this patch has now passed turbo hipster, so I am going to
treat this as a more theoretical question than perhaps you intended. I
should note though that Joshua Hesketh and I have been trying to read
/ triage every turbo hipster failure, but that has been hard this week
because we're both at a conference.

The problem this patch faced is that we are having trouble defining
what is a reasonable amount of time for a database migration to run
for. Specifically:

2014-01-07 14:59:32,012 [output] 205 -> 206...
2014-01-07 14:59:32,848 [heartbeat]
2014-01-07 15:00:02,848 [heartbeat]
2014-01-07 15:00:32,849 [heartbeat]
2014-01-07 15:00:39,197 [output] done

So applying migration 206 took slightly over a minute (67 seconds).
Our historical data (mean + 2 standard deviations) says that this
migration should take no more than 63 seconds. So this only just
failed the test.


It seems to me that requiring a runtime less than (mean + 2 stddev) 
leads to a false-positive rate of 1 in 40, right? If the runtimes have a 
normal(-ish) distribution, then 95% of them will be within 2 standard 
deviations of the mean, so that's 1 in 20 falling outside that range. 
Then discard the ones that are faster than (mean - 2 stddev), and that 
leaves 1 in 40. Please correct me if I'm wrong; I'm no statistician.
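
For a rough check of that estimate (the runtimes below are made up, and the
statistics module from Python 3.4 is used only for brevity):

    import statistics
    from math import erf, sqrt

    historical = [52, 55, 58, 60, 61, 63, 57, 59, 62, 54]  # seconds, illustrative

    mean = statistics.mean(historical)
    stddev = statistics.pstdev(historical)
    print('threshold = %.1f s' % (mean + 2 * stddev))

    # one-sided tail beyond 2 standard deviations for a normal distribution
    p = 0.5 * (1 - erf(2 / sqrt(2)))
    print('expected false-positive rate: %.4f (about 1 in %d)' % (p, round(1 / p)))

The exact normal tail gives about 1 in 44; the 1-in-40 figure comes from the
rougher 95% rule, but the conclusion is the same.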


Such a high false-positive rate may make it too easy to ignore turbo hipster 
as the bot that cried wolf. This problem already exists with Jenkins and 
the devstack/tempest tests; when one of those fails, I don't wonder what 
I broke, but rather how many times I'll have to recheck the patch until 
the tests pass.


Unfortunately, I don't have a solution to offer, but perhaps someone 
else will.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer] Dynamic Meters in Ceilometer

2014-01-08 Thread Doug Hellmann
On Wed, Jan 8, 2014 at 12:35 PM, Ildikó Váncsa
wrote:

>  Hi Doug,
>
>
>
> Answers inline again.
>
>
>
> Best Regards,
>
> Ildiko
>
>
>
> On Wed, Jan 8, 2014 at 3:16 AM, Ildikó Váncsa 
> wrote:
>
> Hi,
>
> I've started to work on the idea of supporting a kind of tenant/project
> based configuration for Ceilometer. Unfortunately I haven't reached the
> point of having a blueprint that could be registered until now. I do not
> have a deep knowledge about the collector and compute agent services, but
> this feature would require some deep changes for sure. Currently there are
> pipelines for data collection and transformation, where the counters can be
> specified, about which data should be collected and also the time interval
> for data collection and so on. These pipelines can be configured now
> globally in the pipeline.yaml file, which is stored right next to the
> Ceilometer configuration files.
>
>
>
> Yes, the data collection was designed to be configured and controlled by
> the deployer, not the tenant. What benefits do we gain by giving that
> control to the tenant?
>
>
>
> ildikov: Sorry, my explanation was not clear. I meant there the
> configuration of data collection for projects, what was mentioned by Tim
> Bell in a previous email. This would mean that the project administrator is
> able to create a data collection configuration for his/her own project,
> which will not affect the other project’s configuration. The tenant would
> be able to specify meters (enabled/disable based on which ones are needed)
> for the given project also with project specific time intervals, etc.
>
>
>
> OK, I think some of the confusion is terminology. Who is a "project
> administrator"? Is that someone with access to change ceilometer's
> configuration file directly? Someone with a particular role using the API?
> Or something else?
>
>
>
> ildikov: As project administrator I meant a user with particular role, a
> user assigned to a tenant.
>

OK, so like I said, we did not design the system with the idea that a user
of the cloud (rather than the deployer of the cloud) would have any control
over what data was collected. They can ask questions about only some of the
data, but they can't tell ceilometer what to collect.

There's a certain amount of danger in giving the cloud user (no matter
their role) an "off switch" for the data collection. As Julien pointed out,
it can have a negative effect on billing -- if they tell the cloud not to
collect data about what instances are created, then the deployer can't
bill for those instances. Differentiating between the values that always
must be collected and the ones the user can control makes providing an API
to manage data collection more complex.

Is there some underlying use case behind all of this that someone could
describe in more detail, so we might be able to find an alternative, or
explain how to use the existing features to achieve the goal? For example,
it is already possible to change the pipeline config file to control which
data is collected and stored. If we make the pipeline code in ceilometer
watch for changes to that file, and rebuild the pipelines when the config
is updated, would that satisfy the requirements?

In my view, we could keep the dynamic meter configuration bp with
> considering to extend it to dynamic configuration of Ceilometer, not just
> the meters and we could have a separate bp for the project based
> configuration of meters.
>
>
>
> Ceilometer uses oslo.config, just like all of the rest of OpenStack. How
> are the needs for dynamic configuration updates in ceilometer different
> from the other services?
>
>
>
> ildikov: There are some parameters in the configuration file of
> Ceilometer, like log options and notification types, which would be good to
> be able to configure them dynamically. I just wanted to reflect to that
> need. As I see, there are two options here. The first one is to identify
> the group of the dynamically modifiable parameters and move them to the API
> level. The other option could be to make some modifications in oslo.config
> too, so other services also could use the benefits of dynamic
> configuration. For example the log settings could be a good candidate, as
> for example the change of log levels, without service restart, in case
> debugging the system can be a useful feature for all of the OpenStack
> services.
>
>
>
> I "misspoke" earlier. If we're talking about meters, those are actually
> defined by the pipeline file (not oslo.config). So if we do want that file
> re-read automatically, we can implement that within ceilometer itself,
> though I'm still reluctant to say we want to provide API access for
> modifying those settings. That's *really* not something we've designed the
> rest of the system to accommodate, so I don't know what side-effects we
> might introduce.
>
>
>
> ildikov: In case of oslo.config, I meant the ceilometer.conf file in my
> answer above, not pipeline.yaml. As for the API part, 

Re: [openstack-dev] olso.config error on running Devstack

2014-01-08 Thread Doug Hellmann
On Wed, Jan 8, 2014 at 12:37 PM, Ben Nemec  wrote:

> On 2014-01-08 11:16, Sean Dague wrote:
>
>> On 01/08/2014 12:06 PM, Doug Hellmann wrote:
>> 
>>
>>> Yeah, that's what made me start thinking oslo.sphinx should be called
>>> something else.
>>>
>>> Sean, how strongly do you feel about not installing oslo.sphinx in
>>> devstack? I see your point, I'm just looking for alternatives to the
>>> hassle of renaming oslo.sphinx.
>>>
>>
>> Doing the git thing is definitely not the right thing. But I guess I got
>> lost somewhere along the way about what the actual problem is. Can
>> someone write that up concisely? With all the things that have been
>> tried/failed, why certain things fail, etc.
>>
>
> The problem seems to be when we pip install -e oslo.config on the system,
> then pip install oslo.sphinx in a venv.  oslo.config is unavailable in the
> venv, apparently because the namespace package for o.s causes the egg-link
> for o.c to be ignored.  Pretty much every other combination I've tried
> (regular pip install of both, or pip install -e of both, regardless of
> where they are) works fine, but there seem to be other issues with all of
> the other options we've explored so far.
>
> We can't remove the pip install -e of oslo.config because it has to be
> used for gating, and we can't pip install -e oslo.sphinx because it's not a
> runtime dep so it doesn't belong in the gate.  Changing the toplevel
> package for oslo.sphinx was also mentioned, but has obvious drawbacks too.
>
> I think that about covers what I know so far.


Here's a link dstufft provided to the pip bug tracking this problem:
https://github.com/pypa/pip/issues/3

Doug



>
>
> -Ben
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer] Dynamic Meters in Ceilometer

2014-01-08 Thread Ildikó Váncsa
Hi,

> > My idea was just about providing the possibility to configure the data 
> > collection in Ceilometer differently for the different tenants, I 
> > didn't mean to link it to an API or at least not on the first place. 
> > It could be done by the operator as well, for instance, if the polling 
> > frequency should be different in case of tenants.
>
> Yeah, that would work, we would just need to add a list of project to the 
> yaml file. We are already doing that for resources anyway, we can do it for 
> user and project as well.

Ok, that sounds good. Then I will create a blueprint based on this direction.

Best Regards,
Ildiko

> --
> Julien Danjou
> ;; Free Software hacker ; independent consultant ;; http://julien.danjou.info
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Proposal for instance-level snapshots in Nova

2014-01-08 Thread Mark Washenberger
On Mon, Jan 6, 2014 at 3:50 PM, Jon Bernard  wrote:

> Hello all,
>
> I would like to propose instance-level snapshots as a feature for
> inclusion in Nova.  An initial draft of the more official proposal is
> here [1], blueprint is here [2].
>
> In a nutshell, this feature will take the existing create-image
> functionality a few steps further by providing the ability to take
> a snapshot of a running instance that includes all of its attached
> volumes.  A coordinated snapshot of multiple volumes for backup
> purposes.  The snapshot operation should occur while the instance is in
> a paused and quiesced state so that each volume snapshot is both
> consistent within itself and with respect to its sibling snapshots.
>
> I still have some open questions on a few topics:
>
> * API changes, two different approaches come to mind:
>
>   1. Nova already has a command `createImage` for creating an image of an
>  existing instance.  This command could be extended to take an
>  additional parameter `all-volumes` that signals the underlying code
>  to capture all attached volumes in addition to the root volume.  The
>  semantic here is important, `createImage` is used to create
>  a template image stored in Glance for later reuse.  If the primary
>  intent of this new feature is for backup only, then it may not be
>  wise to overlap the two operations in this way.  On the other hand,
>  this approach would introduce the least amount of change to the
>  existing API, requiring only modification of an existing command
>  instead of the addition of an entirely new one.
>
>   2. If the feature's primary use is for backup purposes, then a new API
>  call may be a better approach, and leave `createImage` untouched.
>  This new call could be called `createBackup` and take as a parameter
>  the name of the instance.  Although it introduces a new member to the
>  API reference, it would allow this feature to evolve without
>  introducing regressions in any existing calls.  These two calls could
>  share code at some point in the future.
>
> * Existing libvirt support:
>
> To initially support consistent-across-multiple-volumes snapshots,
> we must be able to ask libvirt for a snapshot of an already paused
> guest.  I don't believe such a call is currently supported, so
> changes to libvirt may be a prerequisite for this feature.
>
> Any contribution, comments, and pieces of advice are much appreciated.
>
> [1]: https://wiki.openstack.org/wiki/Nova/InstanceLevelSnapshots
> [2]: https://blueprints.launchpad.net/nova/+spec/instance-level-snapshots


Hi Jon,

In your specification in the Snapshot Storage section you say "it might be
nice to combine all of the snapshot images into a single OVF file that
contains all volumes attached to the instance at the time of snapshot." I'd
love it if, by the time you get to the point of implementing this storage
part, we have an option available to you in Glance for storing something
akin to an Instance template. An instance template would be an entity
stored in Glance with references to each volume or image that was uploaded
as part of the snapshot. As an example, it could be something like

"instance_template": {
   "/dev/sda": "/v2/images/some-imageid",
   "/dev/sdb": ""
}

Essentially, this kind of storage would bring the OVF metadata up into
Glance rather than burying it down in an image byte stream where it is
harder to search or access.

This is an idea that has been discussed several times before, generally
favorably, and if we move ahead with instance-level snapshots in Nova I'd
love to move quickly to support it in Glance. Part of the reason for the
delay of this feature was my worry that if Glance jumps out ahead, we'll
end up with some instance template format that Nova doesn't really want, so
this opportunity for collaboration on use cases would be fantastic.

If after a bit more discussion in this thread, folks think these templates
in Glance would be a good idea, we can try to draw up a proposal for how to
implement the first cut of this feature in Glance.

Thanks


>
>
> --
> Jon
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Solum] Devstack gate is failing

2014-01-08 Thread Noorul Islam Kamal Malmiyoda
On Wed, Jan 8, 2014 at 11:02 PM, Sean Dague  wrote:
> On 01/08/2014 11:40 AM, Noorul Islam Kamal Malmiyoda wrote:
>>
>> On Jan 8, 2014 9:58 PM, "Georgy Okrokvertskhov"
>> mailto:gokrokvertsk...@mirantis.com>> wrote:
>>>
>>> Hi,
>>>
>>> I do understand why there is a push back for this patch. This patch is
>> for infrastructure project which works for multiple projects. Infra
>> maintainers should not know specifics of each project in details. If
>> this patch is a temporary solution then who will be responsible to
>> remove it?
>>>
>>
>> I am not sure who is responsible for solum related configurations in
>> infra project. I see that almost all the infra config for solum project
>> is done by solum members. So I think any solum member can submit a patch
>> to revert this once we have a permanent solution.
>>
>>> If we need start this gate I propose to revert all patches which led
>> to this inconsistent state and apply workaround in Solum repository
>> which is under Solum team full control and review. We need to open a bug
>> in Solum project to track this.
>>>
>>
>> The problematic patch [1] solves a specific problem. Do we have other
>> ways to solve it?
>>
>> Regards,
>> Noorul
>>
>> [1] https://review.openstack.org/#/c/64226
>
> Why is test-requirements.txt getting installed in pre_test instead of
> post_test? Installing test-requirements prior to installing devstack
> itself in no way surprises me that it causes issues. You can see that
> command is litterally the first thing in the console -
> http://logs.openstack.org/66/62466/7/gate/gate-solum-devstack-dsvm/49bac35/console.html#_2014-01-08_13_46_15_161
>
> It should be installed right before tests get run, which I assume is L34
> of this file -
> https://review.openstack.org/#/c/64226/3/modules/openstack_project/files/jenkins_job_builder/config/solum.yaml
>
> Given that is where ./run_tests.sh is run.
>

This might help, but run_tests.sh anyhow will import oslo.config. I
need to test this and see.

Regards,
Noorul

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer] Dynamic Meters in Ceilometer

2014-01-08 Thread Julien Danjou
On Wed, Jan 08 2014, Ildikó Váncsa wrote:

> My idea was just about providing the possibility to configure the data
> collection in Ceilometer differently for the different tenants, I didn't
> mean to link it to an API or at least not on the first place. It could be
> done by the operator as well, for instance, if the polling frequency should
> be different in case of tenants.

Yeah, that would work, we would just need to add a list of project to
the yaml file. We are already doing that for resources anyway, we can do
it for user and project as well.
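
A rough sketch of what such an entry might look like; the "projects" key is
hypothetical, and the rest of the layout is simplified from the current
pipeline.yaml format:

    -
        name: cpu_pipeline
        interval: 600
        meters:
            - "cpu"
            - "cpu_util"
        resources: []
        projects:
            - 0a1b2c3d4e5f6a7b8c9d0e1f2a3b4c5d   # only collect for these tenants
        publishers:
            - rpc://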

-- 
Julien Danjou
;; Free Software hacker ; independent consultant
;; http://julien.danjou.info


signature.asc
Description: PGP signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer] Dynamic Meters in Ceilometer

2014-01-08 Thread Ildikó Váncsa
Hi,

(You didn't Cc the list, not sure if it was on purpose. I'm not adding it back 
to not break any confidentiality, but feel free to do so.)

Sorry that was just a mistake.

> > The point is to configure the data collection configuration for the 
> > currently existing meters differently for tenants. It is not just 
> > about enabling or disabling of meters. It could be used to change the 
> > interval settings of meters, like tenantA would like to receive 
> > cpu_util samples in every 10 seconds and tenantB would like to receive 
> > cpu_util in every 1 minute, but network.incoming.bytes in every 20 
> > seconds. As for disabling meters, for instance tenantA needs 
> > disk.read.requests and disk.write.requests, while tenantB doesn't.
>
> Ok, so this is really about something the _operator_ wants to do, not users. 
> I still don't think it belongs to an API, at least not specific to Ceilometer.

My idea was simply to provide the possibility of configuring the data 
collection in Ceilometer differently for different tenants; I didn't mean 
to link it to an API, at least not in the first place. It could be done by 
the operator as well, for instance if the polling frequency should be 
different per tenant.

Best Regards,
Ildiko

> --
> Julien Danjou
> -- Free Software hacker - independent consultant
> -- http://julien.danjou.info
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] olso.config error on running Devstack

2014-01-08 Thread Ben Nemec

On 2014-01-08 11:16, Sean Dague wrote:

On 01/08/2014 12:06 PM, Doug Hellmann wrote:


Yeah, that's what made me start thinking oslo.sphinx should be called
something else.

Sean, how strongly do you feel about not installing oslo.sphinx in
devstack? I see your point, I'm just looking for alternatives to the
hassle of renaming oslo.sphinx.


Doing the git thing is definitely not the right thing. But I guess I 
got

lost somewhere along the way about what the actual problem is. Can
someone write that up concisely? With all the things that have been
tried/failed, why certain things fail, etc.


The problem seems to be when we pip install -e oslo.config on the 
system, then pip install oslo.sphinx in a venv.  oslo.config is 
unavailable in the venv, apparently because the namespace package for 
o.s causes the egg-link for o.c to be ignored.  Pretty much every other 
combination I've tried (regular pip install of both, or pip install -e 
of both, regardless of where they are) works fine, but there seem to be 
other issues with all of the other options we've explored so far.


We can't remove the pip install -e of oslo.config because it has to be 
used for gating, and we can't pip install -e oslo.sphinx because it's 
not a runtime dep so it doesn't belong in the gate.  Changing the 
toplevel package for oslo.sphinx was also mentioned, but has obvious 
drawbacks too.


I think that about covers what I know so far.
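
A rough reproduction of that failure mode, for anyone who wants to poke at
it. The commands are illustrative, and the venv is assumed to be created with
system site-packages so that the editable install would normally be visible:

    pip install -e ./oslo.config            # system-wide editable install (egg-link)
    virtualenv --system-site-packages venv
    . venv/bin/activate
    pip install oslo.sphinx                 # regular install inside the venv
    python -c 'import oslo.config'          # fails: the venv's oslo namespace
                                            # package hides the egg-linked oslo.config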

-Ben

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer] Dynamic Meters in Ceilometer

2014-01-08 Thread Ildikó Váncsa
Hi Doug,

Answers inline again.

Best Regards,
Ildiko

On Wed, Jan 8, 2014 at 3:16 AM, Ildikó Váncsa 
mailto:ildiko.van...@ericsson.com>> wrote:
Hi,

I've started to work on the idea of supporting a kind of tenant/project based 
configuration for Ceilometer. Unfortunately I haven't reached the point of 
having a blueprint that could be registered until now. I do not have a deep 
knowledge about the collector and compute agent services, but this feature 
would require some deep changes for sure. Currently there are pipelines for 
data collection and transformation, where the counters can be specified, about 
which data should be collected and also the time interval for data collection 
and so on. These pipelines can be configured now globally in the pipeline.yaml 
file, which is stored right next to the Ceilometer configuration files.

Yes, the data collection was designed to be configured and controlled by the 
deployer, not the tenant. What benefits do we gain by giving that control to 
the tenant?

ildikov: Sorry, my explanation was not clear. I meant the configuration of 
data collection for projects, which was mentioned by Tim Bell in a previous 
email. This would mean that the project administrator is able to create a data 
collection configuration for his/her own project, which will not affect the 
other projects' configuration. The tenant would be able to specify meters 
(enabled/disabled based on which ones are needed) for the given project, also 
with project-specific time intervals, etc.

OK, I think some of the confusion is terminology. Who is a "project 
administrator"? Is that someone with access to change ceilometer's 
configuration file directly? Someone with a particular role using the API? Or 
something else?

ildikov: As project administrator I meant a user with particular role, a user 
assigned to a tenant.




In my view, we could keep the dynamic meter configuration bp with considering 
to extend it to dynamic configuration of Ceilometer, not just the meters and we 
could have a separate bp for the project based configuration of meters.

Ceilometer uses oslo.config, just like all of the rest of OpenStack. How are 
the needs for dynamic configuration updates in ceilometer different from the 
other services?

ildikov: There are some parameters in the configuration file of Ceilometer, 
like log options and notification types, which it would be good to be able to 
configure dynamically. I just wanted to reflect that need. As I see it, 
there are two options here. The first one is to identify the group of 
dynamically modifiable parameters and move them to the API level. The other 
option could be to make some modifications in oslo.config too, so that other 
services could also use the benefits of dynamic configuration. For example, 
the log settings could be a good candidate: being able to change log levels 
without a service restart while debugging the system could be a useful 
feature for all of the OpenStack services.
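
As an illustration of the log-level case only, something like the following
stdlib-only pattern would allow changing verbosity without a restart; it is a
generic sketch (the file path is made up), not how ceilometer or oslo.config
is wired today:

    import logging
    import signal

    LOG = logging.getLogger(__name__)

    def reload_log_level(signum, frame):
        # /etc/myservice/loglevel is a hypothetical file containing e.g. "DEBUG"
        with open('/etc/myservice/loglevel') as f:
            level = f.read().strip().upper()
        logging.getLogger().setLevel(getattr(logging, level, logging.INFO))
        LOG.info('log level reloaded: %s', level)

    signal.signal(signal.SIGHUP, reload_log_level)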

I "misspoke" earlier. If we're talking about meters, those are actually defined 
by the pipeline file (not oslo.config). So if we do want that file re-read 
automatically, we can implement that within ceilometer itself, though I'm still 
reluctant to say we want to provide API access for modifying those settings. 
That's *really* not something we've designed the rest of the system to 
accommodate, so I don't know what side-effects we might introduce.

ildikov: In case of oslo.config, I meant the ceilometer.conf file in my answer 
above, not pipeline.yaml. As for the API part, I do not know the consequences 
of that implementation either, so now I'm kind of waiting for the outcome of 
this Dynamic Meters bp.

As far as the other configuration settings, we had the conversation about 
updating those through some sort of API early on, and decided that there are 
already lots of operational tools out there to manage changes to those files. I 
would need to see a list of which options people would want to have changed 
through an API to comment further.

ildikov: Yes, I agree that not all the parameters should be configured 
dynamically. It just popped into my mind, regarding dynamic configuration, 
that there could be a need to configure other configuration parameters, not 
just meters; that is why I mentioned it as something worth considering.

Doug



Doug



If it is OK with you, I will register the bp for these per-project tenant 
settings with some details, once I'm finished with the initial design of how 
this feature could work.

Best Regards,
Ildiko

-Original Message-
From: Neal, Phil [mailto:phil.n...@hp.com]
Sent: Tuesday, January 07, 2014 11:50 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Ceilometer] Dynamic Meters in Ceilometer

For multi-node deployments, implementing something like inotify would allow 
administrators to push configuration changes out to multiple targets usi

Re: [openstack-dev] [nova][neutron][ipv6]Hairpinning in libvirt, once more

2014-01-08 Thread Vishvananda Ishaya
The logic makes sense to me here. I’m including Evan Callicoat in this response 
in case he has any comments on the points you make below.

Vish
 
On Jan 7, 2014, at 4:57 AM, Ian Wells  wrote:

> See Sean Collins' review https://review.openstack.org/#/c/56381 which 
> disables hairpinning when Neutron is in use.  tl;dr - please upvote the 
> review.  Long form reasoning follows...
> 
> There's a solid logical reason for enabling hairpinning, but it only applies 
> to nova-network.  Hairpinning is used in nova-network so that packets from a 
> machine and destined for that same machine's floating IP address are returned 
> to it.  They then pass through the rewrite rules (within the libvirt filters 
> on the instance's tap interface) that do the static NAT mapping to translate 
> floating IP to fixed IP.
> 
> Whoever implemented this assumed that hairpinning in other situations is 
> harmless.  However, this same feature also prevents IPv6 from working - 
> returned neighbor discovery packets panic VMs into thinking they're using a 
> duplicate address on the network.  So we'd like to turn it off.  Accepting 
> that nova-network will change behaviour comprehensively if we just remove the 
> code, we've elected to turn it off only when Neutron is being used and leave 
> nova-network broken for ipv6.
> 
> Obviously, this presents an issue, because we're changing the way that 
> Openstack behaves in a user-visible way - hairpinning may not be necessary or 
> desirable for Neutron, but it's still detectable when it's on or off if you 
> try hard enough - so the review comments to date have been conservatively 
> suggesting that we avoid the functional change as much as possible, and 
> there's a downvote to that end.  But having done more investigation I don't 
> think there's sufficient justification to keep the status quo.
> 
> We've also talked about leaving hairpinning off if and only if the Neutron 
> plugin explicitly says that it doesn't want to use hairpinning.  We can 
> certainly do this, and I've looked into it, but in practice it's not worth 
> the code and interface changes: 
> 
>  - Neutron (not 'some drivers' - this is consistent across all of them) does 
> NAT rewriting in the routers now, not on the ports, so hairpinning doesn't 
> serve its intended purpose; what it actually does is waste CPU and bandwidth 
> by handing the instance a copy of every outgoing packet it sends, and precious 
> little else.  The instance doesn't expect these packets and always ignores 
> them, but it receives them anyway.  It's a pointless no-op, though there 
> exists the theoretical possibility that someone is relying on it for their 
> application.
> - it's *only* libvirt that ever turns hairpinning on in the first place - 
> none of the other drivers do it
> - libvirt only turns it on sometimes - for hybrid VIFs it's enabled, and if 
> generic VIFs are configured with linuxbridge it's enabled, but for generic 
> VIFs with OVS the enable function fails silently (and, indeed, has been 
> designed to fail silently, it seems).  A sketch of the check being proposed 
> follows below.
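Roughly, the check being proposed looks like the following (a sketch only -
how "use_neutron" is detected and where this sits in the libvirt VIF driver
are simplified assumptions):

def enable_hairpin(vif_port_name, use_neutron):
    """Enable bridge-port hairpin mode only for nova-network."""
    if use_neutron:
        # Neutron does its floating-IP NAT in routers, so reflected
        # packets would serve no purpose for the instance.
        return
    path = '/sys/class/net/%s/brport/hairpin_mode' % vif_port_name
    with open(path, 'w') as f:
        f.write('1')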
> 
> Given these details, there seems little point in making the code more complex 
> to support a feature that isn't universal and isn't needed; better that we 
> just disable it for Neutron and be done.  So (and test failures aside) could 
> I ask that the core devs check and approve the patch review?
> -- 
> Ian.
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



signature.asc
Description: Message signed with OpenPGP using GPGMail
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Solum] Devstack gate is failing

2014-01-08 Thread Sean Dague
On 01/08/2014 11:40 AM, Noorul Islam Kamal Malmiyoda wrote:
> 
> On Jan 8, 2014 9:58 PM, "Georgy Okrokvertskhov"
> mailto:gokrokvertsk...@mirantis.com>> wrote:
>>
>> Hi,
>>
>> I do understand why there is push back on this patch. This patch is
> for an infrastructure project which serves multiple projects. Infra
> maintainers should not have to know the specifics of each project in
> detail. If this patch is a temporary solution, then who will be
> responsible for removing it? 
>>
> 
> I am not sure who is responsible for solum-related configurations in the
> infra project. I see that almost all of the infra config for the solum
> project is done by solum members. So I think any solum member can submit a
> patch to revert this once we have a permanent solution.
> 
>> If we need to start this gate, I propose to revert all patches which led
> to this inconsistent state and apply a workaround in the Solum repository,
> which is under the Solum team's full control and review. We need to open a
> bug in the Solum project to track this.
>>
> 
> The problematic patch [1] solves a specific problem. Do we have other
> ways to solve it?
> 
> Regards,
> Noorul
> 
> [1] https://review.openstack.org/#/c/64226

Why is test-requirements.txt getting installed in pre_test instead of
post_test? It in no way surprises me that installing test-requirements prior
to installing devstack itself causes issues. You can see that
command is literally the first thing in the console -
http://logs.openstack.org/66/62466/7/gate/gate-solum-devstack-dsvm/49bac35/console.html#_2014-01-08_13_46_15_161

It should be installed right before tests get run, which I assume is L34
of this file -
https://review.openstack.org/#/c/64226/3/modules/openstack_project/files/jenkins_job_builder/config/solum.yaml

Given that is where ./run_tests.sh is run.

-Sean

-- 
Sean Dague
Samsung Research America
s...@dague.net / sean.da...@samsung.com
http://dague.net



signature.asc
Description: OpenPGP digital signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Proposal for instance-level snapshots in Nova

2014-01-08 Thread Vishvananda Ishaya

On Jan 6, 2014, at 3:50 PM, Jon Bernard  wrote:

> Hello all,
> 
> I would like to propose instance-level snapshots as a feature for
> inclusion in Nova.  An initial draft of the more official proposal is
> here [1], blueprint is here [2].
> 
> In a nutshell, this feature will take the existing create-image
> functionality a few steps further by providing the ability to take
> a snapshot of a running instance that includes all of its attached
> volumes.  A coordinated snapshot of multiple volumes for backup
> purposes.  The snapshot operation should occur while the instance is in
> a paused and quiesced state so that each volume snapshot is both
> consistent within itself and with respect to its sibling snapshots.
> 
> I still have some open questions on a few topics:
> 
> * API changes, two different approaches come to mind:
> 
>  1. Nova already has a command `createImage` for creating an image of an
> existing instance.  This command could be extended to take an
> additional parameter `all-volumes` that signals the underlying code
> to capture all attached volumes in addition to the root volume.  The
> semantic here is important, `createImage` is used to create
> a template image stored in Glance for later reuse.  If the primary
> intent of this new feature is for backup only, then it may not be
> wise to overlap the two operations in this way.  On the other hand,
> this approach would introduce the least amount of change to the
> existing API, requiring only modification of an existing command
> instead of the addition of an entirely new one.
> 
>  2. If the feature's primary use is for backup purposes, then a new API
> call may be a better approach, and leave `createImage` untouched.
> This new call could be called `createBackup` and take as a parameter
> the name of the instance.  Although it introduces a new member to the
> API reference, it would allow this feature to evolve without
> introducing regressions in any existing calls.  These two calls could
> share code at some point in the future.

You’ve mentioned “If the feature’s use case is backup” a couple of times
without specifying the answer. I think this is important to the above
question. Also relevant is how the snapshot is stored and potentially
restored.

As you’ve defined the feature so far, it seems like most of it could
be implemented client side:

* pause the instance
* snapshot the instance
* snapshot any attached volumes

The only thing missing in this scenario is snapshotting any ephemeral
drives. There are workarounds for this such as:
 * use flavor with no ephemeral storage
 * boot from volume

It is also worth mentioning that snapshotting a boot from volume instance
will actually do most of this for you (everything but pausing the instance)
and additionally give you an image which when booted will lead to a clone
of all of the snapshotted volumes.
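To make the client-side approach concrete, here is a rough sketch using
python-novaclient and python-cinderclient (client construction, error
handling, and waiting for the snapshots to complete are omitted; the names
are illustrative only):

def snapshot_instance(nova, cinder, server_id, volume_ids):
    """Pause an instance, snapshot its root disk and the given attached
    volumes, then unpause it."""
    server = nova.servers.get(server_id)
    nova.servers.pause(server)
    try:
        # Root disk -> Glance image
        nova.servers.create_image(server, 'backup-%s' % server_id)
        # Attached volumes -> Cinder snapshots; force=True because the
        # volumes are still attached to the (paused) instance.
        for vol_id in volume_ids:
            cinder.volume_snapshots.create(vol_id, force=True)
    finally:
        nova.servers.unpause(server)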

So unless there is some additional feature regarding storing or restoring
the backup, I only see one potential area for improvement inside of nova:
Modifying the snapshot command to allow for snapshotting of ephemeral
drives.

If this is an important feature, rather than an all in one command, I
suggest an extension to createImage which would allow you to specify the
drive you wish to snapshot. If you could specify drive: vdb in the snapshot
command it would allow you to snapshot all the components individually.

Vish

> 
> * Existing libvirt support:
> 
>To initially support consistent-across-multiple-volumes snapshots,
>we must be able to ask libvirt for a snapshot of an already paused
>guest.  I don't believe such a call is currently supported, so
>changes to libvirt may be a prerequisite for this feature.
> 
> Any contribution, comments, and pieces of advice are much appreciated.
> 
> [1]: https://wiki.openstack.org/wiki/Nova/InstanceLevelSnapshots
> [2]: https://blueprints.launchpad.net/nova/+spec/instance-level-snapshots
> 
> -- 
> Jon
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



signature.asc
Description: Message signed with OpenPGP using GPGMail
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Installing from packages in tripleo-image-elements

2014-01-08 Thread Fox, Kevin M
Let me give you a more concrete example, since you still think one size fits 
all here.

I am using OpenStack on my home server now. In the past, I had one machine with 
lots of services on it. At times, I would update one service and during the 
update process, a different service would break.

The last round of hardware purchasing got me an 8-core desktop processor with 16 
gigs of RAM - enough to give every service I have its own core and 2 gigs of 
RAM. So I decided to run OpenStack on the server to manage the service VMs.

The base server shares out my data with NFS, and the VMs then re-export it in 
various ways: Samba, DLNA to my PS3, etc.

Now, I could create a golden image for each service type with everything all 
set up and good to go, plus infrastructure to constantly build updated ones.

But in this case, grabbing a Fedora or Ubuntu cloud image and starting up the 
service with Heat and a couple of lines of cloud-init telling it to install 
just the package for the one service I need saves a ton of effort and space. 
The complexity is totally on the distro folks and not me. Very simple to 
maintain.

I can get almost the stability of the golden image simply by pausing the 
working service VM, spawning a new one, and, only if it's sane, switching to it 
and deleting the old one. In fact, Heat is working towards (if it hasn't already 
done so) having Heat itself do this process for you.

I'm all for golden images as a tool. We use them a lot. Like all tools though, 
there isn't one "works for all cases best" tool.

I hope this use case helps.

Thanks,
Kevin


From: Clint Byrum [cl...@fewbar.com]
Sent: Wednesday, January 08, 2014 8:36 AM
To: openstack-dev
Subject: Re: [openstack-dev] [TripleO] Installing from packages in  
tripleo-image-elements

Excerpts from Derek Higgins's message of 2014-01-08 02:11:09 -0800:
> On 08/01/14 05:07, Clint Byrum wrote:
> > Excerpts from Fox, Kevin M's message of 2014-01-07 16:27:35 -0800:
> >> Another piece to the conversation I think is update philosophy. If
> >> you are always going to require a new image and no customization after
> >> build ever, ever, the messiness that source usually cause in the file
> >> system image really doesn't matter. The package system allows you to
> >> easily update, add, and remove packages bits at runtime cleanly. In
> >> our experimenting with OpenStack, its becoming hard to determine
> >> which philosophy is better. Golden Images for some things make a lot
> >> of sense. For other random services, the maintenance of the Golden
> >> Image seems to be too much to bother with and just installing a few
> >> packages after image start is preferable. I think both approaches are
> >> valuable. This may not directly relate to what is best for Triple-O
> >> elements, but since we are talking philosophy anyway...
> >>
> >
> > The golden image approach should be identical to the package approach if
> > you are doing any kind of testing work-flow.
> >
> > "Just install a few packages" is how you end up with, as Robert said,
> > "snowflakes". The approach we're taking with diskimage-builder should
> > result in that image building extremely rapidly, even if you compiled
> > those things from source.
>
> This is the part of your argument I don't understand: creating images
> with packages is no more likely to result in snowflakes than creating
> images from sources in git.
>
> You would build an image using packages and at the end of the build
> process you can lock the package versions. Regardless of how the image
> is built you can consider it a golden image. This image is then deployed
> to your hosts and not changed.
>
> We would still be using diskimage-builder the main difference to the
> whole process is we would end up with a image that has more packages
> installed and no virtual envs.
>

I'm not saying building images from packages will encourage
snowflakes. I'm saying installing and updating on systems using packages
encourages snowflakes. Kevin was suggesting that the image workflow
wouldn't fit for everything, and thus was opening up the "just install
a few packages on a system" can of worms. I'm saying to Kevin, don't
do that, just make your image work-flow tighter, and suggesting it is
worth it to do that to avoid having snowflakes.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] olso.config error on running Devstack

2014-01-08 Thread Sean Dague
On 01/08/2014 12:06 PM, Doug Hellmann wrote:

> Yeah, that's what made me start thinking oslo.sphinx should be called
> something else. 
> 
> Sean, how strongly do you feel about not installing oslo.sphinx in
> devstack? I see your point, I'm just looking for alternatives to the
> hassle of renaming oslo.sphinx.

Doing the git thing is definitely not the right thing. But I guess I got
lost somewhere along the way about what the actual problem is. Can
someone write that up concisely? With all the things that have been
tried/failed, why certain things fail, etc.

-Sean

-- 
Sean Dague
Samsung Research America
s...@dague.net / sean.da...@samsung.com
http://dague.net



signature.asc
Description: OpenPGP digital signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Installing from packages in tripleo-image-elements

2014-01-08 Thread Clint Byrum
Excerpts from Jan Provaznik's message of 2014-01-08 03:00:19 -0800:
> On 01/07/2014 09:01 PM, James Slagle wrote:
> > Hi,
> >
> > I'd like to discuss some possible ways we could install the OpenStack
> > components from packages in tripleo-image-elements.  As most folks are
> > probably aware, there is a "fork" of tripleo-image-elements called
> > tripleo-puppet-elements which does install using packages, but it does
> > so using Puppet to do the installation and for managing the
> > configuration of the installed components.  I'd like to kind of set
> > that aside for a moment and just discuss how we might support
> > installing from packages using tripleo-image-elements directly and not
> > using Puppet.
> >
> > One idea would be to add support for a new type (or likely 2 new
> > types: rpm and dpkg) to the source-repositories element.
> > source-repositories already knows about the git, tar, and file types,
> > so it seems somewhat natural to have additional types for rpm and
> > dpkg.
> >
> > A complication with that approach is that the existing elements assume
> > they're setting up everything from source.  So, if we take a look at
> > the nova element, and specifically install.d/74-nova, that script does
> > things like installing a nova service, adding a nova user, creating needed
> > directories, etc.  All of that wouldn't need to be done if we were
> > installing from rpm or dpkg, b/c presumably the package would take
> > care of all that.
> >
> > We could fix that by making the install.d scripts only run if you're
> > installing a component from source.  In that sense, it might make
> > sense to add a new hook, source-install.d and only run those scripts
> > if the type is a source type in the source-repositories configuration.
> >   We could then have a package-install.d to handle the installation
> > from the packages type.   The install.d hook could still exist to do
> > things that might be common to the 2 methods.
> >
> > Thoughts on that approach or other ideas?
> >
> > I'm currently working on a patchset I can submit to help prove it out.
> >   But, I'd like to start discussion on the approach now to see if there
> > are other ideas or major opposition to that approach.
> >
> 
> Hi James,
> I think it would be really nice to be able to install openstack+deps from 
> packages, and many users (and cloud providers) would appreciate it.
> 
> Among other things, with packages provided by a distro you get more 
> stability compared to installing openstack from git repos and fetching the 
> newest possible dependencies from pypi.
> 
> In a real deployment setup I don't want to pull in newer packages and 
> dependencies than necessary when building images - to take an example 
> from the last few days, I wouldn't have had to bother with the newer pip 
> package which breaks image building.
> 

Right, from this perspective, you want to run OpenStack stable releases.
That should be fairly simple now by building images using the appropriate
environment variables.

However, we don't test that so it is likely to break as Icehouse diverges
from Havana. So I think in addition to package-enabling, those who want
to see TripleO work for stable releases should probably start looking at
creating stable branches of t-i-e and t-h-t to build images and templates
from starting at the icehouse time-frame.

So given that I'd suggest that packages take a back seat to making
TripleO part of the integrated release of OpenStack. Otherwise we'll
just have stable releases for the distros who have packages that work
with TripleO instead of for all distros.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Vmware]Bad Performance when creating a new VM

2014-01-08 Thread Rick Jones

On 01/07/2014 06:30 PM, Ray Sun wrote:

Stackers,
I tried to create a new VM using the VMwareVCDriver, but I found it's very
slow; for example, a 7 GB Windows image took 3 hours.

Then I tried to use curl to upload a iso to vcenter directly.

curl -H "Expect:" -v --insecure --upload-file
windows2012_server_cn_x64.iso
"https://administrator:root123.@200.21.0.99/folder/iso/windows2012_server_cn_x64.iso?dcPath=dataCenter&dsName=datastore2";

The average speed is 0.8 MB/s.

Finally, I tried to use vSpere web client to upload it, it's only 250 KB/s.

I am not sure if there are any special configuration options for the vCenter
web interface. Please help.


I'm not fully versed in the plumbing, but while you are pushing via curl 
to 200.21.0.99 you might check the netstat statistics at the sending 
side, say once a minute, and see what the TCP retransmission rate 
happens to be.  If 200.21.0.99 has to push the bits to somewhere else 
you should follow that trail back to the point of origin, checking 
statistics on each node as you go.
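One way to sample that on a Linux sender once a minute, without having to 
parse netstat output, is to read the kernel's TCP counters directly (a quick 
sketch):

import time

def tcp_counters():
    """Return (segments_sent, segments_retransmitted) from /proc/net/snmp."""
    with open('/proc/net/snmp') as f:
        header, values = [line.split() for line in f if line.startswith('Tcp:')]
    stats = dict(zip(header[1:], (int(v) for v in values[1:])))
    return stats['OutSegs'], stats['RetransSegs']

prev_out, prev_re = tcp_counters()
while True:
    time.sleep(60)
    out, retrans = tcp_counters()
    sent, resent = out - prev_out, retrans - prev_re
    rate = 100.0 * resent / sent if sent else 0.0
    print('TCP retransmission rate over the last minute: %.2f%%' % rate)
    prev_out, prev_re = out, retrans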


You could, additionally, try running the likes of netperf (or iperf, but 
I have a natural inclination to suggest netperf...) between the same 
pairs of systems.  If netperf gets significantly better performance then 
you (probably) have an issue at the application layer rather than in the 
networking.


Depending on how things go with those, it may be desirable to get a 
packet trace of the upload via the likes of tcpdump.  It will be very 
much desirable to start the packet trace before the upload so you can 
capture the TCP connection establishment packets (aka the TCP 
SYNchronize segments) as those contain some important pieces of 
information about the capabilities of the connection.


rick jones


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Solum][Pecan][Security] Pecan SecureController vs. Nova policy

2014-01-08 Thread Kurt Griffiths
Yeah, that could work. The main thing is to try and keep policy control in one 
place if you can rather than sprinkling it all over the place.

From: Georgy Okrokvertskhov 
mailto:gokrokvertsk...@mirantis.com>>
Reply-To: OpenStack Dev 
mailto:openstack-dev@lists.openstack.org>>
Date: Wednesday, January 8, 2014 at 10:41 AM
To: OpenStack Dev 
mailto:openstack-dev@lists.openstack.org>>
Subject: Re: [openstack-dev] [Solum][Pecan][Security] Pecan SecureController 
vs. Nova policy

Hi Kurt,

As for WSGI middleware, I am thinking about Pecan hooks, which can be added 
before the actual controller call. Here is an example of how we added a hook 
for keystone information collection: 
https://review.openstack.org/#/c/64458/4/solum/api/auth.py

What do you think, will this approach with Pecan hooks work?

Thanks
Georgy


On Tue, Jan 7, 2014 at 2:25 PM, Kurt Griffiths 
mailto:kurt.griffi...@rackspace.com>> wrote:
You might also consider doing this in WSGI middleware:

Pros:

  *   Consolidates policy code in one place, making it easier to audit and 
maintain
  *   Simple to turn policy on/off – just don’t insert the middleware when off!
  *   Does not preclude the use of oslo.policy for rule checking
  *   Blocks unauthorized requests before they have a chance to touch the web 
framework or app. This reduces your attack surface and can improve performance 
(since the web framework has yet to parse the request).

Cons:

  *   Doesn't work for policies that require knowledge that isn’t available 
this early in the pipeline (without having to duplicate a lot of code)
  *   You have to parse the WSGI environ dict yourself (this may not be a big 
deal, depending on how much knowledge you need to glean in order to enforce the 
policy).
  *   You have to keep your HTTP path matching in sync with your route 
definitions in the code. If you have full test coverage, you will know when you 
get out of sync. That being said, API routes tend to be quite stable in 
relation to other parts of the code implementation once you have settled on 
your API spec.

I’m sure there are other pros and cons I missed, but you can make your own best 
judgement whether this option makes sense in Solum’s case.
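For concreteness, a minimal sketch of such a middleware (the example path,
the role names, and the reliance on the X-Roles header set by the keystone
auth_token middleware are assumptions; a real implementation would likely
delegate the check to the oslo policy Enforcer rather than a simple dict):

import json

class PolicyMiddleware(object):
    """Reject requests whose (method, path) the caller's roles do not allow."""

    def __init__(self, app, rules):
        self.app = app
        # e.g. {('POST', '/v1/assemblies'): {'admin', 'assembly_creator'}}
        self.rules = rules

    def __call__(self, environ, start_response):
        key = (environ['REQUEST_METHOD'], environ.get('PATH_INFO', ''))
        allowed = self.rules.get(key)
        if allowed is not None:
            roles = set(r.strip() for r in
                        environ.get('HTTP_X_ROLES', '').split(',') if r.strip())
            if not roles & allowed:
                start_response('403 Forbidden',
                               [('Content-Type', 'application/json')])
                return [json.dumps({'error': 'forbidden'}).encode('utf-8')]
        return self.app(environ, start_response)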

From: Doug Hellmann 
mailto:doug.hellm...@dreamhost.com>>
Reply-To: OpenStack Dev 
mailto:openstack-dev@lists.openstack.org>>
Date: Tuesday, January 7, 2014 at 6:54 AM
To: OpenStack Dev 
mailto:openstack-dev@lists.openstack.org>>
Subject: Re: [openstack-dev] [Solum][Pecan][Security] Pecan SecureController 
vs. Nova policy




On Mon, Jan 6, 2014 at 6:26 PM, Georgy Okrokvertskhov 
mailto:gokrokvertsk...@mirantis.com>> wrote:
Hi Doug,

Thank you for pointing to this code. As I see it, you use the OpenStack policy 
framework rather than Pecan's security features. How do you implement 
fine-grained access control, for example users allowed to read only, writers, 
and admins? Can you block part of the API methods for a specific user, like 
access to create methods for a specific user role?

The policy enforcement isn't simple on/off switching in ceilometer, so we're 
using the policy framework calls in a couple of places within our API code 
(look through v2.py for examples). As a result, we didn't need to build much on 
top of the existing policy module to interface with pecan.

For your needs, it shouldn't be difficult to create a couple of decorators to 
combine with pecan's hook framework to enforce the policy, which might be less 
complex than trying to match the operating model of the policy system to 
pecan's security framework.
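As an illustration, such a decorator might look roughly like this (the
request context attribute and the bare role check are stand-ins; a real
version would evaluate the rules loaded from the policy JSON file):

import functools

import pecan

def enforce(required_role):
    """Abort with 403 unless the request context carries the required role."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            context = getattr(pecan.request, 'context', None) or {}
            if required_role not in context.get('roles', []):
                pecan.abort(403)
            return func(*args, **kwargs)
        return wrapper
    return decorator

# Usage sketch on a controller method:
#     @pecan.expose('json')
#     @enforce('admin')
#     def post(self, ...):
#         ...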

This is the sort of thing that should probably go through Oslo and be shared, 
so please consider contributing to the incubator when you have something 
working.

Doug



Thanks
Georgy


On Mon, Jan 6, 2014 at 2:45 PM, Doug Hellmann 
mailto:doug.hellm...@dreamhost.com>> wrote:



On Mon, Jan 6, 2014 at 2:56 PM, Georgy Okrokvertskhov 
mailto:gokrokvertsk...@mirantis.com>> wrote:
Hi,

In Solum project we will need to implement security and ACL for Solum API. 
Currently we use Pecan framework for API. Pecan has its own security model 
based on SecureController class. At the same time OpenStack widely uses policy 
mechanism which uses json files to control access to specific API methods.

I wonder if someone has any experience with implementing security and ACL stuff 
with using Pecan framework. What is the right way to provide security for API?

In ceilometer we are using the keystone middleware and the policy framework to 
manage arguments that constrain the queries handled by the storage layer.

http://git.openstack.org/cgit/openstack/ceilometer/tree/ceilometer/api/acl.py

and

http://git.openstack.org/cgit/openstack/ceilometer/tree/ceilometer/api/controllers/v2.py#n337

Doug



Thanks
Georgy

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___

Re: [openstack-dev] olso.config error on running Devstack

2014-01-08 Thread Doug Hellmann
On Wed, Jan 8, 2014 at 11:53 AM, Ben Nemec  wrote:

>  On 2014-01-08 10:50, Doug Hellmann wrote:
>
>
>
>
> On Wed, Jan 8, 2014 at 11:31 AM, Ben Nemec  wrote:
>
>>   On 2014-01-08 08:24, Doug Hellmann wrote:
>>
>>
>>
>>
>> On Tue, Jan 7, 2014 at 12:32 PM, Ben Nemec wrote:
>>
>>>  On 2014-01-07 07:16, Doug Hellmann wrote:
>>>
>>>
>>>
>>>
>>> On Tue, Jan 7, 2014 at 6:24 AM, Michael Kerrin wrote:
>>>
  I have been seeing this problem also.

 My problem is actually with oslo.sphinx. I ran sudo pip install -r
 test-requirements.txt in cinder so that I could run the tests there, which
 installed oslo.sphinx.

 Strange thing is that the oslo.sphinx installed a directory called oslo
 in /usr/local/lib/python2.7/dist-packages with no __init__.py file. With
 this package installed like so I get the same error you get with
 oslo.config.

>>>
>>>  The oslo libraries use python namespace packages, which manifest
>>> themselves as a directory in site-packages (or dist-packages) with
>>> sub-packages but no __init__.py(c). That way oslo.sphinx and oslo.config
>>> can be packaged separately, but still installed under the "oslo" directory
>>> and imported as oslo.sphinx and oslo.config.
>>>
>>> My guess is that installing oslo.sphinx globally (with sudo), set up 2
>>> copies of the namespace package (one in the global dist-packages and
>>> presumably one in the virtualenv being used for the tests).
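As a minimal illustration of the general mechanism (not the exact oslo
packaging), each distribution ships its own oslo/__init__.py containing only
the namespace declaration, so separately installed packages can share the
top-level name:

# oslo/__init__.py as shipped by each separately packaged library; the
# distribution also has to list "oslo" under namespace_packages in its
# packaging metadata.
__import__('pkg_resources').declare_namespace(__name__)

# After installing both distributions, the following imports work even
# though the code comes from different packages on disk:
#   from oslo.config import cfg
#   import oslo.sphinx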
>>>
>>>Actually I think it may be the opposite problem, at least where I'm
>>> currently running into this.  oslo.sphinx is only installed in the venv and
>>> it creates a namespace package there.  Then if you try to load oslo.config
>>> in the venv it looks in the namespace package, doesn't find it, and bails
>>> with a missing module error.
>>>
>>> I'm personally running into this in tempest - I can't even run pep8 out
>>> of the box because the sample config check fails due to missing
>>> oslo.config.  Here's what I'm seeing:
>>>
>>> In the tox venv:
>>> (pep8)[fedora@devstack site-packages]$ ls oslo*
>>> oslo.sphinx-1.1-py2.7-nspkg.pth
>>>
>>> oslo:
>>> sphinx
>>>
>>> oslo.sphinx-1.1-py2.7.egg-info:
>>> dependency_links.txt  namespace_packages.txt  PKG-INFO top_level.txt
>>> installed-files.txt   not-zip-safeSOURCES.txt
>>>
>>>
>>> And in the system site-packages:
>>> [fedora@devstack site-packages]$ ls oslo*
>>> oslo.config.egg-link  oslo.messaging.egg-link
>>>
>>>
>>> Since I don't actually care about oslo.sphinx in this case, I also found
>>> that deleting it from the venv fixes the problem, but obviously that's just
>>> a hacky workaround.  My initial thought is to install oslo.sphinx in
>>> devstack the same way as oslo.config and oslo.messaging, but I assume
>>> there's a reason we didn't do it that way in the first place so I'm not
>>> sure if that will work.
>>>
>>> So I don't know what the proper fix is, but I thought I'd share what
>>> I've found so far.  Also, I'm not sure if this even relates to the
>>> ceilometer issue since I wouldn't expect that to be running in a venv, but
>>> it may have a similar issue.
>>>
>>
>>  I wonder if the issue is actually that we're using "pip install -e" for
>> oslo.config and oslo.messaging (as evidenced by the .egg-link files). Do
>> things work properly if those packages are installed to the global
>> site-packages from PyPI instead? We don't want to change the way devstack
>> installs them, but it would give us another data point.
>>
>> Another solution is to have a list of dependencies needed for building
>> documentation, separate from the tests, since oslo.sphinx isn't needed for
>> the tests.
>>
>>
>>
>> It does work if I remove the pip install -e version of oslo.config and
>> reinstall from the pypi package, so this appears to be an issue with the
>> egg-links.
>>
>
>  You had already tested installing oslo.sphinx with pip install -e,
> right? That's probably the least-wrong answer. Either that or move
> oslo.sphinx to a different top level package to avoid conflicting with
> runtime code.
>
>
> Right.  This https://review.openstack.org/#/c/65336/ also fixed the
> problem for me, but according to Sean that's not something we should be
> doing in devstack either.
>

Yeah, that's what made me start thinking oslo.sphinx should be called
something else.

Sean, how strongly do you feel about not installing oslo.sphinx in
devstack? I see your point, I'm just looking for alternatives to the hassle
of renaming oslo.sphinx.

Doug



> -Ben
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] olso.config error on running Devstack

2014-01-08 Thread Ben Nemec
 

On 2014-01-08 10:50, Doug Hellmann wrote: 

> On Wed, Jan 8, 2014 at 11:31 AM, Ben Nemec  wrote:
> 
> On 2014-01-08 08:24, Doug Hellmann wrote: 
> 
> On Tue, Jan 7, 2014 at 12:32 PM, Ben Nemec  wrote:
> 
> On 2014-01-07 07:16, Doug Hellmann wrote: 
> 
> On Tue, Jan 7, 2014 at 6:24 AM, Michael Kerrin  wrote:
> 
> I have been seeing this problem also. 
> 
> My problem is actually with oslo.sphinx. I ran sudo pip install -r 
> test-requirements.txt in cinder so that I could run the tests there, which 
> installed oslo.sphinx. 
> 
> Strange thing is that the oslo.sphinx installed a directory called oslo in 
> /usr/local/lib/python2.7/dist-packages with no __init__.py file. With this 
> package installed like so I get the same error you get with oslo.config. 
> 
> The oslo libraries use python namespace packages, which manifest themselves 
> as a directory in site-packages (or dist-packages) with sub-packages but no 
> __init__.py(c). That way oslo.sphinx and oslo.config can be packaged 
> separately, but still installed under the "oslo" directory and imported as 
> oslo.sphinx and oslo.config. 
> 
> My guess is that installing oslo.sphinx globally (with sudo), set up 2 copies 
> of the namespace package (one in the global dist-packages and presumably one 
> in the virtualenv being used for the tests).

Actually I think it may be the opposite problem, at least where I'm
currently running into this. oslo.sphinx is only installed in the venv
and it creates a namespace package there. Then if you try to load
oslo.config in the venv it looks in the namespace package, doesn't find
it, and bails with a missing module error. 

I'm personally running into this in tempest - I can't even run pep8 out
of the box because the sample config check fails due to missing
oslo.config. Here's what I'm seeing: 

In the tox venv: 
(pep8)[fedora@devstack site-packages]$ ls oslo*
oslo.sphinx-1.1-py2.7-nspkg.pth

oslo:
sphinx

oslo.sphinx-1.1-py2.7.egg-info:
dependency_links.txt namespace_packages.txt PKG-INFO top_level.txt
 installed-files.txt not-zip-safe SOURCES.txt 

And in the system site-packages: 
[fedora@devstack site-packages]$ ls oslo*
oslo.config.egg-link oslo.messaging.egg-link 

Since I don't actually care about oslo.sphinx in this case, I also found
that deleting it from the venv fixes the problem, but obviously that's
just a hacky workaround. My initial thought is to install oslo.sphinx in
devstack the same way as oslo.config and oslo.messaging, but I assume
there's a reason we didn't do it that way in the first place so I'm not
sure if that will work. 

So I don't know what the proper fix is, but I thought I'd share what
I've found so far. Also, I'm not sure if this even relates to the
ceilometer issue since I wouldn't expect that to be running in a venv,
but it may have a similar issue. 

I wonder if the issue is actually that we're using "pip install -e" for
oslo.config and oslo.messaging (as evidenced by the .egg-link files). Do
things work properly if those packages are installed to the global
site-packages from PyPI instead? We don't want to change the way
devstack installs them, but it would give us another data point. 

Another solution is to have a list of dependencies needed for building
documentation, separate from the tests, since oslo.sphinx isn't needed
for the tests. 

It does work if I remove the pip install -e version of oslo.config and
reinstall from the pypi package, so this appears to be an issue with the
egg-links. 

You had already tested installing oslo.sphinx with pip install -e,
right? That's probably the least-wrong answer. Either that or move
oslo.sphinx to a different top level package to avoid conflicting with
runtime code. 

Right. This https://review.openstack.org/#/c/65336/ also fixed the
problem for me, but according to Sean that's not something we should be
doing in devstack either. 

-Ben 
 ___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Installing from packages in tripleo-image-elements

2014-01-08 Thread Clint Byrum
Excerpts from Jay Dobies's message of 2014-01-08 08:09:51 -0800:
> There were so many places in this thread where I wanted to jump in as 
> I caught up that it makes sense to just summarize things in one place 
> instead of a half dozen quoted replies.
> 
> I agree with the sentiments about flexibility. Regardless of my personal 
> preference on source v. packages, it's been my experience that the 
> general mindset of production deployment is that new ideas move slowly. 
> Admins are set in their ways and policies are in place on how things are 
> consumed.
> 
> Maybe the newness of all things cloud-related and image-based management 
> for scale is a good time to shift the mentality out of packages (again, 
> I'm not suggesting whether or not it should be shifted). But I worry 
> about adoption if we don't provide an option for people to use blessed 
> distro packages, either because of company policy or years of habit and 
> bias. If done correctly, there's no difference between a package and a 
> particular tag in a source repository, but there is a psychological 
> component there that I think we need to account for, assuming someone is 
> willing to bite off the implementation costs (which is sounds like there 
> is).
> 

Thanks for your thoughts Jay. I agree, what we're doing is kind of weird
sounding. Not everybody will be on-board with their OpenStack cloud being
wildly different from their existing systems. We definitely need to do
work to make it easy for them to get on the new train of thinking one
step at a time. Just having an OpenStack cloud will do a lot for any
org that has none.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Installing from packages in tripleo-image-elements

2014-01-08 Thread Clint Byrum
Excerpts from James Slagle's message of 2014-01-08 07:03:39 -0800:
> On Tue, Jan 7, 2014 at 11:20 PM, Robert Collins
>  wrote:
> > On 8 January 2014 12:18, James Slagle  wrote:
> >> Sure, the crux of the problem was likely that versions in the distro
> >> were too old and they needed to be updated.  But unless we take on
> >> building the whole OS from source/git/whatever every time, we're
> >> always going to have that issue.  So, an additional benefit of
> >> packages is that you can install a known good version of an OpenStack
> >> component that is known to work with the versions of dependent
> >> software you already have installed.
> >
> > The problem is that OpenStack is building against newer stuff than is
> > in distros, so folk building on a packaging toolchain are going to
> > often be in catchup mode - I think we need to anticipate package based
> > environments running against releases rather than CD.
> 
> I just don't see anyone not building on a packaging toolchain, given
> that we're all running the distro of our choice and pip/virtualenv/etc
> are installed from distro packages.  Trying to isolate the building of
> components with pip installed virtualenvs was still a problem.  Short
> of uninstalling the build tools packages from the cloud image and then
> wget'ing the pip tarball, I don't think there would have been a good
> way around this particular problem.  Which, that approach may
> certainly make some sense for a CD scenario.
> 

I will definitely concede that we find problems at a high rate during
image builds, and that we would not if we just waited for others to solve
those problems. However, when we do solve those problems, we solve them
for everyone downstream from us. That is one reason it is so desirable
to keep our work in TripleO as far upstream as possible. Package work is
inherently downstream.

Also it is worth noting that problems at image build time are much simpler
to handle, because they happen on a single machine generally. That is
one reason I down play those issues. For anyone not interested in running
CD, we have the release process to handle such problems and they should
_never_ see any of these issues, whether running from packages or on
stable branches in the git repos.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] olso.config error on running Devstack

2014-01-08 Thread Doug Hellmann
On Wed, Jan 8, 2014 at 11:31 AM, Ben Nemec  wrote:

>  On 2014-01-08 08:24, Doug Hellmann wrote:
>
>
>
>
> On Tue, Jan 7, 2014 at 12:32 PM, Ben Nemec  wrote:
>
>>  On 2014-01-07 07:16, Doug Hellmann wrote:
>>
>>
>>
>>
>> On Tue, Jan 7, 2014 at 6:24 AM, Michael Kerrin wrote:
>>
>>>  I have been seeing this problem also.
>>>
>>> My problem is actually with oslo.sphinx. I ran sudo pip install -r
>>> test-requirements.txt in cinder so that I could run the tests there, which
>>> installed oslo.sphinx.
>>>
>>> Strange thing is that the oslo.sphinx installed a directory called oslo
>>> in /usr/local/lib/python2.7/dist-packages with no __init__.py file. With
>>> this package installed like so I get the same error you get with
>>> oslo.config.
>>>
>>
>>  The oslo libraries use python namespace packages, which manifest
>> themselves as a directory in site-packages (or dist-packages) with
>> sub-packages but no __init__.py(c). That way oslo.sphinx and oslo.config
>> can be packaged separately, but still installed under the "oslo" directory
>> and imported as oslo.sphinx and oslo.config.
>>
>> My guess is that installing oslo.sphinx globally (with sudo), set up 2
>> copies of the namespace package (one in the global dist-packages and
>> presumably one in the virtualenv being used for the tests).
>>
>>Actually I think it may be the opposite problem, at least where I'm
>> currently running into this.  oslo.sphinx is only installed in the venv and
>> it creates a namespace package there.  Then if you try to load oslo.config
>> in the venv it looks in the namespace package, doesn't find it, and bails
>> with a missing module error.
>>
>> I'm personally running into this in tempest - I can't even run pep8 out
>> of the box because the sample config check fails due to missing
>> oslo.config.  Here's what I'm seeing:
>>
>> In the tox venv:
>> (pep8)[fedora@devstack site-packages]$ ls oslo*
>> oslo.sphinx-1.1-py2.7-nspkg.pth
>>
>> oslo:
>> sphinx
>>
>> oslo.sphinx-1.1-py2.7.egg-info:
>> dependency_links.txt  namespace_packages.txt  PKG-INFO top_level.txt
>> installed-files.txt   not-zip-safeSOURCES.txt
>>
>>
>> And in the system site-packages:
>> [fedora@devstack site-packages]$ ls oslo*
>> oslo.config.egg-link  oslo.messaging.egg-link
>>
>>
>> Since I don't actually care about oslo.sphinx in this case, I also found
>> that deleting it from the venv fixes the problem, but obviously that's just
>> a hacky workaround.  My initial thought is to install oslo.sphinx in
>> devstack the same way as oslo.config and oslo.messaging, but I assume
>> there's a reason we didn't do it that way in the first place so I'm not
>> sure if that will work.
>>
>> So I don't know what the proper fix is, but I thought I'd share what I've
>> found so far.  Also, I'm not sure if this even relates to the ceilometer
>> issue since I wouldn't expect that to be running in a venv, but it may have
>> a similar issue.
>>
>
>  I wonder if the issue is actually that we're using "pip install -e" for
> oslo.config and oslo.messaging (as evidenced by the .egg-link files). Do
> things work properly if those packages are installed to the global
> site-packages from PyPI instead? We don't want to change the way devstack
> installs them, but it would give us another data point.
>
> Another solution is to have a list of dependencies needed for building
> documentation, separate from the tests, since oslo.sphinx isn't needed for
> the tests.
>
>
>
> It does work if I remove the pip install -e version of oslo.config and
> reinstall from the pypi package, so this appears to be an issue with the
> egg-links.
>

You had already tested installing oslo.sphinx with pip install -e, right?
That's probably the least-wrong answer. Either that or move oslo.sphinx to
a different top level package to avoid conflicting with runtime code.

Doug



> -Ben
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] Common SSH

2014-01-08 Thread Doug Hellmann
On Wed, Jan 8, 2014 at 10:19 AM, Doug Hellmann
wrote:

>
>
>
> On Wed, Jan 8, 2014 at 9:34 AM, Sergey Skripnick 
> wrote:
>
>>
>>
>>
 I'd like to explore whether the paramiko team will accept this code (or
 something like it). This seems like a perfect opportunity for us to
 contribute
 upstream.

>>>
>>> +1
>>>
>>> The patch is not big and the code seems simple and reasonable enough
>>> to live within paramiko.
>>>
>>> Cheers,
>>> FF
>>>
>>>
>>>
>> I sent a pull request [0] but there are two things:
>>
>>  nobody knows when (and if) it will be merged
>>  it is still a bit low-level, unlike a patch in oslo
>>
>
> Let's give the paramkio devs a little time to review it.
>

I had a brief conversation with Jeff Forcier, and he likes the idea of
having some version of run() in paramiko. He will comment on the pull
request with some details about what his plans were, but I think we can
count on this going into a version of paramiko -- especially if we help.
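For reference, the kind of helper being discussed is roughly the following
(a sketch only, not the actual pull request):

import paramiko

def run(client, command, timeout=None):
    """Run a command over an existing SSHClient connection and return
    (exit_status, stdout, stderr)."""
    channel = client.get_transport().open_session()
    if timeout is not None:
        channel.settimeout(timeout)
    channel.exec_command(command)
    stdout = channel.makefile('rb').read()
    stderr = channel.makefile_stderr('rb').read()
    return channel.recv_exit_status(), stdout, stderr

# Usage sketch:
# client = paramiko.SSHClient()
# client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
# client.connect('host.example.com', username='stack')
# status, out, err = run(client, 'uname -a')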

Doug



>
>
>>
>> About spur: spur looks ok, but it is a bit complicated inside (it uses
>> separate threads for non-blocking stdin/stderr reading [1]) and I don't
>> know how it would work with eventlet.
>>
>
> That does sound like it might cause issues. What would we need to do to
> test it?
>
> Doug
>
>
>
>>
>> [0] https://github.com/paramiko/paramiko/pull/245
>> [1] https://github.com/mwilliamson/spur.py/blob/master/spur/io.py#L22
>>
>>
>> --
>> Regards,
>> Sergey Skripnick
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] Common SSH

2014-01-08 Thread Doug Hellmann
On Wed, Jan 8, 2014 at 10:43 AM, Eric Windisch  wrote:

>
>>
>>> About spur: spur looks ok, but it is a bit complicated inside (it uses
>>> separate threads for non-blocking stdin/stderr reading [1]) and I don't
>>> know how it would work with eventlet.
>>>
>>
>> That does sound like it might cause issues. What would we need to do to
>> test it?
>>
>
> Looking at the code, I don't expect it to be an issue. The monkey-patching
> will cause eventlet.spawn to be called for threading.Thread. The code looks
> eventlet-friendly enough on the surface. Error handling around file
> read/write could be affected, but it also looks fine.
>
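To make the monkey-patching point concrete, a tiny standalone example (not
taken from spur itself) - after eventlet.monkey_patch(), ordinary
threading.Thread objects are backed by green threads:

import eventlet
eventlet.monkey_patch()  # patches threading, time, socket, ...

import threading
import time

def worker(n):
    # With monkey-patching, this sleep cooperatively yields to other
    # green threads instead of blocking an OS thread.
    time.sleep(0.1)
    print('worker %d done' % n)

threads = [threading.Thread(target=worker, args=(i,)) for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()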

Thanks for that analysis Eric.

Is there any reason for us to prefer one approach over the other, then?

Doug



>
>
> --
> Regards,
> Eric Windisch
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] olso.config error on running Devstack

2014-01-08 Thread Ben Nemec
 

On 2014-01-08 08:24, Doug Hellmann wrote: 

> On Tue, Jan 7, 2014 at 12:32 PM, Ben Nemec  wrote:
> 
> On 2014-01-07 07:16, Doug Hellmann wrote: 
> 
> On Tue, Jan 7, 2014 at 6:24 AM, Michael Kerrin  wrote:
> 
> I have been seeing this problem also. 
> 
> My problem is actually with oslo.sphinx. I ran sudo pip install -r 
> test-requirements.txt in cinder so that I could run the tests there, which 
> installed oslo.sphinx. 
> 
> Strange thing is that the oslo.sphinx installed a directory called oslo in 
> /usr/local/lib/python2.7/dist-packages with no __init__.py file. With this 
> package installed like so I get the same error you get with oslo.config. 
> 
> The oslo libraries use python namespace packages, which manifest themselves 
> as a directory in site-packages (or dist-packages) with sub-packages but no 
> __init__.py(c). That way oslo.sphinx and oslo.config can be packaged 
> separately, but still installed under the "oslo" directory and imported as 
> oslo.sphinx and oslo.config. 
> 
> My guess is that installing oslo.sphinx globally (with sudo), set up 2 copies 
> of the namespace package (one in the global dist-packages and presumably one 
> in the virtualenv being used for the tests).

Actually I think it may be the opposite problem, at least where I'm
currently running into this. oslo.sphinx is only installed in the venv
and it creates a namespace package there. Then if you try to load
oslo.config in the venv it looks in the namespace package, doesn't find
it, and bails with a missing module error. 

I'm personally running into this in tempest - I can't even run pep8 out
of the box because the sample config check fails due to missing
oslo.config. Here's what I'm seeing: 

In the tox venv: 
(pep8)[fedora@devstack site-packages]$ ls oslo*
oslo.sphinx-1.1-py2.7-nspkg.pth

oslo:
sphinx

oslo.sphinx-1.1-py2.7.egg-info:
dependency_links.txt namespace_packages.txt PKG-INFO top_level.txt
 installed-files.txt not-zip-safe SOURCES.txt 

And in the system site-packages: 
[fedora@devstack site-packages]$ ls oslo*
oslo.config.egg-link oslo.messaging.egg-link 

Since I don't actually care about oslo.sphinx in this case, I also found
that deleting it from the venv fixes the problem, but obviously that's
just a hacky workaround. My initial thought is to install oslo.sphinx in
devstack the same way as oslo.config and oslo.messaging, but I assume
there's a reason we didn't do it that way in the first place so I'm not
sure if that will work. 

So I don't know what the proper fix is, but I thought I'd share what
I've found so far. Also, I'm not sure if this even relates to the
ceilometer issue since I wouldn't expect that to be running in a venv,
but it may have a similar issue. 

I wonder if the issue is actually that we're using "pip install -e" for
oslo.config and oslo.messaging (as evidenced by the .egg-link files). Do
things work properly if those packages are installed to the global
site-packages from PyPI instead? We don't want to change the way
devstack installs them, but it would give us another data point. 

Another solution is to have a list of dependencies needed for building
documentation, separate from the tests, since oslo.sphinx isn't needed
for the tests. 

It does work if I remove the pip install -e version of oslo.config and
reinstall from the pypi package, so this appears to be an issue with the
egg-links. 

-Ben 
 ___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack][Nova][cold migration] Why we need confirm resize after cold migration

2014-01-08 Thread Russell Bryant
On 01/08/2014 09:53 AM, John Garbutt wrote:
> On 8 January 2014 10:02, David Xie  wrote:
>> In nova/compute/api.py#2289, the resize function has a parameter named
>> flavor_id; if it is None, the operation is considered a cold migration. Thus,
>> nova should skip resize verification. However, it doesn't.
>>
>> Like Jay said, we should skip this step during cold migration. Does that make
>> sense?
> 
> Not sure.
> 
>> On Wed, Jan 8, 2014 at 5:52 PM, Jay Lau  wrote:
>>>
>>> Greetings,
>>>
>>> I have a question related to cold migration.
>>>
>>> Now in OpenStack nova, we support live migration, cold migration and
>>> resize.
>>>
>>> For live migration, we do not need to confirm after live migration
>>> finished.
>>>
>>> For resize, we need to confirm, as we want to give end user an opportunity
>>> to rollback.
>>>
>>> The problem is cold migration: because cold migration and resize share the
>>> same code path, once I submit a cold migration request and the cold
>>> migration finishes, the VM goes to the verify_resize state, and I need to
>>> confirm the resize. I am a bit confused by this: why do I need to verify a
>>> resize for a cold migration operation? Why not return the VM to its original
>>> state directly after cold migration?
> 
> I think the idea was allow users/admins to check everything went OK,
> and only delete the original VM when the have confirmed the move went
> OK.
> 
> I thought there was an auto_confirm setting. Maybe you want
> auto_confirm cold migrate, but not auto_confirm resize?

I suppose we could add an API parameter to auto-confirm these things.
That's probably a good compromise.

>>> Also, I think that we probably need to split compute.api.resize() into two
>>> APIs: one for resize and the other for cold migrations.
>>>
>>> 1) The VM state can be either ACTIVE or STOPPED for a resize operation
>>> 2) The VM state must be STOPPED for a cold migrate operation.
> 
> We just stop the VM, then perform the migration.
> I don't think we need to require its stopped first.
> Am I missing something?

Don't think so ... I think we should leave it as is.

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Solum][Pecan][Security] Pecan SecureController vs. Nova policy

2014-01-08 Thread Georgy Okrokvertskhov
Hi Kurt,

As for WSGI middleware, I am thinking about Pecan hooks, which can be added
before the actual controller call. Here is an example of how we added a hook
for keystone information collection:
https://review.openstack.org/#/c/64458/4/solum/api/auth.py
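Roughly, such a hook looks like the following (a simplified sketch; the
actual code in the review above may differ):

from pecan import hooks

class AuthInformationHook(hooks.PecanHook):
    """Collect the identity information set by the keystone auth_token
    middleware before the controller runs."""

    def before(self, state):
        headers = state.request.headers
        state.request.context = {
            'user_id': headers.get('X-User-Id'),
            'tenant_id': headers.get('X-Tenant-Id'),
            'roles': [r.strip()
                      for r in headers.get('X-Roles', '').split(',')
                      if r.strip()],
        }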

What do you think, will this approach with Pecan hooks work?

Thanks
Georgy


On Tue, Jan 7, 2014 at 2:25 PM, Kurt Griffiths  wrote:

>  You might also consider doing this in WSGI middleware:
>
>  Pros:
>
>- Consolidates policy code in one place, making it easier to audit
>and maintain
>- Simple to turn policy on/off – just don’t insert the middleware when
>off!
>- Does not preclude the use of oslo.policy for rule checking
>- Blocks unauthorized requests before they have a chance to touch the
>web framework or app. This reduces your attack surface and can improve
>performance   (since the web framework has yet to parse the request).
>
> Cons:
>
>- Doesn't work for policies that require knowledge that isn’t
>available this early in the pipeline (without having to duplicate a lot of
>code)
>- You have to parse the WSGI environ dict yourself (this may not be a
>big deal, depending on how much knowledge you need to glean in order to
>enforce the policy).
>- You have to keep your HTTP path matching in sync with your
>route definitions in the code. If you have full test coverage, you will
>know when you get out of sync. That being said, API routes tend to be quite
>stable in relation to other parts of the code implementation once you
>have settled on your API spec.
>
> I’m sure there are other pros and cons I missed, but you can make your own
> best judgement whether this option makes sense in Solum’s case.
>
>   From: Doug Hellmann 
> Reply-To: OpenStack Dev 
> Date: Tuesday, January 7, 2014 at 6:54 AM
> To: OpenStack Dev 
> Subject: Re: [openstack-dev] [Solum][Pecan][Security] Pecan
> SecureController vs. Nova policy
>
>
>
>
> On Mon, Jan 6, 2014 at 6:26 PM, Georgy Okrokvertskhov <
> gokrokvertsk...@mirantis.com> wrote:
>
>> Hi Doug,
>>
>>  Thank you for pointing to this code. As I see it, you use the OpenStack
>> policy framework rather than Pecan's security features. How do you implement
>> fine-grained access control, for example users allowed to read only, writers,
>> and admins? Can you block part of the API methods for a specific user, like
>> access to create methods for a specific user role?
>>
>
>  The policy enforcement isn't simple on/off switching in ceilometer, so
> we're using the policy framework calls in a couple of places within our API
> code (look through v2.py for examples). As a result, we didn't need to
> build much on top of the existing policy module to interface with pecan.
>
>  For your needs, it shouldn't be difficult to create a couple of
> decorators to combine with pecan's hook framework to enforce the policy,
> which might be less complex than trying to match the operating model of the
> policy system to pecan's security framework.
>
>  This is the sort of thing that should probably go through Oslo and be
> shared, so please consider contributing to the incubator when you have
> something working.
>
>  Doug
>
>
>
>>
>>  Thanks
>> Georgy
>>
>>
>> On Mon, Jan 6, 2014 at 2:45 PM, Doug Hellmann <
>> doug.hellm...@dreamhost.com> wrote:
>>
>>>
>>>
>>>
>>>  On Mon, Jan 6, 2014 at 2:56 PM, Georgy Okrokvertskhov <
>>> gokrokvertsk...@mirantis.com> wrote:
>>>
  Hi,

  In Solum project we will need to implement security and ACL for Solum
 API. Currently we use Pecan framework for API. Pecan has its own security
 model based on SecureController class. At the same time OpenStack widely
 uses policy mechanism which uses json files to control access to specific
 API methods.

  I wonder if someone has any experience with implementing security and
 ACL stuff with using Pecan framework. What is the right way to provide
 security for API?

>>>
>>>   In ceilometer we are using the keystone middleware and the policy
>>> framework to manage arguments that constrain the queries handled by the
>>> storage layer.
>>>
>>>
>>> http://git.openstack.org/cgit/openstack/ceilometer/tree/ceilometer/api/acl.py
>>>
>>>  and
>>>
>>>
>>> http://git.openstack.org/cgit/openstack/ceilometer/tree/ceilometer/api/controllers/v2.py#n337
>>>
>>>  Doug
>>>
>>>
>>>

  Thanks
  Georgy

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


>>>
>>> ___
>>> OpenStack-dev mailing list
>>> OpenStack-dev@lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>>
>>   --
>> Georgy Okrokvertskhov
>> Technical Program Manager,
>> Cloud and Infrastructure Services,
>> Mirantis
>> http://www.mirantis.com
>> Tel. +1 650 963 9828
>> Mob. +1 650 996 3

Re: [openstack-dev] [Solum] Devstack gate is failing

2014-01-08 Thread Noorul Islam Kamal Malmiyoda
On Jan 8, 2014 9:58 PM, "Georgy Okrokvertskhov" <
gokrokvertsk...@mirantis.com> wrote:
>
> Hi,
>
> I do understand why there is a push back for this patch. This patch is
for infrastructure project which works for multiple projects. Infra
maintainers should not know specifics of each project in details. If this
patch is a temporary solution then who will be responsible to remove it?
>

I am not sure who is responsible for Solum-related configuration in the infra
project. I see that almost all of the infra config for the Solum project is
done by Solum members, so I think any Solum member can submit a patch to
revert this once we have a permanent solution.

> If we need start this gate I propose to revert all patches which led to
this inconsistent state and apply workaround in Solum repository which is
under Solum team full control and review. We need to open a bug in Solum
project to track this.
>

The problematic patch [1] solves a specific problem. Do we have other ways
to solve it?

Regards,
Noorul

[1] https://review.openstack.org/#/c/64226

> Thanks
> Georgy
>
>
> On Wed, Jan 8, 2014 at 7:09 AM, Noorul Islam K M 
wrote:
>>
>> Anne Gentle  writes:
>>
>> > On Wed, Jan 8, 2014 at 8:26 AM, Noorul Islam Kamal Malmiyoda <
>> > noo...@noorul.com> wrote:
>> >
>> >>
>> >> On Jan 8, 2014 6:11 PM, "Sean Dague"  wrote:
>> >> >
>> >> > On 01/07/2014 11:27 PM, Noorul Islam Kamal Malmiyoda wrote:
>> >> > > On Wed, Jan 8, 2014 at 9:43 AM, Georgy Okrokvertskhov
>> >> > >  wrote:
>> >> > >> Should we rather revert patch to make gate working?
>> >> > >>
>> >> > >
>> >> > > I think it is always good to have test packages reside in
>> >> > > test-requirements.txt. So -1 on reverting that patch.
>> >> > >
>> >> > > Here [1] is a temporary solution.
>> >> > >
>> >> > > Regards,
>> >> > > Noorul
>> >> > >
>> >> > > [1] https://review.openstack.org/65414
>> >> >
>> >> > If Solum is trying to be on the road to being an OpenStack project,
why
>> > would it go out of its way to introduce an incompatibility in the way
way
>> >> > all the actual OpenStack packages work in the gate?
>> >> >
>> >> > Seems very silly to me, because you'll have to add oslo.sphinx back
into
>> >> > test-requirements.txt the second you want to be considered for
>> >> incubation.
>> >> >
>> >>
>> >> I am not sure why it seems silly to you. We are not anyhow removing
>> >> oslo.sphinx from the repository. We are just removing it before
installing
>> >> the packages from test-requirements.txt
>> >>
>> > in the devstack gate. How does that affect incubation? Am I missing
>> >> something?
>> >>
>> >
>> > Docs are a requirement, and contributor docs are required for applying
for
>> > incubation. [1] Typically these are built through Sphinx and
consistency is
>> > gained through oslo.sphinx, also eventually we can offer consistent
>> > extensions. So a perception that you're skipping docs would be a poor
>> > reflection on your incubation application. I don't think that's what's
>> > happening here, but I want to be sure you understand the consistency
and
>> > doc needs.
>> >
>> > See also
>> >
http://lists.openstack.org/pipermail/openstack-dev/2014-January/023582.htmlfor
>> > similar issues, we're trying to figure out the best solution. Stay
>> > tuned.
>> >
>>
>> I have seen that, also posted solum issue [1] there yesterday. I started
>> this thread to have consensus on making solum devstack gate non-voting
>> until the issue gets fixed. Also proposed a temporary solution with
>> which we can solve the issue for the time being. Since the gate is
>> failing for all the patches, it is affecting every patch.
>>
>> Regards,
>> Noorul
>>
>> [1]
http://lists.openstack.org/pipermail/openstack-dev/2014-January/023618.html
>> [2] https://review.openstack.org/65414
>>
>> >
>> >
>> > 1.
>> >
https://github.com/openstack/governance/blob/master/reference/incubation-integration-requirements
>> >
>> >> Regards,
>> >> Noorul
>> >>
>> >> > -Sean
>> >> >
>> >> > --
>> >> > Sean Dague
>> >> > Samsung Research America
>> >> > s...@dague.net / sean.da...@samsung.com
>> >> > http://dague.net
>> >> >
>> >> >
>> >> > ___
>> >> > OpenStack-dev mailing list
>> >> > OpenStack-dev@lists.openstack.org
>> >> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> >> >
>> >>
>> >>
>> >> ___
>> >> OpenStack-dev mailing list
>> >> OpenStack-dev@lists.openstack.org
>> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> >>
>> >>
>> > ___
>> > OpenStack-dev mailing list
>> > OpenStack-dev@lists.openstack.org
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
>
> --
> Georgy Okrokvertskhov
> Technical Program Manager,
> C

Re: [openstack-dev] [Ceilometer] Dynamic Meters in Ceilometer

2014-01-08 Thread Doug Hellmann
On Wed, Jan 8, 2014 at 11:16 AM, Ildikó Váncsa
wrote:

>  Hi Doug,
>
>
>
> See my answers inline.
>
>
>
> Best Regards,
>
> Ildiko
>
>
>
> *From:* Doug Hellmann [mailto:doug.hellm...@dreamhost.com]
> *Sent:* Wednesday, January 08, 2014 4:10 PM
>
> *To:* OpenStack Development Mailing List (not for usage questions)
> *Subject:* Re: [openstack-dev] [Ceilometer] Dynamic Meters in Ceilometer
>
>
>
>
>
>
>
> On Wed, Jan 8, 2014 at 3:16 AM, Ildikó Váncsa 
> wrote:
>
> Hi,
>
> I've started to work on the idea of supporting a kind of tenant/project
> based configuration for Ceilometer. Unfortunately I haven't reached the
> point of having a blueprint that could be registered until now. I do not
> have a deep knowledge about the collector and compute agent services, but
> this feature would require some deep changes for sure. Currently there are
> pipelines for data collection and transformation, where the counters can be
> specified, about which data should be collected and also the time interval
> for data collection and so on. These pipelines can be configured now
> globally in the pipeline.yaml file, which is stored right next to the
> Ceilometer configuration files.
>
>
>
> Yes, the data collection was designed to be configured and controlled by
> the deployer, not the tenant. What benefits do we gain by giving that
> control to the tenant?
>
>
>
> ildikov: Sorry, my explanation was not clear. I meant there the
> configuration of data collection for projects, what was mentioned by Tim
> Bell in a previous email. This would mean that the project administrator is
> able to create a data collection configuration for his/her own project,
> which will not affect the other project’s configuration. The tenant would
> be able to specify meters (enabled/disable based on which ones are needed)
> for the given project also with project specific time intervals, etc.
>

OK, I think some of the confusion is terminology. Who is a "project
administrator"? Is that someone with access to change ceilometer's
configuration file directly? Someone with a particular role using the API?
Or something else?



>
>
>
> In my view, we could keep the dynamic meter configuration bp with
> considering to extend it to dynamic configuration of Ceilometer, not just
> the meters and we could have a separate bp for the project based
> configuration of meters.
>
>
>
> Ceilometer uses oslo.config, just like all of the rest of OpenStack. How
> are the needs for dynamic configuration updates in ceilometer different
> from the other services?
>
>
>
> ildikov: There are some parameters in the configuration file of
> Ceilometer, like log options and notification types, which would be good to
> be able to configure them dynamically. I just wanted to reflect to that
> need. As I see, there are two options here. The first one is to identify
> the group of the dynamically modifiable parameters and move them to the API
> level. The other option could be to make some modifications in oslo.config
> too, so other services also could use the benefits of dynamic
> configuration. For example the log settings could be a good candidate, as
> for example the change of log levels, without service restart, in case
> debugging the system can be a useful feature for all of the OpenStack
> services.
>

I "misspoke" earlier. If we're talking about meters, those are actually
defined by the pipeline file (not oslo.config). So if we do want that file
re-read automatically, we can implement that within ceilometer itself,
though I'm still reluctant to say we want to provide API access for
modifying those settings. That's *really* not something we've designed the
rest of the system to accommodate, so I don't know what side-effects we
might introduce.
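(For illustration only: a naive mtime-based reload of the pipeline file inside
the service could look like the sketch below; rebuild_pipelines() is a
hypothetical callback, not existing ceilometer code.)

    import os
    import yaml

    class PipelineWatcher(object):
        """Naive sketch: reload pipeline definitions when the file changes."""

        def __init__(self, path='/etc/ceilometer/pipeline.yaml'):
            self.path = path
            self.pipelines = None
            self._mtime = None

        def maybe_reload(self):
            mtime = os.stat(self.path).st_mtime
            if mtime != self._mtime:
                with open(self.path) as f:
                    self.pipelines = yaml.safe_load(f)
                self._mtime = mtime
                return True
            return False

    # Called periodically from the agent's existing polling loop:
    # if watcher.maybe_reload():
    #     rebuild_pipelines(watcher.pipelines)  # hypothetical callback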

As far as the other configuration settings, we had the conversation about
updating those through some sort of API early on, and decided that there
are already lots of operational tools out there to manage changes to those
files. I would need to see a list of which options people would want to
have changed through an API to comment further.

Doug



>
>
> Doug
>
>
>
>
>
>
> If it is ok for you, I will register the bp for this per-project tenant
> settings with some details, when I'm finished with the initial design of
> how this feature could work.
>
> Best Regards,
> Ildiko
>
>
> -Original Message-
> From: Neal, Phil [mailto:phil.n...@hp.com]
> Sent: Tuesday, January 07, 2014 11:50 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [Ceilometer] Dynamic Meters in Ceilometer
>
> For multi-node deployments, implementing something like inotify would
> allow administrators to push configuration changes out to multiple targets
> using puppet/chef/etc. and have the daemons pick it up without restart.
> Thumbs up to that.
>
> As Tim Bell suggested, API-based enabling/disabling would allow users to
> update meters via script, but then there's the question of how to work out
> the glo

Re: [openstack-dev] [TripleO] Installing from packages in tripleo-image-elements

2014-01-08 Thread Clint Byrum
Excerpts from Derek Higgins's message of 2014-01-08 02:11:09 -0800:
> On 08/01/14 05:07, Clint Byrum wrote:
> > Excerpts from Fox, Kevin M's message of 2014-01-07 16:27:35 -0800:
> >> Another piece to the conversation I think is update philosophy. If
> >> you are always going to require a new image and no customization after
> >> build ever, ever, the messiness that source usually cause in the file
> >> system image really doesn't matter. The package system allows you to
> >> easily update, add, and remove packages bits at runtime cleanly. In
> >> our experimenting with OpenStack, its becoming hard to determine
> >> which philosophy is better. Golden Images for some things make a lot
> >> of sense. For other random services, the maintenance of the Golden
> >> Image seems to be too much to bother with and just installing a few
> >> packages after image start is preferable. I think both approaches are
> >> valuable. This may not directly relate to what is best for Triple-O
> >> elements, but since we are talking philosophy anyway...
> >>
> > 
> > The golden image approach should be identical to the package approach if
> > you are doing any kind of testing work-flow.
> > 
> > "Just install a few packages" is how you end up with, as Robert said,
> > "snowflakes". The approach we're taking with diskimage-builder should
> > result in that image building extremely rapidly, even if you compiled
> > those things from source.
> 
> This is the part of your argument I don't understand, creating images
> with packages is no more likely to result in snowflakes than creating
> images from sources in git.
> 
> You would build an image using packages and at the end of the build
> process you can lock the package versions. Regardless of how the image
> is built you can consider it a golden image. This image is then deployed
> to your hosts and not changed.
> 
> We would still be using diskimage-builder the main difference to the
> whole process is we would end up with a image that has more packages
> installed and no virtual envs.
> 

I'm not saying building images from packages will encourage
snowflakes. I'm saying installing and updating on systems using packages
encourages snowflakes. Kevin was suggesting that the image workflow
wouldn't fit for everything, and thus was opening up the "just install
a few packages on a system" can of worms. I'm saying to Kevin, don't
do that, just make your image work-flow tighter, and suggesting it is
worth it to do that to avoid having snowflakes.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer] Dynamic Meters in Ceilometer

2014-01-08 Thread Julien Danjou
On Wed, Jan 08 2014, Ildikó Váncsa wrote:

(Your answers are very hard to read inline in my text MUA, it'd really
help if you could quote properly with > the emails you answer to).

> ildikov: Sorry, my explanation was not clear. I meant there the
> configuration of data collection for projects, what was mentioned by Tim
> Bell in a previous email. This would mean that the project administrator is
> able to create a data collection configuration for his/her own project,
> which will not affect the other project's configuration. The tenant would be
> able to specify meters (enabled/disable based on which ones are needed) for
> the given project also with project specific time intervals, etc.

I still don't see the point. A user can send any sample it wants at any
interval using the REST API. There's no sense in enabling or disabling
meters.
Please describe a real use case; for now I still can't understand
what you want to do.
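(For context, pushing a sample through the v2 REST API looks roughly like the
sketch below; the port and the counter_* field names are recalled from the v2
sample schema and should be double-checked, and the token and resource id are
placeholders.)

    import json
    import requests

    token = 'KEYSTONE_TOKEN'  # placeholder: a real token comes from keystone
    sample = [{
        'counter_name': 'my.custom.meter',
        'counter_type': 'gauge',
        'counter_unit': 'instance',
        'counter_volume': 1,
        'resource_id': 'bd9431c1-8d69-4ad3-803a-8d4a6b89fd36',  # placeholder
    }]
    requests.post(
        'http://ceilometer-api:8777/v2/meters/my.custom.meter',
        headers={'X-Auth-Token': token, 'Content-Type': 'application/json'},
        data=json.dumps(sample))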

-- 
Julien Danjou
/* Free Software hacker * independent consultant
   http://julien.danjou.info */


signature.asc
Description: PGP signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Solum] Devstack gate is failing

2014-01-08 Thread Georgy Okrokvertskhov
Hi,

I do understand why there is pushback on this patch. This patch is for an
infrastructure project which works for multiple projects. Infra maintainers
should not have to know the specifics of each project in detail. If this patch
is a temporary solution, then who will be responsible for removing it?

If we need to start this gate, I propose that we revert all patches which led
to this inconsistent state and apply a workaround in the Solum repository,
which is under the Solum team's full control and review. We need to open a bug
in the Solum project to track this.

Thanks
Georgy


On Wed, Jan 8, 2014 at 7:09 AM, Noorul Islam K M  wrote:

> Anne Gentle  writes:
>
> > On Wed, Jan 8, 2014 at 8:26 AM, Noorul Islam Kamal Malmiyoda <
> > noo...@noorul.com> wrote:
> >
> >>
> >> On Jan 8, 2014 6:11 PM, "Sean Dague"  wrote:
> >> >
> >> > On 01/07/2014 11:27 PM, Noorul Islam Kamal Malmiyoda wrote:
> >> > > On Wed, Jan 8, 2014 at 9:43 AM, Georgy Okrokvertskhov
> >> > >  wrote:
> >> > >> Should we rather revert patch to make gate working?
> >> > >>
> >> > >
> >> > > I think it is always good to have test packages reside in
> >> > > test-requirements.txt. So -1 on reverting that patch.
> >> > >
> >> > > Here [1] is a temporary solution.
> >> > >
> >> > > Regards,
> >> > > Noorul
> >> > >
> >> > > [1] https://review.openstack.org/65414
> >> >
> >> > If Solum is trying to be on the road to being an OpenStack project,
> why
> >> > would it go out of its way to introduce an incompatibility in the way
> >> > all the actual OpenStack packages work in the gate?
> >> >
> >> > Seems very silly to me, because you'll have to add oslo.sphinx back
> into
> >> > test-requirements.txt the second you want to be considered for
> >> incubation.
> >> >
> >>
> >> I am not sure why it seems silly to you. We are not anyhow removing
> >> oslo.sphinx from the repository. We are just removing it before
> installing
> >> the packages from test-requirements.txt
> >>
> > in the devstack gate. How does that affect incubation? Am I missing
> >> something?
> >>
> >
> > Docs are a requirement, and contributor docs are required for applying
> for
> > incubation. [1] Typically these are built through Sphinx and consistency
> is
> > gained through oslo.sphinx, also eventually we can offer consistent
> > extensions. So a perception that you're skipping docs would be a poor
> > reflection on your incubation application. I don't think that's what's
> > happening here, but I want to be sure you understand the consistency and
> > doc needs.
> >
> > See also
> >
> http://lists.openstack.org/pipermail/openstack-dev/2014-January/023582.htmlfor
> > similar issues, we're trying to figure out the best solution. Stay
> > tuned.
> >
>
> I have seen that, also posted solum issue [1] there yesterday. I started
> this thread to have consensus on making solum devstack gate non-voting
> until the issue gets fixed. Also proposed a temporary solution with
> which we can solve the issue for the time being. Since the gate is
> failing for all the patches, it is affecting every patch.
>
> Regards,
> Noorul
>
> [1]
> http://lists.openstack.org/pipermail/openstack-dev/2014-January/023618.html
> [2] https://review.openstack.org/65414
>
> >
> >
> > 1.
> >
> https://github.com/openstack/governance/blob/master/reference/incubation-integration-requirements
> >
> >> Regards,
> >> Noorul
> >>
> >> > -Sean
> >> >
> >> > --
> >> > Sean Dague
> >> > Samsung Research America
> >> > s...@dague.net / sean.da...@samsung.com
> >> > http://dague.net
> >> >
> >> >
> >> > ___
> >> > OpenStack-dev mailing list
> >> > OpenStack-dev@lists.openstack.org
> >> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >> >
> >>
> >>
> >> ___
> >> OpenStack-dev mailing list
> >> OpenStack-dev@lists.openstack.org
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>
> >>
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Georgy Okrokvertskhov
Technical Program Manager,
Cloud and Infrastructure Services,
Mirantis
http://www.mirantis.com
Tel. +1 650 963 9828
Mob. +1 650 996 3284
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer] Dynamic Meters in Ceilometer

2014-01-08 Thread Julien Danjou
On Wed, Jan 08 2014, Kodam, Vijayakumar (EXT-Tata Consultancy Ser - FI/Espoo) 
wrote:

> According to the latest update:
> User calls the API(1) to disable a meter along with a meter id. 

What's a user? An end-user or an operator?
I don't think we want to allow a user to disable a meter. I don't see
the use case, and if you use a meter for billing, it's just a terrible
idea.

If you are talking about operators, it's just a matter of managing a
configuration file, which is no different from the rest of OpenStack. I
think Doug and Chmouel already answered that, and I'm on the same page.

-- 
Julien Danjou
;; Free Software hacker ; independent consultant
;; http://julien.danjou.info


signature.asc
Description: PGP signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer] Dynamic Meters in Ceilometer

2014-01-08 Thread Ildikó Váncsa
Hi Doug,

See my answers inline.

Best Regards,
Ildiko

From: Doug Hellmann [mailto:doug.hellm...@dreamhost.com]
Sent: Wednesday, January 08, 2014 4:10 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Ceilometer] Dynamic Meters in Ceilometer



On Wed, Jan 8, 2014 at 3:16 AM, Ildikó Váncsa 
mailto:ildiko.van...@ericsson.com>> wrote:
Hi,

I've started to work on the idea of supporting a kind of tenant/project-based 
configuration for Ceilometer. Unfortunately I haven't yet reached the point of 
having a blueprint that could be registered. I do not have deep knowledge of 
the collector and compute agent services, but this feature would require some 
deep changes for sure. Currently there are pipelines for data collection and 
transformation, where the counters can be specified (i.e., which data should 
be collected), along with the time interval for data collection and so on. 
These pipelines can currently only be configured globally, in the pipeline.yaml 
file, which is stored right next to the Ceilometer configuration files.

Yes, the data collection was designed to be configured and controlled by the 
deployer, not the tenant. What benefits do we gain by giving that control to 
the tenant?

ildikov: Sorry, my explanation was not clear. I meant the configuration of 
data collection for projects, which was mentioned by Tim Bell in a previous 
email. This would mean that the project administrator is able to create a data 
collection configuration for his/her own project, which will not affect any 
other project's configuration. The tenant would be able to specify meters 
(enabling/disabling whichever ones are needed) for the given project, also 
with project-specific time intervals, etc.


In my view, we could keep the dynamic meter configuration bp with considering 
to extend it to dynamic configuration of Ceilometer, not just the meters and we 
could have a separate bp for the project based configuration of meters.

Ceilometer uses oslo.config, just like all of the rest of OpenStack. How are 
the needs for dynamic configuration updates in ceilometer different from the 
other services?

ildikov: There are some parameters in the Ceilometer configuration file, like 
log options and notification types, which it would be good to be able to 
configure dynamically. I just wanted to reflect that need. As I see it, there 
are two options here. The first one is to identify the group of dynamically 
modifiable parameters and move them to the API level. The other option could 
be to make some modifications in oslo.config too, so that other services could 
also benefit from dynamic configuration. For example, the log settings could 
be a good candidate: changing log levels without a service restart while 
debugging the system could be a useful feature for all OpenStack services.

Doug



If it is OK with you, I will register the bp for these per-project tenant 
settings with some details, once I'm finished with the initial design of how 
this feature could work.

Best Regards,
Ildiko

-Original Message-
From: Neal, Phil [mailto:phil.n...@hp.com]
Sent: Tuesday, January 07, 2014 11:50 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Ceilometer] Dynamic Meters in Ceilometer

For multi-node deployments, implementing something like inotify would allow 
administrators to push configuration changes out to multiple targets using 
puppet/chef/etc. and have the daemons pick it up without restart. Thumbs up to 
that.

As Tim Bell suggested, API-based enabling/disabling would allow users to update 
meters via script, but then there's the question of how to work out the global 
vs. per-project tenant settings...right now we collect specified meters for all 
available projects, and the API returns whatever data is stored minus filtered 
values. Maybe I'm missing something in the suggestion, but turning off 
collection for an individual project seems like it'd require some deep changes.

Vijay, I'll repeat dhellmann's request: do you have more detail in another doc? 
:-)

-   Phil

> -Original Message-
> From: Kodam, Vijayakumar (EXT-Tata Consultancy Ser - FI/Espoo)
> [mailto:vijayakumar.kodam@nsn.com]
> Sent: Tuesday, January 07, 2014 2:49 AM
> To: OpenStack Development Mailing List (not for usage questions)
> Cc: chmo...@enovance.com
> Subject: Re: [openstack-dev] [Ceilometer] Dynamic Meters in Ceilometer
> From: ext Chmouel Boudjnah 
> [mailto:chmo...@enovance.com]
> Sent: Monday, January 06, 2014 2:19 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [Ceilometer] Dynamic Meters in Ceilometer
>
>
>
>
>
> On Mon, Jan 6, 2014 at 12:52 PM, Kodam, Vijayakumar (EXT-Tata
> Consultancy Ser - FI/Espoo) 
> mailto:v

Re: [openstack-dev] [TripleO] Installing from packages in tripleo-image-elements

2014-01-08 Thread Jay Dobies
There were so many places in this thread that I wanted to jump in on as 
I caught up that it makes sense to just summarize things in one place 
instead of a half dozen quoted replies.


I agree with the sentiments about flexibility. Regardless of my personal 
preference on source v. packages, it's been my experience that the 
general mindset of production deployment is that new ideas move slowly. 
Admins are set in their ways and policies are in place on how things are 
consumed.


Maybe the newness of all things cloud-related and image-based management 
for scale is a good time to shift the mentality out of packages (again, 
I'm not suggesting whether or not it should be shifted). But I worry 
about adoption if we don't provide an option for people to use blessed 
distro packages, either because of company policy or years of habit and 
bias. If done correctly, there's no difference between a package and a 
particular tag in a source repository, but there is a psychological 
component there that I think we need to account for, assuming someone is 
willing to bite off the implementation costs (which it sounds like there 
is).



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer] Dynamic Meters in Ceilometer

2014-01-08 Thread Kodam, Vijayakumar (EXT-Tata Consultancy Ser - FI/Espoo)


From: ext Tim Bell [mailto:tim.b...@cern.ch]
Sent: Tuesday, January 07, 2014 8:22 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Ceilometer] Dynamic Meters in Ceilometer


Thinking about using inotify/configuration file changes to implement dynamic meters, 
this would be limited to administrators of ceilometer itself (i.e. with write 
access to the file) rather than the project administrators (as defined by 
keystone roles). Thus, as a project administrator who is not the cloud admin, I 
could not enable/disable the meter for a project only.

It would mean that scripting meter on/off would not be possible if there was 
not an API to perform this.

Not sure if these requirements are significant and the associated impact on 
implementation complexity, but they may be relevant in scoping out the 
blueprint and subsequent changes

Tim
Tim,

I agree with your suggestion. I have updated the design by adding APIs. Whenever 
an API request is received by the ceilometer-api, it shall modify the config 
file and inform the ceilometer agents.
You can find detailed information at
https://etherpad.openstack.org/p/dynamic-meters

Regards,
VijayKumar

From: Doug Hellmann [mailto:doug.hellm...@dreamhost.com]
Sent: 06 January 2014 23:35
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Ceilometer] Dynamic Meters in Ceilometer



On Tue, Dec 31, 2013 at 4:53 AM, Kodam, Vijayakumar (EXT-Tata Consultancy Ser - 
FI/Espoo) mailto:vijayakumar.kodam@nsn.com>> 
wrote:
Hi,

Currently there is no way to enable or disable meters without restarting 
ceilometer.

There are cases where operators do not want to run all the meters continuously.
In these cases, there should be a way to disable or enable them dynamically.

We are working on this feature right now. I have also created a blueprint for 
the same.
https://blueprints.launchpad.net/ceilometer/+spec/dynamic-meters

We would love to hear your views on this feature.

There isn't much detail in the blueprint. Do you have a more comprehensive 
document you can link to that talks about how you intend for it to work?

Doug



Regards,
VijayKumar Kodam




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer] Dynamic Meters in Ceilometer

2014-01-08 Thread Kodam, Vijayakumar (EXT-Tata Consultancy Ser - FI/Espoo)
Hi,

>-Original Message-
>From: ext Neal, Phil [mailto:phil.n...@hp.com] 
>Sent: Wednesday, January 08, 2014 12:50 AM
>To: OpenStack Development Mailing List (not for usage questions)
>Subject: Re: [openstack-dev] [Ceilometer] Dynamic Meters in Ceilometer
>
>
>For multi-node deployments, implementing something like inotify would allow 
>administrators to push configuration changes out to multiple targets using 
>puppet/chef/etc. and have the daemons pick it up >without restart. Thumbs up 
>to that.
>

Thanks!

>As Tim Bell suggested, API-based enabling/disabling would allow users to 
>update meters via script, but then there's the question of how to work out the 
>global vs. per-project tenant settings...right now >we collect specified 
>meters for all available projects, and the API returns whatever data is stored 
>minus filtered values. Maybe I'm missing something in the suggestion, but 
>turning off collection for >an individual project seems like it'd require some 
>deep changes.
>
>Vijay, I'll repeat dhellmann's request: do you have more detail in another 
>doc? :-)
>
>-  Phil

I concur with the opinion to use APIs for dynamically enabling/disabling 
meters. I have updated the design accordingly.

According to the latest update:
The user calls the API (1) to disable a meter, passing the meter id. 
ceilometer-api handles the API request, adds the meter id to the disabled_meters 
config file and informs the ceilometer agents. 
The ceilometer agents then read the "disabled_meters" config file and disable 
the meter.

More detailed information about this blueprint can be found at
https://etherpad.openstack.org/p/dynamic-meters

There will be no inotify() or inotifywait() calls to monitor modifications of 
the configuration file.
Whenever the APIs are called, ceilometer-api, upon receiving the request, will 
modify the config file and inform the ceilometer agents.
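A rough sketch of that flow (the file path and the notify_agents() helper are
placeholders from the blueprint discussion, not existing ceilometer code):

    DISABLED_METERS_FILE = '/etc/ceilometer/disabled_meters'  # placeholder path

    def notify_agents():
        # Placeholder: in the proposal this would be an RPC fanout telling
        # running agents to re-read the disabled_meters file.
        pass

    # ceilometer-api side: handle a "disable meter" request.
    def disable_meter(meter_name):
        with open(DISABLED_METERS_FILE, 'a') as f:
            f.write(meter_name + '\n')
        notify_agents()

    # Agent side: load the list and skip those meters when polling.
    def load_disabled_meters():
        try:
            with open(DISABLED_METERS_FILE) as f:
                return set(line.strip() for line in f if line.strip())
        except IOError:
            return set()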

There are no per-project settings currently considered for this blueprint. 
IMHO, per-project settings should be implemented for all the 
meters/resources/APIs in ceilometer and should be handled by a different 
blueprint.

Regards,
VijayKumar

>> -Original Message-
>> From: Kodam, Vijayakumar (EXT-Tata Consultancy Ser - FI/Espoo)
>> [mailto:vijayakumar.kodam@nsn.com]
>> Sent: Tuesday, January 07, 2014 2:49 AM
>> To: OpenStack Development Mailing List (not for usage questions)
>> Cc: chmo...@enovance.com
>> Subject: Re: [openstack-dev] [Ceilometer] Dynamic Meters in Ceilometer
>> From: ext Chmouel Boudjnah [mailto:chmo...@enovance.com]
>> Sent: Monday, January 06, 2014 2:19 PM
>> To: OpenStack Development Mailing List (not for usage questions)
>> Subject: Re: [openstack-dev] [Ceilometer] Dynamic Meters in Ceilometer
>> 
>> 
>> 
>> 
>> 
>> On Mon, Jan 6, 2014 at 12:52 PM, Kodam, Vijayakumar (EXT-Tata Consultancy
>> Ser - FI/Espoo)  wrote:
>> 
>> In this case, simply changing the meter properties in a configuration file
>> should be enough. There should be an inotify signal which shall notify
>> ceilometer of the changes in the config file. Then ceilometer should
>> automatically update the meters without restarting.
>> 
>> 
>> 
>> Why it cannot be something configured by the admin with inotifywait(1)
>> command?
>> 
>> 
>> 
>> Or this can be an API call for enabling/disabling meters which could be more
>> useful without having to change the config files.
>> 
>> 
>> 
>> Chmouel.
>> 
>> 
>> 
>> I haven't tried inotifywait() in this implementation. I need to check if it 
>> will be
>> useful for the current implementation.
>> 
>> Yes. API call could be more useful than changing the config files manually.
>> 
>> 
>> 
>> Thanks,
>> 
>> VijayKumar

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] Common SSH

2014-01-08 Thread Eric Windisch
>
>
>
>> About spur: spur is looks ok, but it a bit complicated inside (it uses
>> separate threads for non-blocking stdin/stderr reading [1]) and I don't
>> know how it would work with eventlet.
>>
>
> That does sound like it might cause issues. What would we need to do to
> test it?
>

Looking at the code, I don't expect it to be an issue. The monkey-patching
will cause eventlet.spawn to be called for threading.Thread. The code looks
eventlet-friendly enough on the surface. Error handling around file
read/write could be affected, but it also looks fine.
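For anyone who wants to check this quickly, a small standalone sketch (not
spur's code) showing that monkey-patching makes threading.Thread run as green
threads:

    import eventlet
    eventlet.monkey_patch()  # patches threading, socket, time, etc.

    import threading

    def worker(name):
        # Under monkey-patching this "thread" is an eventlet greenthread,
        # so it cooperatively yields on sleep and I/O.
        eventlet.sleep(0.1)
        print('%s finished in %r' % (name, threading.current_thread()))

    threads = [threading.Thread(target=worker, args=('t%d' % i,))
               for i in range(3)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()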

-- 
Regards,
Eric Windisch
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Implement NAPT in neutron (https://blueprints.launchpad.net/neutron/+spec/neutron-napt-api)

2014-01-08 Thread Dong Liu

On 8 January 2014, at 20:24, Nir Yechiel  wrote:

> Hi Dong,
> 
> Can you please clarify this blueprint? Currently in Neutron, if an instance 
> has a floating IP, then that will be used for both inbound and outbound 
> traffic. If an instance does not have a floating IP, it can make connections 
> out using the gateway IP (SNAT using PAT/NAT overload). Is the idea in this 
> blueprint to implement PAT in both directions using only the gateway IP? 
> Also, did you see this one [1]? 
> 
> Thanks,
> Nir
> 
> [1] https://blueprints.launchpad.net/neutron/+spec/router-port-forwarding


I think my idea is a duplicate of this one:
https://blueprints.launchpad.net/neutron/+spec/access-vms-via-port-mapping

Sorry for missing this.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack][Nova][cold migration] Why we need confirm resize after cold migration

2014-01-08 Thread Jay Lau
2014/1/8 John Garbutt 

> On 8 January 2014 10:02, David Xie  wrote:
> > In nova/compute/api.py#2289, function resize, there's a parameter named
> > flavor_id, if it is None, it is considered as cold migration. Thus, nova
> > should skip resize verifying. However, it doesn't.
> >
> > Like Jay said, we should skip this step during cold migration, does it
> make
> > sense?
>
> Not sure.
>
> > On Wed, Jan 8, 2014 at 5:52 PM, Jay Lau  wrote:
> >>
> >> Greetings,
> >>
> >> I have a question related to cold migration.
> >>
> >> Now in OpenStack nova, we support live migration, cold migration and
> >> resize.
> >>
> >> For live migration, we do not need to confirm after live migration
> >> finished.
> >>
> >> For resize, we need to confirm, as we want to give end user an
> opportunity
> >> to rollback.
> >>
> >> The problem is cold migration, because cold migration and resize share
> >> same code path, so once I submit a cold migration request and after the
> cold
> >> migration finished, the VM will goes to verify_resize state, and I need
> to
> >> confirm resize. I felt a bit confused by this, why do I need to verify
> >> resize for a cold migration operation? Why not reset the VM to original
> >> state directly after cold migration?
>
> I think the idea was to allow users/admins to check everything went OK,
> and only delete the original VM when they have confirmed the move went
> OK.
>
> I thought there was an auto_confirm setting. Maybe you want
> auto_confirm cold migrate, but not auto_confirm resize?
>






[Jay] John, yes, that can also reach my goal. Now we only have
resize_confirm_window to handle auto-confirm, without considering whether it is
a resize or a cold migration:

    # Automatically confirm resizes after N seconds. Set to 0 to
    # disable. (integer value)
    #resize_confirm_window=0

Perhaps we can add another parameter, say cold_migrate_confirm_window, to
handle confirmation for cold migrations.
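(For illustration, the proposed option could simply mirror the existing one;
the name below is hypothetical:)

    # Automatically confirm cold migrations after N seconds. Set to 0 to
    # disable. (integer value)
    #cold_migrate_confirm_window=0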

>
> >> Also, I think that probably we need split compute.api.resize() to two
> >> apis: one is for resize and the other is for cold migrations.
> >>
> >> 1) The VM state can be either ACTIVE and STOPPED for a resize operation
> >> 2) The VM state must be STOPPED for a cold migrate operation.
>
> We just stop the VM, then perform the migration.
> I don't think we need to require that it's stopped first.
> Am I missing something?
>
[Jay] Yes, but I'm just curious why someone would want to cold migrate an
ACTIVE VM. They can use live migration instead, which also makes sure the VM
migrates seamlessly.

>
> Thanks,
> John
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] Common SSH

2014-01-08 Thread Doug Hellmann
On Wed, Jan 8, 2014 at 9:34 AM, Sergey Skripnick wrote:

>
>
>
>>> I'd like to explore whether the paramiko team will accept this code (or
>>> something like it). This seems like a perfect opportunity for us to
>>> contribute
>>> upstream.
>>>
>>
>> +1
>>
>> The patch is not big and the code seems simple and reasonable enough
>> to live within paramiko.
>>
>> Cheers,
>> FF
>>
>>
>>
> I sent a pull request [0] but there are two things:
>
>  nobody knows when (and if) it will be merged
>  it is still a bit low-level, unlike a patch in oslo
>

Let's give the paramiko devs a little time to review it.


>
> About spur: spur is looks ok, but it a bit complicated inside (it uses
> separate threads for non-blocking stdin/stderr reading [1]) and I don't
> know how it would work with eventlet.
>

That does sound like it might cause issues. What would we need to do to
test it?

Doug



>
> [0] https://github.com/paramiko/paramiko/pull/245
> [1] https://github.com/mwilliamson/spur.py/blob/master/spur/io.py#L22
>
>
> --
> Regards,
> Sergey Skripnick
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack][Nova][cold migration] Why we need confirm resize after cold migration

2014-01-08 Thread Jay Lau
Thanks Russell. OK, I will file a bug for the first issue.

For the second question, I want to share some of my comments here. I think that
we should disable cold migration for an ACTIVE VM, as cold migrating will
first destroy the VM and then re-create it when using KVM; I do not see a
use case where someone would want to do that.

Furthermore, this might confuse end users; it's really strange that both
cold migration and live migration can migrate an ACTIVE VM. Cold migration
should only target STOPPED VM instances.

What do you think?

Thanks,

Jay



2014/1/8 Russell Bryant 

> On 01/08/2014 04:52 AM, Jay Lau wrote:
> > Greetings,
> >
> > I have a question related to cold migration.
> >
> > Now in OpenStack nova, we support live migration, cold migration and
> resize.
> >
> > For live migration, we do not need to confirm after live migration
> finished.
> >
> > For resize, we need to confirm, as we want to give end user an
> > opportunity to rollback.
> >
> > The problem is cold migration, because cold migration and resize share
> > same code path, so once I submit a cold migration request and after the
> > cold migration finished, the VM will goes to verify_resize state, and I
> > need to confirm resize. I felt a bit confused by this, why do I need to
> > verify resize for a cold migration operation? Why not reset the VM to
> > original state directly after cold migration?
>
> The confirm step definitely makes more sense for the resize case.  I'm
> not sure if there was a strong reason why it was also needed for cold
> migration.
>
> If nobody comes up with a good reason to keep it, I'm fine with removing
> it.  It can't be changed in the v2 API, though.  This would be a v3 only
> change.
>
> > Also, I think that probably we need split compute.api.resize() to two
> > apis: one is for resize and the other is for cold migrations.
> >
> > 1) The VM state can be either ACTIVE and STOPPED for a resize operation
> > 2) The VM state must be STOPPED for a cold migrate operation.
>
> I'm not sure why we would require different states here, though.  ACTIVE
> and STOPPED are allowed now.
>
> --
> Russell Bryant
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Solum] Devstack gate is failing

2014-01-08 Thread Noorul Islam K M
Anne Gentle  writes:

> On Wed, Jan 8, 2014 at 8:26 AM, Noorul Islam Kamal Malmiyoda <
> noo...@noorul.com> wrote:
>
>>
>> On Jan 8, 2014 6:11 PM, "Sean Dague"  wrote:
>> >
>> > On 01/07/2014 11:27 PM, Noorul Islam Kamal Malmiyoda wrote:
>> > > On Wed, Jan 8, 2014 at 9:43 AM, Georgy Okrokvertskhov
>> > >  wrote:
>> > >> Should we rather revert patch to make gate working?
>> > >>
>> > >
>> > > I think it is always good to have test packages reside in
>> > > test-requirements.txt. So -1 on reverting that patch.
>> > >
>> > > Here [1] is a temporary solution.
>> > >
>> > > Regards,
>> > > Noorul
>> > >
>> > > [1] https://review.openstack.org/65414
>> >
>> > If Solum is trying to be on the road to being an OpenStack project, why
>> > would it go out of its way to introduce an incompatibility in the way
>> > all the actual OpenStack packages work in the gate?
>> >
>> > Seems very silly to me, because you'll have to add oslo.sphinx back into
>> > test-requirements.txt the second you want to be considered for
>> incubation.
>> >
>>
>> I am not sure why it seems silly to you. We are not anyhow removing
>> oslo.sphinx from the repository. We are just removing it before installing
>> the packages from test-requirements.txt
>>
> in the devstack gate. How does that affect incubation? Am I missing
>> something?
>>
>
> Docs are a requirement, and contributor docs are required for applying for
> incubation. [1] Typically these are built through Sphinx and consistency is
> gained through oslo.sphinx, also eventually we can offer consistent
> extensions. So a perception that you're skipping docs would be a poor
> reflection on your incubation application. I don't think that's what's
> happening here, but I want to be sure you understand the consistency and
> doc needs.
>
> See also
> http://lists.openstack.org/pipermail/openstack-dev/2014-January/023582.htmlfor
> similar issues, we're trying to figure out the best solution. Stay
> tuned.
>

I have seen that, and I also posted the Solum issue [1] there yesterday. I
started this thread to reach consensus on making the Solum devstack gate
non-voting until the issue gets fixed. I also proposed a temporary solution [2]
with which we can work around the issue for the time being. Since the gate is
failing for every patch, it is blocking all of them.

Regards,
Noorul

[1] http://lists.openstack.org/pipermail/openstack-dev/2014-January/023618.html
[2] https://review.openstack.org/65414

>
>
> 1.
> https://github.com/openstack/governance/blob/master/reference/incubation-integration-requirements
>
>> Regards,
>> Noorul
>>
>> > -Sean
>> >
>> > --
>> > Sean Dague
>> > Samsung Research America
>> > s...@dague.net / sean.da...@samsung.com
>> > http://dague.net
>> >
>> >
>> > ___
>> > OpenStack-dev mailing list
>> > OpenStack-dev@lists.openstack.org
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> >
>>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer] Dynamic Meters in Ceilometer

2014-01-08 Thread Doug Hellmann
On Wed, Jan 8, 2014 at 3:16 AM, Ildikó Váncsa wrote:

> Hi,
>
> I've started to work on the idea of supporting a kind of tenant/project
> based configuration for Ceilometer. Unfortunately I haven't reached the
> point of having a blueprint that could be registered until now. I do not
> have a deep knowledge about the collector and compute agent services, but
> this feature would require some deep changes for sure. Currently there are
> pipelines for data collection and transformation, where the counters can be
> specified, about which data should be collected and also the time interval
> for data collection and so on. These pipelines can be configured now
> globally in the pipeline.yaml file, which is stored right next to the
> Ceilometer configuration files.
>

Yes, the data collection was designed to be configured and controlled by
the deployer, not the tenant. What benefits do we gain by giving that
control to the tenant?


>
> In my view, we could keep the dynamic meter configuration bp with
> considering to extend it to dynamic configuration of Ceilometer, not just
> the meters and we could have a separate bp for the project based
> configuration of meters.
>

Ceilometer uses oslo.config, just like all of the rest of OpenStack. How
are the needs for dynamic configuration updates in ceilometer different
from the other services?

Doug



>
> If it is ok for you, I will register the bp for this per-project tenant
> settings with some details, when I'm finished with the initial design of
> how this feature could work.
>
> Best Regards,
> Ildiko
>
> -Original Message-
> From: Neal, Phil [mailto:phil.n...@hp.com]
> Sent: Tuesday, January 07, 2014 11:50 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [Ceilometer] Dynamic Meters in Ceilometer
>
> For multi-node deployments, implementing something like inotify would
> allow administrators to push configuration changes out to multiple targets
> using puppet/chef/etc. and have the daemons pick it up without restart.
> Thumbs up to that.
>
> As Tim Bell suggested, API-based enabling/disabling would allow users to
> update meters via script, but then there's the question of how to work out
> the global vs. per-project tenant settings...right now we collect specified
> meters for all available projects, and the API returns whatever data is
> stored minus filtered values. Maybe I'm missing something in the
> suggestion, but turning off collection for an individual project seems like
> it'd require some deep changes.
>
> Vijay, I'll repeat dhellmann's request: do you have more detail in another
> doc? :-)
>
> -   Phil
>
> > -Original Message-
> > From: Kodam, Vijayakumar (EXT-Tata Consultancy Ser - FI/Espoo)
> > [mailto:vijayakumar.kodam@nsn.com]
> > Sent: Tuesday, January 07, 2014 2:49 AM
> > To: OpenStack Development Mailing List (not for usage questions)
> > Cc: chmo...@enovance.com
> > Subject: Re: [openstack-dev] [Ceilometer] Dynamic Meters in Ceilometer
> > From: ext Chmouel Boudjnah [mailto:chmo...@enovance.com]
> > Sent: Monday, January 06, 2014 2:19 PM
> > To: OpenStack Development Mailing List (not for usage questions)
> > Subject: Re: [openstack-dev] [Ceilometer] Dynamic Meters in Ceilometer
> >
> >
> >
> >
> >
> > On Mon, Jan 6, 2014 at 12:52 PM, Kodam, Vijayakumar (EXT-Tata
> > Consultancy Ser - FI/Espoo)  wrote:
> >
> > In this case, simply changing the meter properties in a configuration
> > file should be enough. There should be an inotify signal which shall
> > notify ceilometer of the changes in the config file. Then ceilometer
> > should automatically update the meters without restarting.
> >
> >
> >
> > Why it cannot be something configured by the admin with inotifywait(1)
> > command?
> >
> >
> >
> > Or this can be an API call for enabling/disabling meters which could
> > be more useful without having to change the config files.
> >
> >
> >
> > Chmouel.
> >
> >
> >
> > I haven't tried inotifywait() in this implementation. I need to check
> > if it will be useful for the current implementation.
> >
> > Yes. API call could be more useful than changing the config files
> manually.
> >
> >
> >
> > Thanks,
> >
> > VijayKumar
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Installing from packages in tripleo-image-elements

2014-01-08 Thread James Slagle
On Tue, Jan 7, 2014 at 11:20 PM, Robert Collins
 wrote:
> On 8 January 2014 12:18, James Slagle  wrote:
>> Sure, the crux of the problem was likely that versions in the distro
>> were too old and they needed to be updated.  But unless we take on
>> building the whole OS from source/git/whatever every time, we're
>> always going to have that issue.  So, an additional benefit of
>> packages is that you can install a known good version of an OpenStack
>> component that is known to work with the versions of dependent
>> software you already have installed.
>
> The problem is that OpenStack is building against newer stuff than is
> in distros, so folk building on a packaging toolchain are going to
> often be in catchup mode - I think we need to anticipate package based
> environments running against releases rather than CD.

I just don't see anyone not building on a packaging toolchain, given
that we're all running the distro of our choice and pip/virtualenv/etc
are installed from distro packages.  Trying to isolate the building of
components with pip-installed virtualenvs was still a problem.  Short
of uninstalling the build tools packages from the cloud image and then
wget'ing the pip tarball, I don't think there would have been a good
way around this particular problem.  That said, that approach may
certainly make some sense for a CD scenario.

Agreed that packages against releases makes sense.

-- 
-- James Slagle
--

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] new (docs) requirement for third party CI

2014-01-08 Thread Matt Riedemann
I'd like to propose that we add another item to the list here [1] that 
is basically related to what happens when the 3rd party CI job votes a 
-1 on your patch.  This would include:


1. Documentation on how to analyze the results and a good overview of 
what the job does (like the docs we have for check/gate testing now).

2. How to recheck the specific job if needed, i.e. 'recheck migrations'.
3. Who to contact if you can't figure out what's going on with the job.

Ideally this information would be in the comments when the job scores a 
-1 on your patch, or at least it would leave a comment with a link to a 
wiki for that job like we have with Jenkins today.


I'm all for more test coverage but we need some solid documentation 
around that when it's not owned by the community so we know what to do 
with the results if they seem like false negatives.


If no one is against this or has something to add, I'll update the wiki.

[1] 
https://wiki.openstack.org/wiki/HypervisorSupportMatrix/DeprecationPlan#Specific_Requirements


--

Thanks,

Matt Riedemann


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack][Nova][cold migration] Why we need confirm resize after cold migration

2014-01-08 Thread David Xie
On Wednesday, 8 January, 2014 at 22:53, John Garbutt wrote:
> On 8 January 2014 10:02, David Xie  (mailto:david.script...@gmail.com)> wrote:
> > In nova/compute/api.py#2289, function resize, there's a parameter named
> > flavor_id, if it is None, it is considered as cold migration. Thus, nova
> > should skip resize verifying. However, it doesn't.
> >  
> > Like Jay said, we should skip this step during cold migration, does it make
> > sense?
> >  
>  
>  
> Not sure.
>  
> > On Wed, Jan 8, 2014 at 5:52 PM, Jay Lau  > (mailto:jay.lau@gmail.com)> wrote:
> > >  
> > > Greetings,
> > >  
> > > I have a question related to cold migration.
> > >  
> > > Now in OpenStack nova, we support live migration, cold migration and
> > > resize.
> > >  
> > > For live migration, we do not need to confirm after live migration
> > > finished.
> > >  
> > > For resize, we need to confirm, as we want to give end user an opportunity
> > > to rollback.
> > >  
> > > The problem is cold migration, because cold migration and resize share
> > > same code path, so once I submit a cold migration request and after the 
> > > cold
> > > migration finished, the VM will goes to verify_resize state, and I need to
> > > confirm resize. I felt a bit confused by this, why do I need to verify
> > > resize for a cold migration operation? Why not reset the VM to original
> > > state directly after cold migration?
> > >  
> >  
> >  
>  
>  
> I think the idea was to allow users/admins to check everything went OK,
> and only delete the original VM when they have confirmed the move went
> OK.
>  
> I thought there was an auto_confirm setting. Maybe you want
> auto_confirm cold migrate, but not auto_confirm resize?
>  
[David] If a user runs the cold migration command from the CLI, confirmation 
does make sense. But what if this action is called by a service or another 
process? There's no chance for the user to confirm it, and maybe it's better to 
auto-confirm it.

BTW, is there an auto_confirm setting for cold migration? If so, that's all I 
need.
>  
> > > Also, I think that probably we need split compute.api.resize() to two
> > > apis: one is for resize and the other is for cold migrations.
> > >  
> > > 1) The VM state can be either ACTIVE and STOPPED for a resize operation
> > > 2) The VM state must be STOPPED for a cold migrate operation.
> > >  
> >  
>  
>  
> We just stop the VM, then perform the migration.
> I don't think we need to require that it's stopped first.
> Am I missing something?
>  
> Thanks,
> John
>  
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org (mailto:OpenStack-dev@lists.openstack.org)
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>  
>  


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Bogus -1 scores from turbo hipster

2014-01-08 Thread Sean Dague

On 01/08/2014 09:48 AM, Matt Riedemann wrote:




Another question.  This patch [1] failed turbo-hipster after it was
approved but I don't know if that's a gating or just voting job, i.e.
should someone do 'reverify migrations' on that patch or just let it sit
and ignore turbo-hipster?

[1] https://review.openstack.org/#/c/59824/


So instead of trying to fix the individual runs (because t-h runs pretty 
fast), can you just fix it in bulk? It seems like the issue of a 
migration taking a long time isn't a race in OpenStack; it's purely 
variability in the underlying system.


And it seems that the failing case is going to be 100% repeatable, and 
infrequent.


So it seems like you could solve the failure side by only reporting a fail 
result after 3 fails in a row: RESULT && RESULT && RESULT


Especially valid if results are coming from different AZs, so any local 
issues should be masked.
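
Something like the following sketch is all it would take on the reporting
side (hypothetical helper names, not actual turbo-hipster code): rerun the
timed check up to three times and only vote -1 when every attempt fails.

    # Sketch of the "only report a fail on 3 fails in a row" idea.
    # run_migration_check is a placeholder for however turbo-hipster
    # actually executes one timed migration run against a dataset.
    def vote_on_change(run_migration_check, attempts=3):
        failures = []
        for _ in range(attempts):
            ok, details = run_migration_check()
            if ok:
                return 'success', details
            failures.append(details)
        # Every attempt failed, so only now do we report a -1.
        return 'failure', failures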


-Sean

--
Sean Dague
Samsung Research America
s...@dague.net / sean.da...@samsung.com
http://dague.net

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack][Nova][Docker] Devstack with docker driver

2014-01-08 Thread Eric Windisch
On Tue, Jan 7, 2014 at 11:13 PM, Swapnil Kulkarni <
swapnilkulkarni2...@gmail.com> wrote:

> Let me know in case I can be of any help getting this resolved.
>

Please try running the failing 'docker run' command manually and without
the '-d' argument. I've been able to reproduce an error myself, but wish
to confirm that this matches the error you're seeing.

Regards,
Eric Windisch
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Solum] Devstack gate is failing

2014-01-08 Thread Anne Gentle
On Wed, Jan 8, 2014 at 8:26 AM, Noorul Islam Kamal Malmiyoda <
noo...@noorul.com> wrote:

>
> On Jan 8, 2014 6:11 PM, "Sean Dague"  wrote:
> >
> > On 01/07/2014 11:27 PM, Noorul Islam Kamal Malmiyoda wrote:
> > > On Wed, Jan 8, 2014 at 9:43 AM, Georgy Okrokvertskhov
> > >  wrote:
> > >> Should we rather revert patch to make gate working?
> > >>
> > >
> > > I think it is always good to have test packages reside in
> > > test-requirements.txt. So -1 on reverting that patch.
> > >
> > > Here [1] is a temporary solution.
> > >
> > > Regards,
> > > Noorul
> > >
> > > [1] https://review.openstack.org/65414
> >
> > If Solum is trying to be on the road to being an OpenStack project, why
> > would it go out of its way to introduce an incompatibility in the way
> > all the actual OpenStack packages work in the gate?
> >
> > Seems very silly to me, because you'll have to add oslo.sphinx back into
> > test-requirements.txt the second you want to be considered for
> incubation.
> >
>
> I am not sure why it seems silly to you. We are not removing oslo.sphinx
> from the repository at all. We are just removing it before installing
> the packages from test-requirements.txt in the devstack gate. How does
> that affect incubation? Am I missing something?
>

Docs are a requirement, and contributor docs are required for applying for
incubation. [1] Typically these are built through Sphinx, and consistency is
gained through oslo.sphinx; eventually we can also offer consistent
extensions. So a perception that you're skipping docs would be a poor
reflection on your incubation application. I don't think that's what's
happening here, but I want to be sure you understand the consistency and
doc needs.

See also
http://lists.openstack.org/pipermail/openstack-dev/2014-January/023582.html
for similar issues; we're trying to figure out the best solution. Stay
tuned.

Thanks,
Anne


1.
https://github.com/openstack/governance/blob/master/reference/incubation-integration-requirements

> Regards,
> Noorul
>
> > -Sean
> >
> > --
> > Sean Dague
> > Samsung Research America
> > s...@dague.net / sean.da...@samsung.com
> > http://dague.net
> >
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Vmware]Bad Performance when creating a new VM

2014-01-08 Thread Gary Kotton


From: Ray Sun <xiaoq...@gmail.com>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev@lists.openstack.org>
Date: Wednesday, January 8, 2014 4:09 PM
To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [Nova][Vmware]Bad Performance when creating a new 
VM

Gary,
Thanks. Currently, is our upload speed in the normal range?

[Gary] 3 hours for a 7G file is far too long. For testing I have a 1G image. This 
takes about 2 minutes to upload to the cache. Please note that I am running on 
a virtual setup, so things take far longer than they would on bare metal.


Best Regards
-- Ray


On Wed, Jan 8, 2014 at 4:31 PM, Gary Kotton <gkot...@vmware.com> wrote:
Hi,
In order for the VM to be booted, the image needs to be on a datastore 
accessible by the host. By default the datastore will not have the image. It 
is copied from glance to the datastore. This is most probably where the problem 
is. This may take a while depending on the connectivity between the openstack 
setup and your backend datastore. Once you have done this you will see a 
directory on the datastore called vmware_base. This will contain that image. 
From then on it should be smooth sailing.
Please note that we are working on a number of things to improve this:

 1.  Image cache aging (blueprint is implemented and pending review)
 2.  Adding a Vmware glance datastore – which will greatly improve the copy 
process described above

Thanks
Gary

From: Ray Sun <xiaoq...@gmail.com>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev@lists.openstack.org>
Date: Wednesday, January 8, 2014 4:30 AM
To: OpenStack Dev <openstack-dev@lists.openstack.org>
Subject: [openstack-dev] [Nova][Vmware]Bad Performance when creating a new VM

Stackers,
I tried to create a new VM using the driver VMwareVCDriver, but I found it's 
very slow when I try to create a new VM, for example, 7GB Windows Image spent 3 
hours.

Then I tried to use curl to upload a iso to vcenter directly.

curl -H "Expect:" -v --insecure --upload-file windows2012_server_cn_x64.iso 
"https://administrator:root123.@200.21.0.99/folder/iso/windows2012_server_cn_x64.iso?dcPath=dataCenter&dsName=datastore2"

The average speed is 0.8 MB/s.

Finally, I tried to use the vSphere web client to upload it; that was only 250 KB/s.

I am not sure if there are any special configuration options for the vCenter web 
interface. Please help.
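
(For reference, a rough Python equivalent of the curl upload above, using the
requests library with the same placeholder host, credentials and datastore
parameters; it only illustrates the HTTP call, it is not a performance fix.)

    # Sketch: PUT an ISO onto a vCenter datastore over HTTPS, mirroring the
    # curl command above. Host, credentials and datastore names are the
    # placeholders from that command.
    import requests

    url = ("https://200.21.0.99/folder/iso/windows2012_server_cn_x64.iso"
           "?dcPath=dataCenter&dsName=datastore2")
    with open("windows2012_server_cn_x64.iso", "rb") as iso:
        resp = requests.put(url, data=iso,
                            auth=("administrator", "root123."),
                            verify=False)  # matches curl --insecure
    print(resp.status_code)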

Best Regards
-- Ray

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack][Nova][cold migration] Why we need confirm resize after cold migration

2014-01-08 Thread John Garbutt
On 8 January 2014 10:02, David Xie  wrote:
> In nova/compute/api.py#2289, function resize, there's a parameter named
> flavor_id; if it is None, the request is treated as a cold migration. Thus, nova
> should skip the resize verification. However, it doesn't.
>
> Like Jay said, we should skip this step during cold migration; does that make
> sense?

Not sure.

> On Wed, Jan 8, 2014 at 5:52 PM, Jay Lau  wrote:
>>
>> Greetings,
>>
>> I have a question related to cold migration.
>>
>> Now in OpenStack nova, we support live migration, cold migration and
>> resize.
>>
>> For live migration, we do not need to confirm after the live migration
>> finishes.
>>
>> For resize, we need to confirm, as we want to give the end user an opportunity
>> to roll back.
>>
>> The problem is cold migration. Because cold migration and resize share the
>> same code path, once I submit a cold migration request and the cold
>> migration finishes, the VM goes to the verify_resize state, and I need to
>> confirm the resize. I am a bit confused by this: why do I need to verify
>> the resize for a cold migration operation? Why not return the VM to its
>> original state directly after cold migration?

I think the idea was to allow users/admins to check that everything went OK,
and only delete the original VM once they have confirmed that the move went
OK.

I thought there was an auto_confirm setting. Maybe you want
auto_confirm cold migrate, but not auto_confirm resize?

>> Also, I think we probably need to split compute.api.resize() into two
>> APIs: one for resize and the other for cold migration.
>>
>> 1) The VM state can be either ACTIVE or STOPPED for a resize operation
>> 2) The VM state must be STOPPED for a cold migrate operation.

We just stop the VM, then perform the migration.
I don't think we need to require that it is stopped first.
Am I missing something?

Thanks,
John

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Solum] Devstack gate is failing

2014-01-08 Thread Sean Dague

On 01/08/2014 09:26 AM, Noorul Islam Kamal Malmiyoda wrote:


On Jan 8, 2014 6:11 PM, "Sean Dague" <s...@dague.net> wrote:
 >
 > On 01/07/2014 11:27 PM, Noorul Islam Kamal Malmiyoda wrote:
 > > On Wed, Jan 8, 2014 at 9:43 AM, Georgy Okrokvertskhov
 > > <gokrokvertsk...@mirantis.com> wrote:
 > >> Should we rather revert patch to make gate working?
 > >>
 > >
 > > I think it is always good to have test packages reside in
 > > test-requirements.txt. So -1 on reverting that patch.
 > >
 > > Here [1] is a temporary solution.
 > >
 > > Regards,
 > > Noorul
 > >
 > > [1] https://review.openstack.org/65414
 >
 > If Solum is trying to be on the road to being an OpenStack project, why
 > would it go out of its way to introduce an incompatibility in the way
 > all the actual OpenStack packages work in the gate?
 >
 > Seems very silly to me, because you'll have to add oslo.sphinx back into
 > test-requirements.txt the second you want to be considered for
incubation.
 >

I am not sure why it seems silly to you. We are not removing
oslo.sphinx from the repository at all. We are just removing it before
installing the packages from test-requirements.txt in the devstack gate.
How does that affect incubation? Am I missing something?


So maybe I'm missing something. I don't see how the patch in question or 
the mailing list thread is related to the Solum failure. Perhaps being more 
specific about why removing oslo.sphinx from test-requirements.txt is 
the right workaround would be good, because the nature of the fix (hot 
patching requirements) suggests that something is not working as 
designed.


As far as I can tell this is just an ordering issue. So figure out the 
correct order that things need to happen in.


-Sean

--
Sean Dague
http://dague.net

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack][Nova][cold migration] Why we need confirm resize after cold migration

2014-01-08 Thread Russell Bryant
On 01/08/2014 04:52 AM, Jay Lau wrote:
> Greetings,
> 
> I have a question related to cold migration.
> 
> Now in OpenStack nova, we support live migration, cold migration and resize.
> 
> For live migration, we do not need to confirm after the live migration finishes.
> 
> For resize, we need to confirm, as we want to give the end user an
> opportunity to roll back.
> 
> The problem is cold migration. Because cold migration and resize share the
> same code path, once I submit a cold migration request and the
> cold migration finishes, the VM goes to the verify_resize state, and I
> need to confirm the resize. I am a bit confused by this: why do I need to
> verify the resize for a cold migration operation? Why not return the VM to
> its original state directly after cold migration?

The confirm step definitely makes more sense for the resize case.  I'm
not sure if there was a strong reason why it was also needed for cold
migration.

If nobody comes up with a good reason to keep it, I'm fine with removing
it.  It can't be changed in the v2 API, though.  This would be a v3 only
change.

> Also, I think we probably need to split compute.api.resize() into two
> APIs: one for resize and the other for cold migration.
> 
> 1) The VM state can be either ACTIVE or STOPPED for a resize operation
> 2) The VM state must be STOPPED for a cold migrate operation.

I'm not sure why we would require different states here, though.  ACTIVE
and STOPPED are allowed now.

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Bogus -1 scores from turbo hipster

2014-01-08 Thread Matt Riedemann



On Tuesday, January 07, 2014 4:53:01 PM, Michael Still wrote:

Hi. Thanks for reaching out about this.

It seems this patch has now passed turbo hipster, so I am going to
treat this as a more theoretical question than perhaps you intended. I
should note though that Joshua Hesketh and I have been trying to read
/ triage every turbo hipster failure, but that has been hard this week
because we're both at a conference.

The problem this patch faced is that we are having trouble defining
what is a reasonable amount of time for a database migration to run
for. Specifically:

2014-01-07 14:59:32,012 [output] 205 -> 206...
2014-01-07 14:59:32,848 [heartbeat]
2014-01-07 15:00:02,848 [heartbeat]
2014-01-07 15:00:32,849 [heartbeat]
2014-01-07 15:00:39,197 [output] done

So applying migration 206 took slightly over a minute (67 seconds).
Our historical data (mean + 2 standard deviations) says that this
migration should take no more than 63 seconds. So this only just
failed the test.
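
(For illustration, the check amounts to roughly the following sketch; the real
turbo-hipster code and its historical dataset are not shown here, and the
sample numbers are made up.)

    # Sketch of the timing check as described: a run fails when it takes
    # longer than mean + 2 standard deviations of the historical timings.
    import statistics

    history = [52, 55, 58, 60, 49, 57]   # seconds, illustrative only
    observed = 67                        # the run quoted above

    limit = statistics.mean(history) + 2 * statistics.stdev(history)
    if observed > limit:
        print("migration too slow: %ds > %.1fs" % (observed, limit))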

However, we know there are issues with our methodology -- we've tried
normalizing for disk IO bandwidth and it hasn't worked out as well as
we'd hoped. This week's plan is to try to use mysql performance schema
instead, but we have to learn more about how it works first.

I apologise for this mis-vote.

Michael

On Wed, Jan 8, 2014 at 1:44 AM, Matt Riedemann
 wrote:



On 12/30/2013 6:21 AM, Michael Still wrote:


Hi.

The purpose of this email is to apologise for some incorrect -1 review
scores which turbo hipster sent out today. I think it's important when
a third party testing tool is new to not have flakey results as people
learn to trust the tool, so I want to explain what happened here.

Turbo hipster is a system which takes nova code reviews, and runs
database upgrades against them to ensure that we can still upgrade for
users in the wild. It uses real user datasets, and also times
migrations and warns when they are too slow for large deployments. It
started voting on gerrit in the last week.

Turbo hipster uses zuul to learn about reviews in gerrit that it
should test. We run our own zuul instance, which talks to the
openstack.org zuul instance. This then hands out work to our pool of
testing workers. Another thing zuul does is it handles maintaining a
git repository for the workers to clone from.

This is where things went wrong today. For reasons I can't currently
explain, the git repo on our zuul instance ended up in a bad state (it
had a patch merged to master which wasn't in fact merged upstream
yet). As this code is stock zuul from openstack-infra, I have a
concern this might be a bug that other zuul users will see as well.

I've corrected the problem for now, and kicked off a recheck of any
patch with a -1 review score from turbo hipster in the last 24 hours.
I'll talk to the zuul maintainers tomorrow about the git problem and
see what we can learn.

Thanks heaps for your patience.

Michael



How do I interpret the warning and -1 from turbo-hipster on my patch here
[1] with the logs here [2]?

I'm inclined to just do 'recheck migrations' on this since this patch
doesn't have anything to do with this -1 as far as I can tell.

[1] https://review.openstack.org/#/c/64725/4/
[2]
https://ssl.rcbops.com/turbo_hipster/logviewer/?q=/turbo_hipster/results/64/64725/4/check/gate-real-db-upgrade_nova_mysql_user_001/5186e53/user_001.log

--

Thanks,

Matt Riedemann


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev






Another question.  This patch [1] failed turbo-hipster after it was 
approved but I don't know if that's a gating or just voting job, i.e. 
should someone do 'reverify migrations' on that patch or just let it 
sit and ignore turbo-hipster?


[1] https://review.openstack.org/#/c/59824/

--

Thanks,

Matt Riedemann


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] Common SSH

2014-01-08 Thread Sergey Skripnick





I'd like to explore whether the paramiko team will accept this code (or
something like it). This seems like a perfect opportunity for us to
contribute upstream.


+1

The patch is not big and the code seems simple and reasonable enough
to live within paramiko.

Cheers,
FF




I sent a pull request [0], but there are two things:

 - nobody knows when (and if) it will be merged
 - it is still a bit low-level, unlike a patch in oslo

About spur: spur looks OK, but it is a bit complicated inside (it uses
separate threads for non-blocking stdin/stderr reading [1]) and I don't
know how it would work with eventlet.

[0] https://github.com/paramiko/paramiko/pull/245
[1] https://github.com/mwilliamson/spur.py/blob/master/spur/io.py#L22
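
(For context, the sort of low-level paramiko boilerplate a common helper would
hide looks roughly like this; a generic sketch, not the code from the pull
request, and the host/credentials are placeholders.)

    import paramiko

    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect('node.example.org', username='stack',
                   key_filename='/path/to/key')        # placeholders
    stdin, stdout, stderr = client.exec_command('uname -a')
    status = stdout.channel.recv_exit_status()         # blocks until done
    print(status, stdout.read())
    client.close()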

--
Regards,
Sergey Skripnick

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][LBaaS] Weekly meeting Thursday 09.01.2014

2014-01-08 Thread Eugene Nikanorov
Hi neutrons,

Let's continue our regular LBaaS meetings. Let's gather in
#openstack-meeting at 14:00 UTC this Thursday, 09.01.2014.

We'll discuss our progress and future plans.

Thanks,
Eugene.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] olso.config error on running Devstack

2014-01-08 Thread Doug Hellmann
On Tue, Jan 7, 2014 at 12:32 PM, Ben Nemec  wrote:

>  On 2014-01-07 07:16, Doug Hellmann wrote:
>
>
>
>
> On Tue, Jan 7, 2014 at 6:24 AM, Michael Kerrin wrote:
>
>>  I have been seeing this problem also.
>>
>> My problem is actually with oslo.sphinx. I ran sudo pip install -r
>> test-requirements.txt in cinder so that I could run the tests there, which
>> installed oslo.sphinx.
>>
>> The strange thing is that oslo.sphinx installed a directory called oslo
>> in /usr/local/lib/python2.7/dist-packages with no __init__.py file. With
>> the package installed like this, I get the same error you get with
>> oslo.config.
>>
>
>  The oslo libraries use python namespace packages, which manifest
> themselves as a directory in site-packages (or dist-packages) with
> sub-packages but no __init__.py(c). That way oslo.sphinx and oslo.config
> can be packaged separately, but still installed under the "oslo" directory
> and imported as oslo.sphinx and oslo.config.
>
> My guess is that installing oslo.sphinx globally (with sudo), set up 2
> copies of the namespace package (one in the global dist-packages and
> presumably one in the virtualenv being used for the tests).
>
>   Actually I think it may be the opposite problem, at least where I'm
> currently running into this.  oslo.sphinx is only installed in the venv and
> it creates a namespace package there.  Then if you try to load oslo.config
> in the venv it looks in the namespace package, doesn't find it, and bails
> with a missing module error.
>
> I'm personally running into this in tempest - I can't even run pep8 out of
> the box because the sample config check fails due to missing oslo.config.
> Here's what I'm seeing:
>
> In the tox venv:
> (pep8)[fedora@devstack site-packages]$ ls oslo*
> oslo.sphinx-1.1-py2.7-nspkg.pth
>
> oslo:
> sphinx
>
> oslo.sphinx-1.1-py2.7.egg-info:
> dependency_links.txt  namespace_packages.txt  PKG-INFO top_level.txt
> installed-files.txt   not-zip-safeSOURCES.txt
>
>
> And in the system site-packages:
> [fedora@devstack site-packages]$ ls oslo*
> oslo.config.egg-link  oslo.messaging.egg-link
>
>
> Since I don't actually care about oslo.sphinx in this case, I also found
> that deleting it from the venv fixes the problem, but obviously that's just
> a hacky workaround.  My initial thought is to install oslo.sphinx in
> devstack the same way as oslo.config and oslo.messaging, but I assume
> there's a reason we didn't do it that way in the first place so I'm not
> sure if that will work.
>
> So I don't know what the proper fix is, but I thought I'd share what I've
> found so far.  Also, I'm not sure if this even relates to the ceilometer
> issue since I wouldn't expect that to be running in a venv, but it may have
> a similar issue.
>

I wonder if the issue is actually that we're using "pip install -e" for
oslo.config and oslo.messaging (as evidenced by the .egg-link files). Do
things work properly if those packages are installed to the global
site-packages from PyPI instead? We don't want to change the way devstack
installs them, but it would give us another data point.
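
A quick way to see which copy of the namespace package wins (a diagnostic
sketch to run inside the tox venv, not a fix):

    # Show which directories make up the "oslo" namespace package for this
    # interpreter. If only the venv copy (containing just oslo/sphinx) is
    # listed, oslo.config from the global site-packages will not import.
    import oslo
    print(oslo.__path__)

    try:
        import oslo.config  # noqa
        print("oslo.config importable")
    except ImportError as exc:
        print("oslo.config not visible in this namespace:", exc)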

Another solution is to have a list of dependencies needed for building
documentation, separate from the tests, since oslo.sphinx isn't needed for
the tests.

Doug



>
> -Ben
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Solum] Devstack gate is failing

2014-01-08 Thread Noorul Islam Kamal Malmiyoda
On Jan 8, 2014 6:11 PM, "Sean Dague"  wrote:
>
> On 01/07/2014 11:27 PM, Noorul Islam Kamal Malmiyoda wrote:
> > On Wed, Jan 8, 2014 at 9:43 AM, Georgy Okrokvertskhov
> >  wrote:
> >> Should we rather revert patch to make gate working?
> >>
> >
> > I think it is always good to have test packages reside in
> > test-requirements.txt. So -1 on reverting that patch.
> >
> > Here [1] is a temporary solution.
> >
> > Regards,
> > Noorul
> >
> > [1] https://review.openstack.org/65414
>
> If Solum is trying to be on the road to being an OpenStack project, why
> would it go out of its way to introduce an incompatibility in the way
> all the actual OpenStack packages work in the gate?
>
> Seems very silly to me, because you'll have to add oslo.sphinx back into
> test-requirements.txt the second you want to be considered for incubation.
>

I am not sure why it seems silly to you. We are not removing
oslo.sphinx from the repository at all. We are just removing it before installing
the packages from test-requirements.txt in the devstack gate. How does that
affect incubation? Am I missing something?

Regards,
Noorul

> -Sean
>
> --
> Sean Dague
> Samsung Research America
> s...@dague.net / sean.da...@samsung.com
> http://dague.net
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Vmware]Bad Performance when creating a new VM

2014-01-08 Thread Ray Sun
Gary,
Thanks. Currently, is our upload speed in the normal range?

Best Regards
-- Ray


On Wed, Jan 8, 2014 at 4:31 PM, Gary Kotton  wrote:

> Hi,
> In order for the VM to be booted, the image needs to be on a datastore
> accessible by the host. By default the datastore will not have the image.
> It is copied from glance to the datastore. This is most probably where
> the problem is. This may take a while depending on the connectivity between
> the openstack setup and your backend datastore. Once you have done this
> you will see a directory on the datastore called vmware_base. This will
> contain that image. From then on it should be smooth sailing.
> Please note that we are working on a number of things to improve this:
>
>1. Image cache aging (blueprint is implemented and pending review)
>2. Adding a Vmware glance datastore – which will greatly improve the
>copy process described above
>
> Thanks
> Gary
>
> From: Ray Sun 
> Reply-To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
> Date: Wednesday, January 8, 2014 4:30 AM
> To: OpenStack Dev 
> Subject: [openstack-dev] [Nova][Vmware]Bad Performance when creating a
> new VM
>
> Stackers,
> I tried to create a new VM using the driver VMwareVCDriver, but I found
> it's very slow when I try to create a new VM, for example, 7GB Windows
> Image spent 3 hours.
>
> Then I tried to use curl to upload a iso to vcenter directly.
>
> curl -H "Expect:" -v --insecure --upload-file
> windows2012_server_cn_x64.iso "
> https://administrator:root123.@200.21.0.99/folder/iso/windows2012_server_cn_x64.iso?dcPath=dataCenter&dsName=datastore2
> "
>
> The average speed is 0.8 MB/s.
>
> Finally, I tried to use the vSphere web client to upload it; that was only
> 250 KB/s.
>
> I am not sure if there are any special configuration options for the vCenter
> web interface. Please help.
>
> Best Regards
> -- Ray
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

