Re: [openstack-dev] [Infra][Solum][Mistral] New class of requirements for Stackforge projects

2014-06-26 Thread Adrian Otto
Ok,

I submitted and abandoned a couple of reviews[1][2] for a solution aimed to 
meet my goals without adding a new per-project requirements file. The flaw with 
this approach is that pip may install other requirements when installing the 
one(s) loaded from the fallback mirror, and those may conflict with the ones 
loaded from the primary mirror.

After discussing this further in #openstack-infra this evening, we should give 
serious consideration to adding python-mistralclient to global requirements. I 
have posted a review[3] for that to get input from the requirements review team.

Thanks,

Adrian

[1] https://review.openstack.org/102716
[2] https://review.openstack.org/102719
[3] https://review.openstack.org/102738

On Jun 25, 2014, at 9:51 PM, Matthew Oliver m...@oliver.net.au wrote:


On Jun 26, 2014 12:12 PM, Angus Salkeld angus.salk...@rackspace.com wrote:


 On 25/06/14 15:13, Clark Boylan wrote:
  On Tue, Jun 24, 2014 at 9:54 PM, Adrian Otto 
  adrian.o...@rackspace.commailto:adrian.o...@rackspace.com wrote:
  Hello,
 
  Solum has run into a constraint with the current scheme for requirements 
  management within the OpenStack CI system. We have a proposal for dealing 
  with this constraint that involves making a contribution to 
  openstack-infra. This message explains the constraint, and our proposal 
  for addressing it.
 
  == Background ==
 
  OpenStack uses a list of global requirements in the requirements repo[1], 
  and each project has its own requirements.txt and test-requirements.txt 
  files. The requirements are satisfied by gate jobs using pip configured to 
  use the pypi.openstack.org mirror, which is periodically updated with new 
  content from pypi.python.org. One motivation for doing this is that 
  pypi.python.org may not be as fast or as reliable as a local mirror. The 
  gate/check jobs for the projects use the OpenStack internal pypi mirror to 
  ensure stability.
 
  The OpenStack CI system will sync up the requirements across all the 
  official projects and will create reviews in the participating projects 
  for any mis-matches. Solum is one of these projects, and enjoys this 
  feature.
 
  Another motivation is so that users of OpenStack will have one single set 
  of python package requirements/dependencies to install and run the 
  individual OpenStack components.
 
  == Problem ==
 
  Stackforge projects listed in openstack/requirements/projects.txt that 
  decide to depend on each other (for example, Solum wanting to list 
  mistralclient as a requirement) are unable to, because they are not yet 
  integrated, and are not listed in 
  openstack/requirements/global-requirements.txt yet. This means that in 
  order to depend on each other, a project must withdraw from projects.txt 
  and begin using pip with pypi.python.org to satisfy all of their 
  requirements. I strongly dislike this option.
 
  Mistral is still evolving rapidly, and we don’t think it makes sense for 
  them to pursue integration right now. The upstream distributions who 
  include packages to support OpenStack will also prefer not to deal with a 
  requirement that will be cutting a new version every week or two in order 
  to satisfy evolving needs as Solum and other consumers of Mistral help 
  refine how it works.
 
  == Proposal ==
 
  We want the best of both worlds. We want the freedom to innovate and use 
  new software for a limited selection of stackforge projects, and still use 
  the OpenStack pypi server to satisfy our regular requirements. We want the 
  speed and reliability of using our local mirror, and users of Solum to use 
  a matching set of requirements for all the things that we use, and 
  integrated projects use. We want to continue getting the reviews that 
  bring us up to date with new requirements versions.
 
  We propose that we submit an enhancement to the gate/check job setup that 
  will:
 
  1) Begin (as it does today) by satisfying global-requirements.txt and our 
  local project’s requirements.txt and test-requirements.txt using the local 
  OpenStack pypi mirror.
  2) After all requirements are satisfied, check the name of the project. If 
  it begins with ‘stackforge/‘ then look for a stackforge-requirements.txt 
  file. If one exists, reconfigure pip to switch to use pypi.python.org, and 
  satisfy the requirements listed in the file. We will list mistralclient 
  there, and get the latest tagged/released version of that.
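
A rough sketch, purely for illustration, of how a gate job could implement the
two steps above (the mirror URL, the helper name and the 'stackforge/' prefix
check are assumptions, not actual openstack-infra code):

    import os
    import subprocess


    def install_requirements(project_name,
                             mirror='http://pypi.openstack.org/openstack/'):
        # Phase 1: satisfy the usual requirements files from the local mirror.
        for req_file in ('requirements.txt', 'test-requirements.txt'):
            if os.path.exists(req_file):
                subprocess.check_call(['pip', 'install',
                                       '--index-url', mirror,
                                       '-r', req_file])

        # Phase 2: stackforge projects may carry an extra file whose entries
        # are resolved against upstream PyPI instead of the mirror.
        extra = 'stackforge-requirements.txt'
        if project_name.startswith('stackforge/') and os.path.exists(extra):
            subprocess.check_call(['pip', 'install',
                                   '--index-url',
                                   'https://pypi.python.org/simple/',
                                   '-r', extra])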
 
  I am reasonably sure that if you remove yourself from the
  openstack/requirements project list this is basically how it will
  work. Pip is configured to use the OpenStack mirror and fall back on
  pypi.python.org for packages not available on 

Re: [openstack-dev] Creating new python-new_project_nameclient

2014-06-26 Thread Jamie Lennox
On Wed, 2014-06-25 at 22:42 -0500, Dean Troyer wrote:
 On Wed, Jun 25, 2014 at 10:18 PM, Aaron Rosen aaronoro...@gmail.com
 wrote:
 I'm looking at creating a new python-new_project_nameclient
 and I was wondering if there was any on going effort to share
 code between the clients or not? I've looked at the code in
 python-novaclient and python-neutronclient and both of them
 seem to have their own homegrown HTTPClient and keystone
 integration. Figured I'd ping the mailing list here before I
 go on and make my own homegrown HTTPClient as well. 
 
 
 For things in the library level of a client please consider using
 keystoneclient's fairly new session layer as the basis of your HTTP
 layer.  That will also give you access to the new style auth plugins,
 assuming you want to do Keystone auth with this client.
 
 
 I'm not sure if Jamie has any examples of using this without leaning
 on the backward-compatibility bits that the existing clients need.
 
 
 The Python SDK is being built on a similar Session layer (without the
 backward compat bits).
 
 
 dt 

I'd love to suggest following in the footsteps of the SDK, but it's just
a little early for that. 

Today the best thing I think would be to use the session from
keystoneclient, and copy and paste the adapter:
https://review.openstack.org/#/c/86237/ which is approved but not in a
release yet. A client object takes a session and kwargs and creates an
adapter with them.
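
For illustration, a minimal sketch of a client built that way; the service
type and function name are placeholders, and it assumes the adapter change
above is available in the installed keystoneclient:

    from keystoneclient import adapter
    from keystoneclient import session
    from keystoneclient.auth.identity import v2


    def make_client(auth_url, username, password, tenant_name):
        # The session owns the token, retries and TLS settings for every
        # request the client makes.
        auth = v2.Password(auth_url=auth_url, username=username,
                           password=password, tenant_name=tenant_name)
        sess = session.Session(auth=auth)

        # The adapter pins service-specific defaults (service type,
        # interface) so the rest of the client only deals with relative URLs.
        return adapter.Adapter(sess, service_type='my-new-service',
                               interface='public')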

Then reuse the managers from
https://github.com/openstack/oslo-incubator/blob/master/openstack/common/apiclient/base.py
I'm personally not a fan, but most of the other clients use this layout
and I assume you're more interested in getting things done in a standard
way than arguing over client design (If you are interested in that the
SDK project could always use a hand). Pass the adapter to the managers. 

Don't write a CLI; you can extend OSC to handle your new service. There
are no docs for it (that I'm aware of) but the included services all use
the plugin interface so you can copy from one of those. 

I don't have a pure example of these things, but if any of the above is
unclear feel free to find me on IRC and I'll step you through it.

Jamie

 
 
 -- 
 
 Dean Troyer
 dtro...@gmail.com
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [hacking] rules for removal

2014-06-26 Thread Martin Geisler
Sean Dague s...@dague.net writes:

 On 06/25/2014 07:53 AM, Martin Geisler wrote:
 Sean Dague s...@dague.net writes:
 
 I've only submitted some small trivial patches. As far as I could
 tell, Gerrit triggered a full test cycle when I just changed the
 commit message. That surprised me and made the reviews more
 time-consuming, especially because Jenkins would fail fairly often
 because of what looks like heisenbugs to me.

 We track them here - http://status.openstack.org/elastic-recheck/ -
 help always appreciated in fixing them. Most of them are actually race
 conditions that exist in OpenStack.

I should have said that I did come across that page. I tried looking for a
relevant bug, but I didn't manage to match the error messages I saw with the
known bugs.

I just counted the mails from Jenkins and I see 61 mails with build
succeeded and 12 mails with build failed. This is for patches that only
touch a comment line at the beginning of the file:

  https://review.openstack.org/#/q/owner:martin%2540geisler.net,n,z

That looks like a 20% failure rate. Is this normal or have I been extra
unlucky? :)

 I think optimizing the zuul path for commit message only changes would
 be useful. Today the pipeline only knows that there was a change.
 That's not something anyone's gotten around to yet.

Okay, thanks for the explanation.

-- 
Martin Geisler

http://google.com/+MartinGeisler


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Change in openstack/neutron-specs[master]: BGP dynamic routing

2014-06-26 Thread Jaume Devesa
Hi Keshava,

I'm afraid that your use case is out of the scope of this bp. We have
thought only to provide an admin API to share external/provider network
routes. Due to IP range overlapping, sharing tenant networks won't be
possible with my approach.

However, I think that once BGP is implemented, your use case would become
feasible with a few changes (letting the user create RoutingInstance
entities, adding a tenant_id field to that entity and using an underlying
VPN technology). Take a look at Nachi Ueno's spec:
https://review.openstack.org/#/c/93329/ I think it is very close to your
proposal.

Regards,
jaume


On 25 June 2014 19:51, A, Keshava keshav...@hp.com wrote:

  Hi,



  If BGP can export its private IPs to the upstream router, they can be
 placed in the corresponding VRF table w.r.t. that VPN.

 In the example below, geographically separated data centers host the same
 customers (Red and Blue).

 If BGP from the virtual network exports its IPs to the upstream BGP VRF
 table (of that VPN), then from there, using the carrier's carrier mechanism,
 BGP will carry the respective VRF route entries to the remote site's BGP
 routing table.

 During packet forwarding from the remote site, the remote router will look
 into its corresponding VRF table entries to determine the destination for
 that IP.

 Based on that, the packet will be forwarded over the respective VPN tunnel.



 *So the private IP should be advertised by BGP from the cloud under the
 respective VRF (VPN).*



 Thanks & regards,

 Keshava



 -Original Message-
 From: Jaume Devesa (Code Review) [mailto:rev...@openstack.org]
 Sent: Wednesday, June 25, 2014 3:06 PM
 To: Artem Dmytrenko
 Cc: Baldwin, Carl (OpenStack Neutron); mark mcclain; Sean M. Collins;
 Anthony Veiga; Pedro Marques; Nachi Ueno; YAMAMOTO Takashi; Itsuro Oda;
 fumihiko kakuma; A, Keshava
 Subject: Change in openstack/neutron-specs[master]: BGP dynamic routing



 Jaume Devesa has posted comments on this change.



 Change subject: BGP dynamic routing

 ..





 Patch Set 8:



 (3 comments)



 Hi keshava,



 I've tried to answer your questions, but I am not confident about your use
 case. Can you explain me, please? Or ping me anytime at irc (my nickname is
 devvesa)



 https://review.openstack.org/#/c/90833/8/specs/juno/bgp-dynamic-routing.rst

 File specs/juno/bgp-dynamic-routing.rst:



 Line 120: namespace in the compute node.

  Please make it clear: If DVR is turned-on BGP can be enabled or not ?

 BGP can be enabled. However, I think this paragraph mixes concepts in a way
 that may confuse people. I will delete it in the next patch.





 Line 123: to an upstream router. It does not require learning routes from
 the upstream

  Is it possible how to block learning from upstream peer router ?

 Yes, it is absolutely configurable. If you look at the RoutingInstance
 entity below, you will see that you can choose whether you want to learn
 or advertise the routes.





 Line 159: Overview

  Is it possible to enabled BGP only on private IP address ?

 With the RoutingInstance entity, you put together:



 1. Which peers you want to connect to

 2. Which networks are involved

 3. Whether to enable/disable discovery AND advertisement of routes.



 Also, you will be able to add advertised routes manually.



 So theoretically, you will be able to advertise a private IP address,
 although I cannot see the use case for this.
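
Purely as a hypothetical illustration of the entity being discussed (the
field names below are assumptions, not the spec):

    routing_instance = {
        'tenant_id': 'a1b2c3',          # owner (admin-only in the spec)
        'peers': ['192.0.2.1'],         # BGP peers to connect to
        'networks': ['net-uuid-1'],     # networks whose routes are involved
        'discover_routes': False,       # learn routes from the upstream peer?
        'advertise_routes': True,       # advertise routes to the upstream peer?
        'advertised_routes': [          # manually added advertisements
            {'destination': '203.0.113.0/24', 'next_hop': '198.51.100.5'},
        ],
    }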





 --

 To view, visit https://review.openstack.org/90833

 To unsubscribe, visit https://review.openstack.org/settings



 Gerrit-MessageType: comment

 Gerrit-Change-Id: I41b66c1c3083d7c8205368353302fafdb7a110c8

 Gerrit-PatchSet: 8

 Gerrit-Project: openstack/neutron-specs

 Gerrit-Branch: master

 Gerrit-Owner: Artem Dmytrenko nexton...@yahoo.com

 Gerrit-Reviewer: Anthony Veiga anthony_ve...@cable.comcast.com

 Gerrit-Reviewer: Artem Dmytrenko nexton...@yahoo.com

 Gerrit-Reviewer: Carl Baldwin carl.bald...@hp.com

 Gerrit-Reviewer: Itsuro Oda o...@valinux.co.jp

 Gerrit-Reviewer: Jaume Devesa devv...@gmail.com

 Gerrit-Reviewer: Jenkins

 Gerrit-Reviewer: Nachi Ueno na...@ntti3.com

 Gerrit-Reviewer: Pedro Marques pedro.r.marq...@gmail.com

 Gerrit-Reviewer: Sean M. Collins sean_colli...@cable.comcast.com

 Gerrit-Reviewer: YAMAMOTO Takashi yamam...@valinux.co.jp

 Gerrit-Reviewer: fumihiko kakuma kak...@valinux.co.jp

 Gerrit-Reviewer: keshava keshav...@hp.com

 Gerrit-Reviewer: mark mcclain mmccl...@yahoo-inc.com

 Gerrit-HasComments: Yes




-- 
Jaume Devesa
Software Engineer at Midokura
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Why is there a 'None' task_state between 'SCHEDULING' & 'BLOCK_DEVICE_MAPPING'?

2014-06-26 Thread wu jiang
 Hi Phil,

Thanks for your reply. So should I submit a patch/spec to add it
now?


On Wed, Jun 25, 2014 at 5:53 PM, Day, Phil philip@hp.com wrote:

  Looking at this a bit deeper, the comment in _start_building() says
  that it's doing this to “Save the host and launched_on fields and log
  appropriately”.  But as far as I can see those don’t actually get set
  until the claim is made against the resource tracker a bit later in the
  process, so this whole update might just be not needed – although I still
  like the idea of a state to show that the request has been taken off the
  queue by the compute manager.



 *From:* Day, Phil
 *Sent:* 25 June 2014 10:35

 *To:* OpenStack Development Mailing List
 *Subject:* RE: [openstack-dev] [nova] Why is there a 'None' task_state
  between 'SCHEDULING' & 'BLOCK_DEVICE_MAPPING'?



 Hi WingWJ,



 I agree that we shouldn’t have a task state of None while an operation is
 in progress.  I’m pretty sure back in the day this didn’t use to be the
 case and task_state stayed as Scheduling until it went to Networking  (now
 of course networking and BDM happen in parallel, so you have to be very
 quick to see the Networking state).



 Personally I would like to see the extra granularity of knowing that a
 request has been started on the compute manager (and knowing that the
 request was started rather than is still sitting on the queue makes the
 decision to put it into an error state when the manager is re-started more
 robust).



 Maybe a task state of “STARTING_BUILD” for this case ?



  BTW I don’t think _start_building() is called anymore now that we’ve
  switched to conductor calling build_and_run_instance() – but the same
  task_state issue exists in there as well.



  *From:* wu jiang [mailto:win...@gmail.com]
 *Sent:* 25 June 2014 08:19
 *To:* OpenStack Development Mailing List
 *Subject:* [openstack-dev] [nova] Why is there a 'None' task_state
  between 'SCHEDULING' & 'BLOCK_DEVICE_MAPPING'?



 Hi all,



 Recently, some of my instances were stuck in task_state 'None' during VM
 creation in my environment.



  So I checked & found there's a 'None' task_state between 'SCHEDULING' & 
 'BLOCK_DEVICE_MAPPING'.



  The related code is implemented like this:



  # def _start_building():
  #     self._instance_update(context, instance['uuid'],
  #                           vm_state=vm_states.BUILDING,
  #                           task_state=None,
  #                           expected_task_state=(task_states.SCHEDULING,
  #                                                None))



  So if the compute node is rebooted after that point, all building VMs on
  it will stay in the 'None' task_state forever. That is unhelpful and makes
  it hard to locate problems.



 Why not a new task_state for this step?





 WingWJ

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Barbican] Barebones CA

2014-06-26 Thread Clark, Robert Graham

On 26/06/2014 03:43, Nathan Kinder nkin...@redhat.com wrote:



On 06/25/2014 02:42 PM, Clark, Robert Graham wrote:
 
 Ok, I’ll hack together a dev plugin over the next week or so, other work
 notwithstanding. Where possible I’ll probably borrow from the Dogtag 
 plugin as I’ve not looked closely at the plugin infrastructure in
Barbican
 recently.

My understanding is that Barbican's plugin interface is currently in the
midst of a redesign, so be careful not to copy something that will be
changing shortly.

-NGK

Good point, thanks Nathan, I’ll try to keep the ‘do-PKI-stuff’ bit nicely
decoupled from the ‘barbican’ bit.
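
For context, a very rough sketch of the kind of self-contained OpenSSL
operation such a 'dev' plugin could perform; this is not Barbican's plugin
interface (which was being redesigned at the time) and assumes pyOpenSSL:

    from OpenSSL import crypto


    def make_self_signed_cert(common_name, days=365):
        key = crypto.PKey()
        key.generate_key(crypto.TYPE_RSA, 2048)

        cert = crypto.X509()
        cert.get_subject().CN = common_name
        cert.set_issuer(cert.get_subject())   # self-signed: issuer == subject
        cert.set_serial_number(1)
        cert.gmtime_adj_notBefore(0)
        cert.gmtime_adj_notAfter(days * 24 * 3600)
        cert.set_pubkey(key)
        cert.sign(key, 'sha256')

        return (crypto.dump_certificate(crypto.FILETYPE_PEM, cert),
                crypto.dump_privatekey(crypto.FILETYPE_PEM, key))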


 
 Is this something you’d like a blueprint for first?
 
 -Rob
 
 
 
 
 On 25/06/2014 18:30, Ade Lee a...@redhat.com wrote:
 
 I think the plan is to create a Dogtag instance so that integration
 tests can be run whenever code is checked in (both with and without a
 Dogtag backend).

 Dogtag isn't that difficult to deploy, but being a Java app, it does
 bring in a set of dependencies that developers may not want to deal
with
 for basic/ devstack testing.

 So, I agree that a simple OpenSSL CA may be useful at least initially
as
 a 'dev' plugin.

 Ade

 On Wed, 2014-06-25 at 16:31 +, Jarret Raim wrote:
 Rob,

 RedHat is working on a backend for Dogtag, which should be capable of
 doing something like that. That's still a bit hard to deploy, so it
 would
 make sense to extend the 'dev' plugin to include those features.


 Jarret


 On 6/24/14, 4:04 PM, Clark, Robert Graham robert.cl...@hp.com
wrote:

 Yeah pretty much.

 That’s something I’d be interested to work on, if work isn’t ongoing
 already.

 -Rob





 On 24/06/2014 18:57, John Wood john.w...@rackspace.com wrote:

 Hello Robert,

 I would actually hope we have a self-contained certificate plugin
 implementation that runs 'out of the box' to enable certificate
 generation orders to be evaluated and demo-ed on local boxes.

 Is this what you were thinking though?

 Thanks,
 John



 
 From: Clark, Robert Graham [robert.cl...@hp.com]
 Sent: Tuesday, June 24, 2014 10:36 AM
 To: OpenStack List
 Subject: [openstack-dev] [Barbican] Barebones CA

 Hi all,

 I’m sure this has been discussed somewhere and I’ve just missed it.

 Is there any value in creating a basic ‘CA’ and plugin to satisfy
 tests/integration in Barbican? I’m thinking something that probably
 performs OpenSSL certificate operations itself, ugly but perhaps
 useful
 for some things?

 -Rob

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Nova API meeting

2014-06-26 Thread Christopher Yeoh
Hi,

Just a reminder that the weekly Nova API meeting is being held tomorrow
Friday UTC . 

We encourage cloud operators and those who use the REST API such as
SDK developers and others who and are interested in the future of the
API to participate. 

In other timezones the meeting is at:

EST 20:00 (Thu)
Japan 09:00 (Fri)
China 08:00 (Fri)
ACDT 9:30 (Fri)

The proposed agenda and meeting details are here: 

https://wiki.openstack.org/wiki/Meetings/NovaAPI

Please feel free to add items to the agenda. 

Chris

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Use MariaDB by default on Fedora

2014-06-26 Thread Jan Provaznik

On 06/25/2014 06:58 PM, Giulio Fidente wrote:

On 06/16/2014 11:14 PM, Clint Byrum wrote:

Excerpts from Gregory Haynes's message of 2014-06-16 14:04:19 -0700:

Excerpts from Jan Provazník's message of 2014-06-16 20:28:29 +:

Hi,
MariaDB is now included in Fedora repositories, which makes it an easier to
install and more stable option for Fedora installations. Currently
MariaDB can be used by including the mariadb (use mariadb.org pkgs) or
mariadb-rdo (use Red Hat RDO pkgs) element when building an image. What
do you think about using MariaDB as the default option for Fedora when
running devtest scripts?


(first, I believe Jan means that MariaDB _Galera_ is now in Fedora)


I think so too.


I'd like to give this a try. This does start to change us from being a
deployment of OpenStack to being a deployment per distro but IMO that's a
reasonable position.

I'd also like to propose that if we decide against doing this then these
elements should not live in tripleo-image-elements.


I'm not so sure I agree. We have lio and tgt because lio is on RHEL but
everywhere else is still using tgt IIRC.

However, I also am not so sure that it is actually a good idea for people
to ship on MariaDB since it is not in the gate. As it diverges from MySQL
(starting in earnest with 10.x), there will undoubtedly be subtle issues
that arise. So I'd say having MariaDB get tested along with Fedora will
actually improve those users' test coverage, which is a good thing.


I am favourable to the idea of switching to mariadb for fedora based
distros.

Currently the default mysql element seems to be switching [1], yet for
ubuntu/debian only, from the percona provided binary tarball of mysql to
the percona provided packaged version of mysql.

In theory we could further update it to use percona provided packages of
mysql on fedora too but I'm not sure there is much interest in using
that combination where people gets mariadb and galera from the official
repos.



IIRC fedora packages for percona xtradb cluster are not provided (unless 
something has changed recently).



Using different defaults (and even drop support for one or another,
depending on the distro), seems to me a better approach in the long term.

Are there contrary opinions?

1. https://review.openstack.org/#/c/90134




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Removing old node logs

2014-06-26 Thread Vladimir Kozhukalov
Making a diagnostic snapshot for a particular environment is a good idea. But
the issue is still there.

We often have the situation where the user actually doesn't care about old
logs at all. He downloads the ISO, installs it and tries various installation
options (Ubuntu, CentOS, HA, Ceph, etc.). Sooner or later his hard drive is
full and he cannot even make the diagnostic snapshot. Handling free-space
management inside shotgun does not seem like a good idea, but we still need
to address this. The easiest way to do that is to delete old log directories
(via logrotate or nailgun itself); then the issue will at least become rather
rare. But, of course, the right way is to have a kind of monitoring system on
the master node that notifies the user when the disk is full, or launches a
kind of cleaning task.

OK, the right place to deal with removing old logs is logrotate. Currently
it just rotates files like this:
/var/log/remote/old-node.example.com/some.log ->
/var/log/remote/old-node.example.com/some.log.1.gz. But what it actually
should do is remove the whole directories that belong to nonexistent nodes,
right?
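
A minimal sketch of that idea, assuming the /var/log/remote layout above and
that the caller can supply the FQDNs of nodes that still exist; where this
would live (a logrotate script or nailgun itself) is exactly the open
question:

    import os
    import shutil

    LOG_ROOT = '/var/log/remote'


    def purge_orphan_node_logs(existing_fqdns):
        # Drop /var/log/remote/<fqdn> directories whose node no longer exists.
        for entry in os.listdir(LOG_ROOT):
            path = os.path.join(LOG_ROOT, entry)
            if os.path.isdir(path) and entry not in existing_fqdns:
                shutil.rmtree(path)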





Vladimir Kozhukalov


On Tue, Jun 24, 2014 at 9:19 PM, Andrey Danin ada...@mirantis.com wrote:

 +1 to @Aleksandr


 On Tue, Jun 24, 2014 at 8:32 PM, Aleksandr Didenko adide...@mirantis.com
 wrote:

 Yes, of course, snapshot for all nodes at once (like currently) should
 also be available.


 On Tue, Jun 24, 2014 at 7:27 PM, Igor Kalnitsky ikalnit...@mirantis.com
 wrote:

 Hello,

 @Aleks, it's a good idea to make snapshot per environment, but I think
 we can keep functionality to make snapshot for all nodes at once too.

 - Igor


 On Tue, Jun 24, 2014 at 6:38 PM, Aleksandr Didenko 
 adide...@mirantis.com wrote:

 Yeah, I thought about diagnostic snapshot too. Maybe it would be better
 to implement per-environment diagnostic snapshots? I.e. add diagnostic
 snapshot generate/download buttons/links in the environment actions tab.
 Such snapshot would contain info/logs about Fuel master node and nodes
 assigned to the environment only.


 On Tue, Jun 24, 2014 at 6:27 PM, Igor Kalnitsky 
 ikalnit...@mirantis.com wrote:

 Hi guys,

 What about our diagnostic snapshot?

 I mean we're going to make snapshot of entire /var/log and obviously
 these old logs will be included in snapshot. Should we skip them or
 such situation is ok?

 - Igor




 On Tue, Jun 24, 2014 at 5:57 PM, Aleksandr Didenko 
 adide...@mirantis.com wrote:

 Hi,

 If user runs some experiments with creating/deleting clusters, then
 taking care of old logs is under user's responsibility, I suppose. Fuel
 configures log rotation with compression for remote logs, so old logs 
 will
 be gzipped and will not take much space.

 In case of additional boolean parameter, the default value should be
 0-don't touch old logs.

 --
 Regards,
 Alex


 On Tue, Jun 24, 2014 at 4:07 PM, Vladimir Kozhukalov 
 vkozhuka...@mirantis.com wrote:

 Guys,

 What do you think of removing node logs on master node right after
 removing node from cluster?

 The issue is when user do experiments he creates and deletes
 clusters and old unused directories remain and take disk space. On the
 other hand, it is not so hard to imaging the situation when user would 
 like
 to be able to take a look in old logs.

 My suggestion here is to add a boolean parameter into settings which
 will manage this piece of logic (1-remove old logs, 0-don't touch old 
 logs).

 Thanks for your opinions.

 Vladimir Kozhukalov

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Andrey Danin
 ada...@mirantis.com
 skype: gcon.monolake

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org

Re: [openstack-dev] [Fuel] Support for plugins in fuel client

2014-06-26 Thread Andrey Danin
Cool. I have no objections.
On Jun 25, 2014 9:27 AM, Dmitriy Shulyak dshul...@mirantis.com wrote:

 As I mentioned, cliff uses a similar approach, extending the app by means of
 entry points, and is written by the same author.
 So I think stevedore will be used in cliff, or maybe is already used in newer
 versions.
 But apart from stevedore-like dynamic extensions, cliff provides modular
 layers for a CLI app; it is kind of a framework for writing
 CLI applications.
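
To make that concrete, a hypothetical sketch of a fuel client action written
as a cliff command and registered via an entry point; the class, module and
entry-point group names are illustrative only:

    from cliff.command import Command


    class NodeList(Command):
        """List nodes known to the Fuel master (illustrative only)."""

        def take_action(self, parsed_args):
            # A real command would call the Fuel API client here.
            self.app.stdout.write('id | status\n')

    # A separate package could then expose the command via an entry point in
    # its setup.py (the 'fuelclient' group name is an assumption):
    #   entry_points={'fuelclient': ['node-list = myplugin.commands:NodeList']}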


 On Tue, Jun 24, 2014 at 11:15 PM, Andrey Danin ada...@mirantis.com
 wrote:

 Why not to use stevedore?


 On Wed, Jun 18, 2014 at 1:42 PM, Igor Kalnitsky ikalnit...@mirantis.com
 wrote:

 Hi guys,

 Actually, I'm not a fan of cliff, but I think it's a good solution to
 use it in our fuel client.

 Here some pros:

 * pluggable design: we can encapsulate entire command logic in separate
 plugin file
 * builtin output formatters: we no need to implement various formatters
 to represent received data
 * interactive mode: cliff makes possible to provide a shell mode, just
 like psql do

 Well, I vote to use cliff inside fuel client. Yeah, I know, we need to
 rewrite a lot of code, but we
 can do it step-by-step.

 - Igor




 On Wed, Jun 18, 2014 at 9:14 AM, Dmitriy Shulyak dshul...@mirantis.com
 wrote:

 Hi folks,

 I am wondering what our story/vision is for plugins in the fuel client [1]?

 We can benefit from using cliff [2] as a framework for the fuel CLI; apart
 from common code
 for building CLI applications on top of argparse, it provides a nice
 feature that allows actions to be
 added dynamically by means of entry points (stevedore-like).

 So we will be able to add new actions for fuel client simply by
 installing separate packages with correct entry points.

 AFAIK stevedore is not used there, but I think it will be, since it has the
 same author and maintainer.

 Do we need this? Maybe there is other options?

 Thanks

 [1] https://github.com/stackforge/fuel-web/tree/master/fuelclient
 [2]  https://github.com/openstack/cliff

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Andrey Danin
 ada...@mirantis.com
 skype: gcon.monolake

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Murano] Nominations to Murano core

2014-06-26 Thread Ruslan Kamaldinov
I would like to nominate Serg Melikyan and Steve McLellan to Murano core.

Serge has been a significant reviewer in the Icehouse and Juno release cycles.

Steve has been providing consistent quality reviews and they continue
to get more frequent and better over time.


Thanks,
Ruslan

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Murano] Nominations to Murano core

2014-06-26 Thread Alexander Tivelkov
+1 on both Serge and Steve

--
Regards,
Alexander Tivelkov


On Thu, Jun 26, 2014 at 1:37 PM, Ruslan Kamaldinov rkamaldi...@mirantis.com
 wrote:

 I would like to nominate Serg Melikyan and Steve McLellan to Murano core.

 Serge has been a significant reviewer in the Icehouse and Juno release
 cycles.

 Steve has been providing consistent quality reviews and they continue
 to get more frequent and better over time.


 Thanks,
 Ruslan

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Mistral] Mistral test infrastructure proposal

2014-06-26 Thread Renat Akhmerov
Anastasia,

Thanks a lot for this etherpad. I’ve read it and left my comments. Overall I 
agree with the suggested plan.

Additionally, I would suggest we picture the overall package structure with a 
sub-project breakdown. E.g.:

mistral:
functionaltests/
  openstack/
  standalone/

python-mistralclient:
...

etc.

Renat Akhmerov
@ Mirantis Inc.



On 24 Jun 2014, at 14:39, Anastasia Kuznetsova akuznets...@mirantis.com wrote:

 (reposting due to lack of subject)
 
 Hello, everyone!
 
 I am happy to announce that Mistral team started working on test 
 infrastructure. Due to this fact I prepared etherpad 
 https://etherpad.openstack.org/p/MistralTests where I analysed what we have 
 and what we need to do. 
 
 I would like to get your feedback to start creating appropriate blueprints 
 and implement them.
 
 Regards,
 Anastasia Kuznetsova
 QA Engineer at Mirantis
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Mistral] Small questions re executor

2014-06-26 Thread Renat Akhmerov

On 25 Jun 2014, at 07:27, Dmitri Zimine dzim...@stackstorm.com wrote:

 * We must convey the action ERROR details back to the engine, and to the end 
 user. Log is not sufficient. How exactly? Via context? Via extra parameters 
 to convey_execution_results? Need a field in the model.
 https://github.com/stackforge/mistral/blob/master/mistral/engine/drivers/default/executor.py#L46-L59

This is a subject for upcoming refactoring. IMO task error should generally be 
a special case of result. Most likely, we’ll need to have a class Result 
encapsulating all the needed information rather than always thinking of the 
result as raw JSON.
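
Purely as an illustration of that idea (not Mistral's eventual design), such
a class could be as small as:

    class Result(object):
        """Outcome of a task action: either success data or error details."""

        def __init__(self, data=None, error=None):
            self.data = data      # JSON-serializable payload on success
            self.error = error    # error details conveyed back to the engine

        def is_error(self):
            return self.error is not None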

 * What is the reason to update status on task failure in handle_task_error 
 via direct DB access, not via convey_task_results? 
 https://github.com/stackforge/mistral/blob/master/mistral/engine/drivers/default/executor.py#L61
  Bypassing convey_task_results can cause grief from missing TRACE statements 
 to more serious stuff… And looks like we are failing the whole execution 
 there? Just because one action had failed? Please clarify the intend here. 
 Note: details may all go away while doing Refine Engine - Executor protocol 
 blueprint 
 https://blueprints.launchpad.net/mistral/+spec/mistral-engine-executor-protocol,
  we just need to clarify the intent

That was an initial design (that didn’t take lots of things into account). I 
agree this is all bad. Particularly, we can’t fail the whole execution at once.

Renat Akhmerov
@ Mirantis Inc.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Virtual Interface creation failed

2014-06-26 Thread tfreger

On 06/26/2014 06:06 AM, Edgar Magana Perdomo (eperdomo) wrote:

The link does not work for me!

Edgar

From: tfre...@redhat.com
Organization: Red Hat
Reply-To: tfre...@redhat.com, OpenStack Development Mailing List (not for 
usage questions) openstack-dev@lists.openstack.org
Date: Wednesday, June 25, 2014 at 6:57 AM
To: openstack-dev@lists.openstack.org

Subject: [openstack-dev] Virtual Interface creation failed

Hello,

During the tests of Multiple RPC, I've encountered a problem to create VMs.
Creation of 180 VMs succeeded.

But when I tried to create 200 VMs, some of the VMs failed with a resource 
problem (VCPU limitation), and the others failed with the following error:
vm failed -  {message: Virtual Interface creation failed, code: 500, created: 
2014-06-25T10:22:35Z} | | flavor | nano (10)

We can see from the Neutron server and Nova API logs, that Neutron got the Nova 
request and responded to it, but this connection fails somewhere between Nova 
API and Nova Compute.

Please see the exact logs: http://pastebin.test.redhat.com/217653


Tested with latest Icehouse version on RHEL 7.
Controller + Compute Node

All Nova and Neutron logs are attached.

Is this a known issue?
--
Thanks,
Toni



Please use this link, the first one was incorrect : 
http://paste.openstack.org/show/84954/


Thanks,
Toni

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] DVR SNAT shortcut

2014-06-26 Thread Zang MingJie
On Wed, Jun 25, 2014 at 10:29 PM, McCann, Jack jack.mcc...@hp.com wrote:
  If every compute node is
  assigned a public ip, is it technically able to improve SNAT packets
  w/o going through the network node ?

 It is technically possible to implement default SNAT at the compute node.

 One approach would be to use a single IP address per compute node as a
 default SNAT address shared by all VMs on that compute node.  While this
 optimizes for number of external IPs consumed per compute node, the downside
 is having VMs from different tenants sharing the same default SNAT IP address
 and conntrack table.  That downside may be acceptable for some deployments,
 but it is not acceptable in others.

To resolve the problem, we are using double SNAT:

first, set up one namespace for each router and SNAT the tenant IP ranges
to a separate range, say 169.254.255.0/24;

then, SNAT from 169.254.255.0/24 to the public network.

We are already using this method, and it has saved tons of IPs in our
deployment; only one public IP is required per router agent.
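
A rough sketch of how those two hops could be expressed as iptables rules
inside the router namespace; the addresses, the helper name and the use of
NETMAP for the 1:1 first mapping are assumptions about one possible
implementation, not the poster's actual code:

    import subprocess


    def add_double_snat(namespace, tenant_cidr, intermediate_cidr, public_ip):
        # Append a rule to the nat POSTROUTING chain inside the router
        # namespace.
        def nat_rule(*rule):
            subprocess.check_call(['ip', 'netns', 'exec', namespace,
                                   'iptables', '-t', 'nat',
                                   '-A', 'POSTROUTING'] + list(rule))

        # Hop 1: map the tenant range 1:1 onto the per-router link-local range.
        nat_rule('-s', tenant_cidr, '-j', 'NETMAP', '--to', intermediate_cidr)
        # Hop 2: everything in that range leaves the node behind one public IP.
        nat_rule('-s', intermediate_cidr, '-j', 'SNAT',
                 '--to-source', public_ip)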


 Another approach would be to use a single IP address per router per compute
 node.  This avoids the multi-tenant issue mentioned above, at the cost of
 consuming more IP addresses, potentially one default SNAT IP address for each
 VM on the compute server (which is the case when every VM on the compute node
 is from a different tenant and/or using a different router).  At that point
 you might as well give each VM a floating IP.

 Hence the approach taken with the initial DVR implementation is to keep
 default SNAT as a centralized service.

 - Jack

 -Original Message-
 From: Zang MingJie [mailto:zealot0...@gmail.com]
 Sent: Wednesday, June 25, 2014 6:34 AM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [Neutron] DVR SNAT shortcut

 On Wed, Jun 25, 2014 at 5:42 PM, Yongsheng Gong gong...@unitedstack.com 
 wrote:
  Hi,
  for each compute node to have SNAT to Internet, I think we have the
  drawbacks:
  1. SNAT is done in router, so each router will have to consume one public 
  IP
  on each compute node, which is money.

 SNAT can save more ips than wasted on floating ips

  2. for each compute node to go out to Internet, the compute node will have
  one more NIC, which connect to physical switch, which is money too
 

 Floating ip also need a public NIC on br-ex. Also we can use a
 separate vlan to handle the network, so this is not a problem

  So personally, I like the design:
   floating IPs and 1:N SNAT still use current network nodes, which will have
  HA solution enabled and we can have many l3 agents to host routers. but
  normal east/west traffic across compute nodes can use DVR.

 BTW, does HA implementation still active ? I haven't seen it has been
 touched for a while

 
  yong sheng gong
 
 
  On Wed, Jun 25, 2014 at 4:30 PM, Zang MingJie zealot0...@gmail.com wrote:
 
  Hi:
 
  In current DVR design, SNAT is north/south direction, but packets have
  to go west/east through the network node. If every compute node is
  assigned a public ip, is it technically able to improve SNAT packets
  w/o going through the network node ?
 
  SNAT versus floating ips, can save tons of public ips, in trade of
  introducing a single failure point, and limiting the bandwidth of the
  network node. If the SNAT performance problem can be solved, I'll
  encourage people to use SNAT over floating ips. unless the VM is
  serving a public service
 
  --
  Zang MingJie
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Removing old node logs

2014-06-26 Thread Andrey Danin
When nailgun removes a node it can gzip all the logs of this node into a
special file, like /var/log/remote/archive/node-3-timestamp.tgz, and
logrotate can keep these files for a month, then delete them.

Master node health monitor is another big discussion.
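
A small sketch of the archiving idea above, assuming the same /var/log/remote
layout; the function name and timestamp format are illustrative:

    import os
    import tarfile
    import time


    def archive_node_logs(fqdn, log_root='/var/log/remote',
                          archive_dir='/var/log/remote/archive'):
        # Pack the removed node's log directory into a timestamped tarball
        # that logrotate can later expire.
        src = os.path.join(log_root, fqdn)
        if not os.path.isdir(src):
            return None
        if not os.path.isdir(archive_dir):
            os.makedirs(archive_dir)
        dst = os.path.join(archive_dir, '%s-%d.tgz' % (fqdn, int(time.time())))
        with tarfile.open(dst, 'w:gz') as tar:
            tar.add(src, arcname=fqdn)
        return dst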


On Thu, Jun 26, 2014 at 1:25 PM, Vladimir Kozhukalov 
vkozhuka...@mirantis.com wrote:

 Making diagnostic snapshot for a particular environment is a good idea.
 But the issue is still there.

 We often have the situation when user actually doesn't care of old logs at
 all. He downloads ISO, installs it and tries various installation options
 (Ubuntu, Centos, HA, Ceph, etc.). Sooner or later his hard drive is full
 and he even can not make the diagnostic snapshot. Dealing with that stuff
 about taking care of available free space inside shotgun seems to be not a
 good idea. But we still need to address this. The easiest way to do that is
 to delete old log directories (logrotate or nailgun itself). In this case
 the issue at least will be rather seldom. But, of course, the right way is
 to have a kind of monitoring system on the master node and notify user when
 disk is full or launch a kind of cleaning task.

 Ok, right place where we should deal with that stuff about removing old
 logs is logrotate. Currently it just moves files like this
 /var/log/remote/old-node.example.com/some.log - /var/log/remote/
 old-node.example.com/some.log.1.gz. But what it actually should do is to
 remove the whole directories which are related to nonexistent nodes, right?





 Vladimir Kozhukalov


 On Tue, Jun 24, 2014 at 9:19 PM, Andrey Danin ada...@mirantis.com wrote:

 +1 to @Aleksandr


 On Tue, Jun 24, 2014 at 8:32 PM, Aleksandr Didenko adide...@mirantis.com
  wrote:

 Yes, of course, snapshot for all nodes at once (like currently) should
 also be available.


 On Tue, Jun 24, 2014 at 7:27 PM, Igor Kalnitsky ikalnit...@mirantis.com
  wrote:

 Hello,

 @Aleks, it's a good idea to make snapshot per environment, but I think
 we can keep functionality to make snapshot for all nodes at once too.

 - Igor


 On Tue, Jun 24, 2014 at 6:38 PM, Aleksandr Didenko 
 adide...@mirantis.com wrote:

 Yeah, I thought about diagnostic snapshot too. Maybe it would be
 better to implement per-environment diagnostic snapshots? I.e. add
 diagnostic snapshot generate/download buttons/links in the environment
 actions tab. Such snapshot would contain info/logs about Fuel master node
 and nodes assigned to the environment only.


 On Tue, Jun 24, 2014 at 6:27 PM, Igor Kalnitsky 
 ikalnit...@mirantis.com wrote:

 Hi guys,

 What about our diagnostic snapshot?

 I mean we're going to make snapshot of entire /var/log and obviously
 this old logs will be included in snapshot. Should we skip theem or
 such situation is ok?

 - Igor




 On Tue, Jun 24, 2014 at 5:57 PM, Aleksandr Didenko 
 adide...@mirantis.com wrote:

 Hi,

 If user runs some experiments with creating/deleting clusters, then
 taking care of old logs is under user's responsibility, I suppose. Fuel
 configures log rotation with compression for remote logs, so old logs 
 will
 be gzipped and will not take much space.

 In case of additional boolean parameter, the default value should be
 0-don't touch old logs.

 --
 Regards,
 Alex


 On Tue, Jun 24, 2014 at 4:07 PM, Vladimir Kozhukalov 
 vkozhuka...@mirantis.com wrote:

 Guys,

 What do you think of removing node logs on master node right after
 removing node from cluster?

 The issue is when user do experiments he creates and deletes
 clusters and old unused directories remain and take disk space. On the
 other hand, it is not so hard to imaging the situation when user would 
 like
 to be able to take a look in old logs.

 My suggestion here is to add a boolean parameter into settings
 which will manage this piece of logic (1-remove old logs, 0-don't 
 touch old
 logs).

 Thanks for your opinions.

 Vladimir Kozhukalov

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 

Re: [openstack-dev] [nova] Why is there a 'None' task_state between 'SCHEDULING' & 'BLOCK_DEVICE_MAPPING'?

2014-06-26 Thread Day, Phil
What do others think – do we want a spec to add an additional task_state value 
that will be set in a well defined place?   It feels like overkill to me in 
terms of the review effort that would take compared to just reviewing the code 
- it's not as if there are going to be lots of alternatives to consider here.
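
For reference, the change being discussed would be roughly of this shape; this
is only an illustrative sketch (STARTING_BUILD is not an existing nova task
state, and the surrounding method is simplified):

    from nova.compute import task_states
    from nova.compute import vm_states

    # Not an existing nova task state; named here only to illustrate the idea.
    STARTING_BUILD = 'starting_build'


    def _start_building(self, context, instance):
        # Sketch of the compute manager method: record that the request has
        # been taken off the queue, instead of resetting task_state to None.
        self._instance_update(context, instance['uuid'],
                              vm_state=vm_states.BUILDING,
                              task_state=STARTING_BUILD,
                              expected_task_state=(task_states.SCHEDULING,
                                                   None))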

From: wu jiang [mailto:win...@gmail.com]
Sent: 26 June 2014 09:19
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova] Why is there a 'None' task_state between 
'SCHEDULING' & 'BLOCK_DEVICE_MAPPING'?

 Hi Phil,

Thanks for your reply. So should I submit a patch/spec to add it now?

On Wed, Jun 25, 2014 at 5:53 PM, Day, Phil 
philip@hp.com wrote:
Looking at this a bit deeper, the comment in _start_building() says that it's 
doing this to “Save the host and launched_on fields and log appropriately”.
But as far as I can see those don’t actually get set until the claim is made 
against the resource tracker a bit later in the process, so this whole update 
might just be not needed – although I still like the idea of a state to show 
that the request has been taken off the queue by the compute manager.

From: Day, Phil
Sent: 25 June 2014 10:35

To: OpenStack Development Mailing List
Subject: RE: [openstack-dev] [nova] Why is there a 'None' task_state between 
'SCHEDULING' & 'BLOCK_DEVICE_MAPPING'?

Hi WingWJ,

I agree that we shouldn’t have a task state of None while an operation is in 
progress.  I’m pretty sure back in the day this didn’t use to be the case and 
task_state stayed as Scheduling until it went to Networking  (now of course 
networking and BDM happen in parallel, so you have to be very quick to see the 
Networking state).

Personally I would like to see the extra granularity of knowing that a request 
has been started on the compute manager (and knowing that the request was 
started rather than is still sitting on the queue makes the decision to put it 
into an error state when the manager is re-started more robust).

Maybe a task state of “STARTING_BUILD” for this case ?

BTW I don’t think _start_building() is called anymore now that we’ve switched 
to conductor calling build_and_run_instance() – but the same task_state issue 
exists in there as well.

From: wu jiang [mailto:win...@gmail.com]
Sent: 25 June 2014 08:19
To: OpenStack Development Mailing List
Subject: [openstack-dev] [nova] Why is there a 'None' task_state between 
'SCHEDULING' & 'BLOCK_DEVICE_MAPPING'?

Hi all,

Recently, some of my instances were stuck in task_state 'None' during VM 
creation in my environment.

So I checked & found there's a 'None' task_state between 'SCHEDULING' & 
'BLOCK_DEVICE_MAPPING'.

The related code is implemented like this:

# def _start_building():
#     self._instance_update(context, instance['uuid'],
#                           vm_state=vm_states.BUILDING,
#                           task_state=None,
#                           expected_task_state=(task_states.SCHEDULING,
#                                                None))

So if the compute node is rebooted after that point, all building VMs on it 
will stay in the 'None' task_state forever. That is unhelpful and makes it 
hard to locate problems.

Why not a new task_state for this step?


WingWJ

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Should TLS settings for listener be set through separate API/model?

2014-06-26 Thread Evgeny Fedoruk
Hi guys,

Stephen, I understand your concerns regarding misleading names.
Here are my thoughts:  
default_tls_container_id
This name is the same for API and database model and I think this name 
explains its meaning well.
sni_container_ids (for API) and listenersniassociations (for database table)
These two specify the same thing - the list of TLS container ids for the 
listener's SNI function.
Still, there is a difference: in the API it's just a list of IDs contained 
in the listener's API call, 
while in the database it specifies the association between listener ID 
and TLS container ID in a separate database table.  
As Brandon posted, database table names in Neutron are derived from 
the data model class names defining them.
The listenersniassociations table name actually comes from the 
ListenerSNIAssociation class that defines the table.
I understand there is no table for an SNI object in the neutron schema, but 
I could not think of a better name for this association table.
It could be named ListenerContainerAssociation, but this name does not 
clarify that it is for SNI (and there is no Containers table in Neutron's 
schema either).
Calling it ListenerSNIContainerAssociation may be too long.

These are my thoughts but I may be missing something; please propose 
alternative names you think of.

Thanks,
Evg 
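
To make the naming question concrete, an illustrative sketch (not the proposed
code; table and column names are assumptions) of how such an association could
be modelled:

    import sqlalchemy as sa
    from sqlalchemy.ext.declarative import declarative_base

    Base = declarative_base()


    class ListenerSNIAssociation(Base):
        __tablename__ = 'lbaas_listener_sni_associations'

        id = sa.Column(sa.String(36), primary_key=True)
        listener_id = sa.Column(sa.String(36),
                                sa.ForeignKey('lbaas_listeners.id'),
                                nullable=False)
        tls_container_id = sa.Column(sa.String(36), nullable=False)
        position = sa.Column(sa.Integer)  # preserves the ordering from the API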



-Original Message-
From: Brandon Logan [mailto:brandon.lo...@rackspace.com] 
Sent: Wednesday, June 25, 2014 11:00 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Neutron][LBaaS] Should TLS settings for listener 
be set through separate API/model?

Hi Stephen, 

The entityentityassociations table name is consistent with the rest of 
neutron's table names, as is not breaking the table name words up by an 
underscore.  I think this stems from the sqlalchemy models getting the table 
name for free because of inheriting from a base model that derives the table 
name based on the model's class name.

However, with markmcclain's blessing the new loadbalancing tables will be 
prefixed with lbaas_, but the model names will be LoadBalancer, Listener, etc.

I would agree though that since sni will not be a separate table then that will 
be a bit odd to have an association table's name implying a join of a table 
that doesn't exist.

Thanks,
Brandon

On Wed, 2014-06-25 at 09:55 -0700, Stephen Balukoff wrote:
 What's the point of putting off a potential name change to the actual 
 code (where you're going to see more friction because names in the 
 code do not match names in the spec, and this becomes a point where 
 confusion can happen). I understand the idea that code may not exactly 
 match the spec, but when it's obvious that it should, why use the 
 wrong name in the spec?
 
 
 Isn't it more confusing when the API does not match database object 
 names when it's clear the API is specifically meant to manipulate 
 those database objects?
 
 
 Is that naming convention actually documented anywhere? And why are 
 you calling it a 'listenersniassociations'? There is no SNI object 
 in the database. (IMO, this is a terrible name that needs to be 
 re-read three times just to pick out where the word breaks should be!
 As written it looks like Listeners NI Associations what the heck is 
 an 'NI'?)
 
 
 They say that there are two hard problems in Computer Science:
 * Cache invalidation
 * Naming things
 * Off-by-one errors
 
 
 And far be it from me to pick nits about a name (OK, I guess it's 
 isn't that far fetched for me to pick nits. :P ), but it's hard for me 
 to imagine a worse name than 'listenersniassocaitions' being 
 considered. :P
 
 
 Stephen
 
 
 
 
 On Wed, Jun 25, 2014 at 2:05 AM, Evgeny Fedoruk evge...@radware.com
 wrote:
 Hi folks
 
  
 
 Regarding names, there are two types of them: new API
 attributes for REST call,  and new column name and table name
 for the database.
 
 When creating listener, 2 new attributes will be added to the
 REST call API: 
 
 1.  default_tls_container_id - Barbican TLS container uuid
 
 2.  sni_container_ids (I removed the “_list” part to make
 it shorter) – ordered list of Barbican TLS container uuids
 
 For the database, these will be translated to:
 
 1.  default_tls_container_id- new column for listeners
 table
 
 2.  listenersniassociations (changed it from
 vipsniassociations which is a mistake) – new associations
 table, holding: id(generated), listener_id, TLS_container_id,
 and position(for ordering)
 
 This kind of a name comes to comply current neutron’s table
 name convention, like pollmonitorassociation or
 providerresourceassociation
 
  
 
 I think names may always be an issue for the actual code
   

Re: [openstack-dev] [nova] should we have a stale data indication in nova list/show?

2014-06-26 Thread Day, Phil
 -Original Message-
 From: Ahmed RAHAL [mailto:ara...@iweb.com]
 Sent: 25 June 2014 20:25
 To: openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [nova] should we have a stale data indication in
 nova list/show?
 
  On 2014-06-25 14:26, Day, Phil wrote:
  -Original Message-
  From: Sean Dague [mailto:s...@dague.net]
  Sent: 25 June 2014 11:49
  To: OpenStack Development Mailing List (not for usage questions)
  Subject: Re: [openstack-dev] [nova] should we have a stale data
  indication in nova list/show?
 
 
  +1 that the state shouldn't be changed.
 
  What about if we exposed the last updated time to users and allowed
 them to decide if its significant or not ?
 
 
 This would just indicate the last operation's time stamp.
 There already is a field in nova show called 'updated' that has some kind of
 indication. I honestly do not know who updates that field, but if anything,
 this existing field could/should be used.
 
 
Doh ! - yes that is the updated_at value in the DB.

I'd missed the last bit of my train of thought on this, which was that we could 
make the periodic task which checks (and corrects) the instance state update 
the updated_at timestamp even if the state is unchanged.

However that does add another DB update per instance every 60 seconds, and I'm 
with Joe that I'm really not convinced this is taking the Nova view of Status 
in the right direction.   Better to educate / document the limitation of status 
as they stand than to try and change it I think.

Phil

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] top gate bug is libvirt snapshot

2014-06-26 Thread Sean Dague
While the Trusty transition was mostly uneventful, it has exposed a
particular issue in libvirt, which is generating ~ 25% failure rate now
on most tempest jobs.

As can be seen here -
https://github.com/openstack/nova/blob/master/nova/virt/libvirt/driver.py#L294-L297


... the libvirt live_snapshot code is something that our test pipeline
has never tested before, because it wasn't a new enough libvirt for us
to take that path.

Right now it's exploding, a lot -
https://bugs.launchpad.net/nova/+bug/1334398

Snapshotting gets used in Tempest to create images for testing, so image
setup tests are doing a decent number of snapshots. If I had to take a
completely *wild guess*, it's that libvirt can't do 2 live_snapshots at
the same time. It's probably something that most people haven't hit. The
wild guess is based on other libvirt issues we've hit that other people
haven't, and they are basically always a parallel ops triggered problem.

My 'stop the bleeding' suggested fix is this -
https://review.openstack.org/#/c/102643/ which just effectively disables
this code path for now. Then we can get some libvirt experts engaged to
help figure out the right long term fix.

I think there are a couple:

1) see if newer libvirt fixes this (1.2.5 just came out), and if so
mandate some known working version. This would actually take a bunch
of work to be able to test a non packaged libvirt in our pipeline. We'd
need volunteers for that.

2) lock snapshot operations in nova-compute, so that we can only do 1 at
a time. Hopefully it's just 2 snapshot operations that is the issue, not
any other libvirt op during a snapshot, so serializing snapshot ops in
n-compute could put the kid gloves on libvirt and make it not break
here. This also needs some volunteers as we're going to be playing a
game of progressive serialization until we get to a point where it looks
like the failures go away.

3) Roll back to precise. I put this idea here for completeness, but I
think it's a terrible choice. This is one isolated, previously untested
(by us), code path. We can't stay on libvirt 0.9.6 forever, so actually
need to fix this for real (be it in nova's use of libvirt, or libvirt
itself).

There might be other options as well, ideas welcomed.
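
Re option 2 above, the first step of that progressive serialization could be
as simple as a per-process lock around the snapshot call (sketch only; in Nova
itself this would presumably go through the existing lockutils/synchronized
helpers rather than a bare lock):

    import threading

    _live_snapshot_lock = threading.Lock()

    def serialized_live_snapshot(do_live_snapshot, *args, **kwargs):
        # Allow only one live snapshot at a time on this compute host, on the
        # guess that concurrent live_snapshot calls are what trips up libvirt.
        with _live_snapshot_lock:
            return do_live_snapshot(*args, **kwargs)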

But for right now, we should stop the bleeding, so that nova/libvirt
isn't blocking everyone else from merging code.

-Sean

-- 
Sean Dague
http://dague.net



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Manila] Query on docstrings and method names

2014-06-26 Thread Deepak Shetty
Hi All,
With respect to the comment made by xian-yang @
https://review.openstack.org/#/c/102496/1/manila/share/drivers/glusterfs.py

for _update_share_status and the docstring that the method has, which is
Retrieve status info from share volume group.

I have a few questions based on the above...

1) share volume group in the docstring is incorrect, since it's a glusterfs
driver. But I think I know why it says volume group, probably because it
came from lvm.py to begin with. I see that all other drivers also say
volume group, tho' it may not be the right thing to say for their
respective case.

Do we want to ensure that the docstrings are put in a way that's meaningful
to the driver?

2) _update_share_status method - I see the same issue here.. it says the
same in all other drivers.. but as xian pointed out, it should rightfully be
called _update_share_stats. So should we wait for all drivers to follow suit
or start changing the driver-specific code as and when we touch that
part of the code?
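
For illustration, the per-driver fix being discussed might end up looking
something like this in the glusterfs driver (class and method bodies here are
stand-ins, not merged code):

    class GlusterfsShareDriver(object):
        # Stand-in for the real class in manila/share/drivers/glusterfs.py.

        def _update_share_stats(self):
            """Retrieve stats info from the GlusterFS volume."""
            # Backend-specific wording: report the GlusterFS volume's capacity
            # and usage instead of an LVM-era "share volume group".
            self._stats = {}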

thanx,
deepak
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] top gate bug is libvirt snapshot

2014-06-26 Thread Daniel P. Berrange
On Thu, Jun 26, 2014 at 07:00:32AM -0400, Sean Dague wrote:
 While the Trusty transition was mostly uneventful, it has exposed a
 particular issue in libvirt, which is generating ~ 25% failure rate now
 on most tempest jobs.
 
 As can be seen here -
 https://github.com/openstack/nova/blob/master/nova/virt/libvirt/driver.py#L294-L297
 
 
 ... the libvirt live_snapshot code is something that our test pipeline
 has never tested before, because it wasn't a new enough libvirt for us
 to take that path.
 
 Right now it's exploding, a lot -
 https://bugs.launchpad.net/nova/+bug/1334398
 
 Snapshotting gets used in Tempest to create images for testing, so image
 setup tests are doing a decent number of snapshots. If I had to take a
 completely *wild guess*, it's that libvirt can't do 2 live_snapshots at
 the same time. It's probably something that most people haven't hit. The
 wild guess is based on other libvirt issues we've hit that other people
 haven't, and they are basically always a parallel ops triggered problem.
 
 My 'stop the bleeding' suggested fix is this -
 https://review.openstack.org/#/c/102643/ which just effectively disables
 this code path for now. Then we can get some libvirt experts engaged to
 help figure out the right long term fix.

Yes, this is a sensible pragmatic workaround for the short term until
we diagnose the root cause and fix it.

 I think there are a couple:
 
 1) see if newer libvirt fixes this (1.2.5 just came out), and if so
 mandate some known working version. This would actually take a bunch
 of work to be able to test a non packaged libvirt in our pipeline. We'd
 need volunteers for that.
 
 2) lock snapshot operations in nova-compute, so that we can only do 1 at
 a time. Hopefully it's just 2 snapshot operations that is the issue, not
 any other libvirt op during a snapshot, so serializing snapshot ops in
 n-compute could put the kid gloves on libvirt and make it not break
 here. This also needs some volunteers as we're going to be playing a
 game of progressive serialization until we get to a point where it looks
 like the failures go away.
 
 3) Roll back to precise. I put this idea here for completeness, but I
 think it's a terrible choice. This is one isolated, previously untested
 (by us), code path. We can't stay on libvirt 0.9.6 forever, so actually
 need to fix this for real (be it in nova's use of libvirt, or libvirt
 itself).

Yep, since we *never* tested this code path in the gate before, rolling
back to precise would not even really be a fix for the problem. It would
merely mean we're not testing the code path again, which is really akin
to sticking our head in the sand.

 But for right now, we should stop the bleeding, so that nova/libvirt
 isn't blocking everyone else from merging code.

Agreed, we should merge the hack and treat the bug as a release blocker
to be resolved prior to Juno GA.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [TripleO] os-refresh-config run frequency

2014-06-26 Thread Macdonald-Wallace, Matthew
Hi all,

I've been working more and more with TripleO recently and whilst it does seem 
to solve a number of problems well, I have found a couple of idiosyncrasies 
that I feel would be easy to address.

My primary concern lies in the fact that os-refresh-config does not run on 
every boot/reboot of a system.  Surely a reboot *is* a configuration change and 
therefore we should ensure that the box has come up in the expected state with 
the correct config?

This is easily fixed through the addition of an @reboot entry in /etc/crontab 
to run o-r-c or (less easily) by re-designing o-r-c to run as a service.

My secondary concern is that through not running os-refresh-config on a regular 
basis by default (i.e. every 15 minutes or something in the same style as 
chef/cfengine/puppet), we leave ourselves exposed to someone trying to make a 
quick fix to a production node and taking that node offline the next time it 
reboots because the config was still left as broken owing to a lack of updates 
to HEAT (I'm thinking a quick change to allow root access via SSH during a 
major incident that is then left unchanged for months because no-one updated 
HEAT).

There are a number of options to fix this, including modifying os-collect-config 
to auto-run os-refresh-config on a regular basis, or setting os-refresh-config 
up as its own service (running via upstart or similar) that triggers every 15 
minutes.
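
As a very rough sketch of the first option (the 15-minute interval is just the
example from above, and os-collect-config's real internals are not shown;
os-refresh-config is assumed to be on the PATH):

    import subprocess
    import time

    REFRESH_INTERVAL = 15 * 60  # seconds, matching the 15-minute example above

    def run_refresh_periodically():
        # Naive loop: re-assert configuration on a fixed interval, whether or
        # not new metadata was collected, so manual drift gets reverted.
        while True:
            subprocess.call(['os-refresh-config'])
            time.sleep(REFRESH_INTERVAL)

    if __name__ == '__main__':
        run_refresh_periodically()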

I'm sure there are other solutions to these problems, however I know from 
experience that claiming this is solved through education of users or (more 
severely!) via HR is not a sensible approach to take as by the time you realise 
that your configuration has been changed for the last 24 hours it's often too 
late!

I'd welcome thoughts on the above,

Kind regards,

Matt

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Spec Review Day Today!

2014-06-26 Thread John Garbutt
Thanks all, I think we made a real dent in the review queue yesterday.

On 25 June 2014 22:58, Russell Bryant rbry...@redhat.com wrote:
 The majority of specs are waiting on an update from the submitter.  I
 didn't grab these stats before today, but I believe we made some good
 progress.
 Using: $ openreviews -u russellb -p projects/nova-specs.json

 Projects: [u'nova-specs']
 -- Total Open Reviews: 111
 -- Waiting on Submitter: 87
 -- Waiting on Reviewer: 24

A massive improvement, thank you!

 The wait times still aren't amazing though.  There's a pretty big
 increase in wait time as you look at older ones.  It seems there's a set
 we've been all avoiding for one reason or another that deserve some sort
 of answer, even if it's we're just not interested enough.
 -- Stats since the latest revision:
  Average wait time: 12 days, 11 hours, 30 minutes
 -- Stats since the last revision without -1 or -2 :
  Average wait time: 13 days, 1 hours, 8 minutes
 -- Stats since the first revision (total age):
  Average wait time: 42 days, 18 hours, 44 minutes

+1

I spent a lot of yesterday abandoning spec reviews that were waiting
on the authors to respond for over a month or so. I will keep doing
this now and then.

In that process I certainly found quite a few reviews at the end of
the queue that had been forgotten. I just didn't see them in a big
sea of red at the end of the queue.

I will personally try and watch these stats more closely, and spot
people who get forgotten!

 Another stat we can look at is how many specs we merged:

 https://review.openstack.org/#/q/project:openstack/nova-specs+status:merged,n,z

 It looks like we've merged 11 specs in the last 24 hours or so, and only
 about 5 more if we look at the last week.

A few extra ones this morning too, not many, but we are really chewing
through more than we have before.

Big thank you to Russell for reminding me about the stats; we should
be more organised next time, and take the before readings!

Many thanks,
John

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Murano] Nominations to Murano core

2014-06-26 Thread Stan Lagun
+1 for both (or should I say +2?)

Sincerely yours,
Stan Lagun
Principal Software Engineer @ Mirantis

 sla...@mirantis.com


On Thu, Jun 26, 2014 at 1:49 PM, Alexander Tivelkov ativel...@mirantis.com
wrote:

 +1 on both Serge and Steve

 --
 Regards,
 Alexander Tivelkov


 On Thu, Jun 26, 2014 at 1:37 PM, Ruslan Kamaldinov 
 rkamaldi...@mirantis.com wrote:

 I would like to nominate Serg Melikyan and Steve McLellan to Murano core.

 Serge has been a significant reviewer in the Icehouse and Juno release
 cycles.

 Steve has been providing consistent quality reviews and they continue
 to get more frequent and better over time.


 Thanks,
 Ruslan

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] DVR SNAT shortcut

2014-06-26 Thread CARVER, PAUL






 Original message 
From: Yi Sun beyo...@gmail.com
Date:
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Neutron] DVR SNAT shortcut




Yi wrote:
+1, I had another email to discuss about FW (FWaaS) and DVR integration. 
Traditionally, we run firewall with router so that firewall can use route and 
NAT info from router. since DVR is asymmetric when handling traffic, it is hard 
to run stateful firewall on top of DVR just like a traditional firewall does . 
When the NAT is in the picture, the situation can be even worse.
Yi


Don't forget logging either. In any security conscious environment, 
particularly any place with legal/regulatory/contractual audit requirements, a 
firewall that doesn't keep full logs of all dropped and passed sessions is 
worthless.

Stateless packet dropping doesn't help at all when conducting forensics on an 
attack that is already known to have occurred.



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [cinder] volume creation failed.

2014-06-26 Thread Yogesh Prasad
Hi All,

I have a devstack setup, and I am trying to create a volume, but it is
created with 'error' status.
Can anyone tell me what the problem is?

Screen logs --

.py:297
2014-06-26 17:37:04.370 DEBUG keystone.notifications [-] CADF Event:
{'typeURI': 'http://schemas.dmtf.org/cloud/audit/1.0/event', 'initiator':
{'typeURI': 'service/security/account/user', 'host': {'agent':
'python-keystoneclient', 'address': '20.10.22.245'}, 'id':
'openstack:d58d5688-f604-4362-9069-8cb217c029c8', 'name':
u'6fcd84d16da646dc825411da06bf26b2'}, 'target': {'typeURI':
'service/security/account/user', 'id':
'openstack:85ef43dd-b0ab-4726-898e-36107b06a231'}, 'observer': {'typeURI':
'service/security', 'id':
'openstack:120866e8-51b9-4338-b41b-2dbea3aa4f17'}, 'eventType': 'activity',
'eventTime': '2014-06-26T12:07:04.368547+', 'action': 'authenticate',
'outcome': 'success', 'id':
'openstack:dda01da7-1274-4b4f-8ff5-1dcdb6d80ff4'} from (pid=7033)
_send_audit_notification /opt/stack/keystone/keystone/notifications.py:297
2014-06-26 17:37:04.902 INFO eventlet.wsgi.server [-] 20.10.22.245 - -
[26/Jun/2014 17:37:04] POST /v2.0//tokens HTTP/1.1 200 6913 0.771471
2014-06-26 17:37:04.992 DEBUG keystone.middleware.core [-] RBAC:
auth_context: {'is_delegated_auth': False, 'user_id':
u'27353284443e43278600949a1467c65f', 'roles': [u'admin', u'_member_'],
'trustee_id': None, 'trustor_id': None, 'project_id':
u'e19957e0d69c4bfc9a9f872a2fcee1a3', 'trust_id': None} from (pid=7033)
process_request /opt/stack/keystone/keystone/middleware/core.py:286
2014-06-26 17:37:05.009 DEBUG keystone.common.wsgi [-] arg_dict: {} from
(pid=7033) __call__ /opt/stack/keystone/keystone/common/wsgi.py:181
2014-06-26 17:37:05.023 DEBUG keystone.common.controller [-] RBAC:
Authorizing identity:revocation_list() from (pid=7033)
_build_policy_check_credentials
/opt/stack/keystone/keystone/common/controller.py:54
2014-06-26 17:37:05.027 DEBUG keystone.common.controller [-] RBAC: using
auth context from the request environment from (pid=7033)
_build_policy_check_credentials
/opt/stack/keystone/keystone/common/controller.py:59
2014-06-26 17:37:05.033 DEBUG keystone.policy.backends.rules [-] enforce
identity:revocation_list: {'is_delegated_auth': False, 'user_id':
u'27353284443e43278600949a1467c65f', 'roles': [u'admin', u'_member_'],
'trustee_id': None, 'trustor_id': None, 'project_id':
u'e19957e0d69c4bfc9a9f872a2fcee1a3', 'trust_id': None} from (pid=7033)
enforce /opt/stack/keystone/keystone/policy/backends/rules.py:101
2014-06-26 17:37:05.040 DEBUG keystone.openstack.common.policy [-] Rule
identity:revocation_list will be now enforced from (pid=7033) enforce
/opt/stack/keystone/keystone/openstack/common/policy.py:288
2014-06-26 17:37:05.043 DEBUG keystone.common.controller [-] RBAC:
Authorization granted from (pid=7033) inner
/opt/stack/keystone/keystone/common/controller.py:151
2014-06-26 17:37:05.228 INFO eventlet.wsgi.server [-] 20.10.22.245 - -
[26/Jun/2014 17:37:05] GET /v2.0/tokens/revoked HTTP/1.1 200 815 0.277525

-- 
*Thanks & Regards*,
  Yogesh Prasad.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Two questions about 'backup' API

2014-06-26 Thread wu jiang
Hi all,

I tested the 'backup' API recently and got two questions about it:

1. Why do 'daily' & 'weekly' appear in code comments & novaclient for the
'backup_type' parameter?

The 'backup_type' parameter is only a tag for this backup (image),
and there isn't any corresponding validation of 'backup_type' for these two
types.

Moreover, there is also no periodic_task for 'backup' in the compute host.
(It's fair to leave the choice to other third-party systems.)

So why do we leave the 'daily | weekly' example in code comments and novaclient?
IMO it may lead to confusion that Nova will do more for a 'daily|weekly'
backup request.

2. Is it necessary to back up an instance when 'rotation' is equal to 0?

Let's look at related codes in nova/compute/manager.py:
# def backup_instance(self, context, image_id, instance, backup_type,
rotation):
#
#self._do_snapshot_instance(context, image_id, instance, rotation)
#self._rotate_backups(context, instance, backup_type, rotation)

I know Nova will delete all backup images matching the 'backup_type'
parameter when 'rotation' equals 0.

But according to the logic above, Nova will generate one new backup in
_do_snapshot_instance(), and then delete it in _rotate_backups().

It's weird to snapshot a useless backup first, IMO.
We need to add one new branch here: if 'rotation' is equal to 0, there is no
need to back up; just rotate.
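
In other words, the proposed branch would look roughly like this (sketch only,
reusing the commented-out manager code quoted above; not a tested patch):

    def backup_instance(self, context, image_id, instance, backup_type, rotation):
        # Proposed: when rotation == 0 there is no point creating a snapshot
        # that _rotate_backups() will delete straight away; just prune.
        if rotation:
            self._do_snapshot_instance(context, image_id, instance, rotation)
        self._rotate_backups(context, instance, backup_type, rotation)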


So, what are your opinions? I look forward to your suggestions.
Thanks.

WingWJ
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] volume creation failed.

2014-06-26 Thread Duncan Thomas
I'm afraid that isn't the log we need to diagnose your problem. Can
you put cinder-api, cinder-scheduler and cinder-volume logs up please?

On 26 June 2014 13:12, Yogesh Prasad yogesh.pra...@cloudbyte.com wrote:
 Hi All,

 I have a devstack setup , and i am trying to create a volume but it is
 creating with error status.
 Can any one tell me what is the problem?

 Screen logs --

 .py:297
 2014-06-26 17:37:04.370 DEBUG keystone.notifications [-] CADF Event:
 {'typeURI': 'http://schemas.dmtf.org/cloud/audit/1.0/event', 'initiator':
 {'typeURI': 'service/security/account/user', 'host': {'agent':
 'python-keystoneclient', 'address': '20.10.22.245'}, 'id':
 'openstack:d58d5688-f604-4362-9069-8cb217c029c8', 'name':
 u'6fcd84d16da646dc825411da06bf26b2'}, 'target': {'typeURI':
 'service/security/account/user', 'id':
 'openstack:85ef43dd-b0ab-4726-898e-36107b06a231'}, 'observer': {'typeURI':
 'service/security', 'id': 'openstack:120866e8-51b9-4338-b41b-2dbea3aa4f17'},
 'eventType': 'activity', 'eventTime': '2014-06-26T12:07:04.368547+',
 'action': 'authenticate', 'outcome': 'success', 'id':
 'openstack:dda01da7-1274-4b4f-8ff5-1dcdb6d80ff4'} from (pid=7033)
 _send_audit_notification /opt/stack/keystone/keystone/notifications.py:297
 2014-06-26 17:37:04.902 INFO eventlet.wsgi.server [-] 20.10.22.245 - -
 [26/Jun/2014 17:37:04] POST /v2.0//tokens HTTP/1.1 200 6913 0.771471
 2014-06-26 17:37:04.992 DEBUG keystone.middleware.core [-] RBAC:
 auth_context: {'is_delegated_auth': False, 'user_id':
 u'27353284443e43278600949a1467c65f', 'roles': [u'admin', u'_member_'],
 'trustee_id': None, 'trustor_id': None, 'project_id':
 u'e19957e0d69c4bfc9a9f872a2fcee1a3', 'trust_id': None} from (pid=7033)
 process_request /opt/stack/keystone/keystone/middleware/core.py:286
 2014-06-26 17:37:05.009 DEBUG keystone.common.wsgi [-] arg_dict: {} from
 (pid=7033) __call__ /opt/stack/keystone/keystone/common/wsgi.py:181
 2014-06-26 17:37:05.023 DEBUG keystone.common.controller [-] RBAC:
 Authorizing identity:revocation_list() from (pid=7033)
 _build_policy_check_credentials
 /opt/stack/keystone/keystone/common/controller.py:54
 2014-06-26 17:37:05.027 DEBUG keystone.common.controller [-] RBAC: using
 auth context from the request environment from (pid=7033)
 _build_policy_check_credentials
 /opt/stack/keystone/keystone/common/controller.py:59
 2014-06-26 17:37:05.033 DEBUG keystone.policy.backends.rules [-] enforce
 identity:revocation_list: {'is_delegated_auth': False, 'user_id':
 u'27353284443e43278600949a1467c65f', 'roles': [u'admin', u'_member_'],
 'trustee_id': None, 'trustor_id': None, 'project_id':
 u'e19957e0d69c4bfc9a9f872a2fcee1a3', 'trust_id': None} from (pid=7033)
 enforce /opt/stack/keystone/keystone/policy/backends/rules.py:101
 2014-06-26 17:37:05.040 DEBUG keystone.openstack.common.policy [-] Rule
 identity:revocation_list will be now enforced from (pid=7033) enforce
 /opt/stack/keystone/keystone/openstack/common/policy.py:288
 2014-06-26 17:37:05.043 DEBUG keystone.common.controller [-] RBAC:
 Authorization granted from (pid=7033) inner
 /opt/stack/keystone/keystone/common/controller.py:151
 2014-06-26 17:37:05.228 INFO eventlet.wsgi.server [-] 20.10.22.245 - -
 [26/Jun/2014 17:37:05] GET /v2.0/tokens/revoked HTTP/1.1 200 815 0.277525

 --
 Thanks  Regards,
   Yogesh Prasad.

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Duncan Thomas

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Why is there a 'None' task_state between 'SCHEDULING' & 'BLOCK_DEVICE_MAPPING'?

2014-06-26 Thread wu jiang
Hi Phil,

Ok, I'll submit a patch to add a new task_state(like 'STARTING_BUILD') in
these two days.
And related modifications will be definitely added in the Doc.
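
For reference, the change would be along these lines (a sketch only; it mirrors
the _start_building() snippet quoted later in this thread, and the constant
value is a placeholder, not a merged patch):

    # nova/compute/task_states.py (illustrative addition)
    STARTING_BUILD = 'starting_build'

    # nova/compute/manager.py (illustrative change to the quoted snippet)
    def _start_building(self, context, instance):
        self._instance_update(context, instance['uuid'],
                              vm_state=vm_states.BUILDING,
                              task_state=task_states.STARTING_BUILD,
                              expected_task_state=(task_states.SCHEDULING,
                                                   None))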

Thanks for your help. :)

WingWJ


On Thu, Jun 26, 2014 at 6:42 PM, Day, Phil philip@hp.com wrote:

  Why do others think - do we want a spec to add an additional task_state
 value that will be set in a well defined place. Kind of feels overkill
 for me in terms of the review effort that would take compared to just
 reviewing the code - it's not as if there are going to be lots of alternatives
 to consider here.



 *From:* wu jiang [mailto:win...@gmail.com]
 *Sent:* 26 June 2014 09:19
 *To:* OpenStack Development Mailing List (not for usage questions)
 *Subject:* Re: [openstack-dev] [nova] Why is there a 'None' task_state
 between 'SCHEDULING'  'BLOCK_DEVICE_MAPPING'?



  Hi Phil,



 thanks for your reply. So should I need to submit a patch/spec to add it
 now?



 On Wed, Jun 25, 2014 at 5:53 PM, Day, Phil philip@hp.com wrote:

   Looking at this a bit deeper, the comment in _*start*_building() says
  that it's doing this to "Save the host and launched_on fields and log
  appropriately". But as far as I can see those don't actually get set
  until the claim is made against the resource tracker a bit later in the
  process, so this whole update might just not be needed - although I still
  like the idea of a state to show that the request has been taken off the
  queue by the compute manager.



 *From:* Day, Phil
 *Sent:* 25 June 2014 10:35


 *To:* OpenStack Development Mailing List

 *Subject:* RE: [openstack-dev] [nova] Why is there a 'None' task_state
 between 'SCHEDULING'  'BLOCK_DEVICE_MAPPING'?



 Hi WingWJ,



  I agree that we shouldn't have a task state of None while an operation is
  in progress. I'm pretty sure back in the day this didn't use to be the
  case and task_state stayed as Scheduling until it went to Networking (now
  of course networking and BDM happen in parallel, so you have to be very
  quick to see the Networking state).



 Personally I would like to see the extra granularity of knowing that a
 request has been started on the compute manager (and knowing that the
 request was started rather than is still sitting on the queue makes the
 decision to put it into an error state when the manager is re-started more
 robust).



  Maybe a task state of "STARTING_BUILD" for this case?



  BTW I don't think _*start*_building() is called anymore now that we've
  switched to conductor calling build_and_run_instance() - but the same
  task_state issue exists there as well.



  *From:* wu jiang [mailto:win...@gmail.com]

 *Sent:* 25 June 2014 08:19
 *To:* OpenStack Development Mailing List

 *Subject:* [openstack-dev] [nova] Why is there a 'None' task_state
 between 'SCHEDULING'  'BLOCK_DEVICE_MAPPING'?



 Hi all,



 Recently, some of my instances were stuck in task_state 'None' during VM
 creation in my environment.



  So I checked & found there's a 'None' task_state between 'SCHEDULING' &
  'BLOCK_DEVICE_MAPPING'.



 The related codes are implemented like this:



  #def _start_building():
  #    self._instance_update(context, instance['uuid'],
  #                          vm_state=vm_states.BUILDING,
  #                          task_state=None,
  #                          expected_task_state=(task_states.SCHEDULING,
  #                                               None))



  So if a compute node is rebooted after that process, all building VMs on
 it will always stay in 'None' task_state. And it's useless and not
 convenient for locating problems.



 Why not a new task_state for this step?





 WingWJ


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Trove Juno mid-cycle meeting - confirmed dates and location

2014-06-26 Thread Amrith Kumar
Hi everyone,

 

Just a heads up that the date and location for the Trove Juno mid-cycle
meetup has been finalized. 

 

Where: MIT, Cambridge, MA

When: August 20, 21 and 22

 

If you are interested in attending the mid-cycle meetup, please mark your
calendars and register at

 

https://trove-midcycle-meetup-2014.eventbrite.com

 

There is also an etherpad for this event, if you would like to suggest
topics for discussion please update this etherpad

 

https://etherpad.openstack.org/p/Trove_Juno_Mid-Cycle_Meetup

 

Also, if you are interested in Trove, I'll put in a shameless plug for an
all-day event on Trove and Database as a Service with OpenStack that we
(Tesora, the company I work for) are organizing on August 19th. 

 

http://tesora.com/event/openstack-trove-day

 

Thanks,

 

-amrith

 

--

 

Amrith Kumar, CTO Tesora ( http://www.tesora.com www.tesora.com)

 

Twitter: @amrithkumar  

IRC: amrith @freenode 

 



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] volume creation failed.

2014-06-26 Thread Yogesh Prasad
Hi,

I have a devstack setup.
Please tell me how I can create a separate log file for each type of log,
like cinder-api, cinder-scheduler and cinder-volume logs.


On Thu, Jun 26, 2014 at 5:49 PM, Duncan Thomas duncan.tho...@gmail.com
wrote:

 I'm afraid that isn't the log we need to diagnose your problem. Can
 you put cinder-api, cinder-scheduler and cinder-volume logs up please?

 On 26 June 2014 13:12, Yogesh Prasad yogesh.pra...@cloudbyte.com wrote:
  Hi All,
 
  I have a devstack setup , and i am trying to create a volume but it is
  creating with error status.
  Can any one tell me what is the problem?
 
  Screen logs --
 
  .py:297
  2014-06-26 17:37:04.370 DEBUG keystone.notifications [-] CADF Event:
  {'typeURI': 'http://schemas.dmtf.org/cloud/audit/1.0/event',
 'initiator':
  {'typeURI': 'service/security/account/user', 'host': {'agent':
  'python-keystoneclient', 'address': '20.10.22.245'}, 'id':
  'openstack:d58d5688-f604-4362-9069-8cb217c029c8', 'name':
  u'6fcd84d16da646dc825411da06bf26b2'}, 'target': {'typeURI':
  'service/security/account/user', 'id':
  'openstack:85ef43dd-b0ab-4726-898e-36107b06a231'}, 'observer':
 {'typeURI':
  'service/security', 'id':
 'openstack:120866e8-51b9-4338-b41b-2dbea3aa4f17'},
  'eventType': 'activity', 'eventTime': '2014-06-26T12:07:04.368547+',
  'action': 'authenticate', 'outcome': 'success', 'id':
  'openstack:dda01da7-1274-4b4f-8ff5-1dcdb6d80ff4'} from (pid=7033)
  _send_audit_notification
 /opt/stack/keystone/keystone/notifications.py:297
  2014-06-26 17:37:04.902 INFO eventlet.wsgi.server [-] 20.10.22.245 - -
  [26/Jun/2014 17:37:04] POST /v2.0//tokens HTTP/1.1 200 6913 0.771471
  2014-06-26 17:37:04.992 DEBUG keystone.middleware.core [-] RBAC:
  auth_context: {'is_delegated_auth': False, 'user_id':
  u'27353284443e43278600949a1467c65f', 'roles': [u'admin', u'_member_'],
  'trustee_id': None, 'trustor_id': None, 'project_id':
  u'e19957e0d69c4bfc9a9f872a2fcee1a3', 'trust_id': None} from (pid=7033)
  process_request /opt/stack/keystone/keystone/middleware/core.py:286
  2014-06-26 17:37:05.009 DEBUG keystone.common.wsgi [-] arg_dict: {} from
  (pid=7033) __call__ /opt/stack/keystone/keystone/common/wsgi.py:181
  2014-06-26 17:37:05.023 DEBUG keystone.common.controller [-] RBAC:
  Authorizing identity:revocation_list() from (pid=7033)
  _build_policy_check_credentials
  /opt/stack/keystone/keystone/common/controller.py:54
  2014-06-26 17:37:05.027 DEBUG keystone.common.controller [-] RBAC: using
  auth context from the request environment from (pid=7033)
  _build_policy_check_credentials
  /opt/stack/keystone/keystone/common/controller.py:59
  2014-06-26 17:37:05.033 DEBUG keystone.policy.backends.rules [-] enforce
  identity:revocation_list: {'is_delegated_auth': False, 'user_id':
  u'27353284443e43278600949a1467c65f', 'roles': [u'admin', u'_member_'],
  'trustee_id': None, 'trustor_id': None, 'project_id':
  u'e19957e0d69c4bfc9a9f872a2fcee1a3', 'trust_id': None} from (pid=7033)
  enforce /opt/stack/keystone/keystone/policy/backends/rules.py:101
  2014-06-26 17:37:05.040 DEBUG keystone.openstack.common.policy [-] Rule
  identity:revocation_list will be now enforced from (pid=7033) enforce
  /opt/stack/keystone/keystone/openstack/common/policy.py:288
  2014-06-26 17:37:05.043 DEBUG keystone.common.controller [-] RBAC:
  Authorization granted from (pid=7033) inner
  /opt/stack/keystone/keystone/common/controller.py:151
  2014-06-26 17:37:05.228 INFO eventlet.wsgi.server [-] 20.10.22.245 - -
  [26/Jun/2014 17:37:05] GET /v2.0/tokens/revoked HTTP/1.1 200 815
 0.277525
 
  --
  Thanks  Regards,
Yogesh Prasad.
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 



 --
 Duncan Thomas

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
*Thanks & Regards*,
  Yogesh Prasad.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [NFV] Meeting summary 2014-06-26

2014-06-26 Thread Steve Gordon
- Original Message -
 From: Steve Gordon sgor...@redhat.com
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Sent: Thursday, June 26, 2014 8:54:08 AM
 Subject: [NFV] Meeting summary 2014-06-26
 

Of course I meant the 25th... :/

-Steve

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Why is there a 'None' task_state between 'SCHEDULING' & 'BLOCK_DEVICE_MAPPING'?

2014-06-26 Thread Sandy Walsh
Nice ... that's always bugged me.


From: wu jiang [win...@gmail.com]
Sent: Thursday, June 26, 2014 9:30 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova] Why is there a 'None' task_state between 
'SCHEDULING'  'BLOCK_DEVICE_MAPPING'?

Hi Phil,

Ok, I'll submit a patch to add a new task_state(like 'STARTING_BUILD') in these 
two days.
And related modifications will be definitely added in the Doc.

Thanks for your help. :)

WingWJ


On Thu, Jun 26, 2014 at 6:42 PM, Day, Phil philip@hp.com wrote:
Why do others think – do we want a spec to add an additional task_state value 
that will be set in a well defined place.   Kind of feels overkill for me in 
terms of the review effort that would take compared to just reviewing the code 
- it's not as if there are going to be lots of alternatives to consider here.

From: wu jiang [mailto:win...@gmail.com]
Sent: 26 June 2014 09:19
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova] Why is there a 'None' task_state between 
'SCHEDULING'  'BLOCK_DEVICE_MAPPING'?

 Hi Phil,

thanks for your reply. So should I need to submit a patch/spec to add it now?

On Wed, Jun 25, 2014 at 5:53 PM, Day, Phil philip@hp.com wrote:
Looking at this a bit deeper the comment in _start_building() says that it's 
doing this to “Save the host and launched_on fields and log appropriately”.
But as far as I can see those don’t actually get set until the claim is made 
against the resource tracker a bit later in the process, so this whole update 
might just be not needed – although I still like the idea of a state to show 
that the request has been taken off the queue by the compute manager.

From: Day, Phil
Sent: 25 June 2014 10:35

To: OpenStack Development Mailing List
Subject: RE: [openstack-dev] [nova] Why is there a 'None' task_state between 
'SCHEDULING'  'BLOCK_DEVICE_MAPPING'?

Hi WingWJ,

I agree that we shouldn’t have a task state of None while an operation is in 
progress.  I’m pretty sure back in the day this didn’t use to be the case and 
task_state stayed as Scheduling until it went to Networking  (now of course 
networking and BDM happen in parallel, so you have to be very quick to see the 
Networking state).

Personally I would like to see the extra granularity of knowing that a request 
has been started on the compute manager (and knowing that the request was 
started rather than is still sitting on the queue makes the decision to put it 
into an error state when the manager is re-started more robust).

Maybe a task state of “STARTING_BUILD” for this case ?

BTW I don’t think _start_building() is called anymore now that we’ve switched 
to conductor calling build_and_run_instance() – but the same task_state issue 
exists there as well.

From: wu jiang [mailto:win...@gmail.com]
Sent: 25 June 2014 08:19
To: OpenStack Development Mailing List
Subject: [openstack-dev] [nova] Why is there a 'None' task_state between 
'SCHEDULING'  'BLOCK_DEVICE_MAPPING'?

Hi all,

Recently, some of my instances were stuck in task_state 'None' during VM 
creation in my environment.

So I checked & found there's a 'None' task_state between 'SCHEDULING' & 
'BLOCK_DEVICE_MAPPING'.

The related codes are implemented like this:

#def _start_building():
#    self._instance_update(context, instance['uuid'],
#                          vm_state=vm_states.BUILDING,
#                          task_state=None,
#                          expected_task_state=(task_states.SCHEDULING,
#                                               None))

So if a compute node is rebooted after that process, all building VMs on it 
will always stay in 'None' task_state. And it's useless and not convenient for 
locating problems.

Why not a new task_state for this step?


WingWJ

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder][volume/manager.py] volume driver mapping

2014-06-26 Thread Duncan Thomas
On 26 June 2014 05:46, Amit Das amit@cloudbyte.com wrote:
 This seems cool.

 Does it mean the storage vendors write their new drivers & just map it from
 cinder.conf?

Correct. You can cause devstack to set up cinder.conf for you by
setting CINDER_DRIVER=cinder.volume.drivers.foo.bar in local.conf
before you start devstack, or you can patch it up by hand, doesn't
make much difference.
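
For example, a minimal vendor driver stub might look roughly like this (the
module path and class here are hypothetical; only the VolumeDriver base class
import is real), and cinder.conf's volume_driver option - or devstack's
CINDER_DRIVER - would then point at its dotted path:

    # hypothetical module: cinder/volume/drivers/foo.py
    from cinder.volume import driver

    class BarDriver(driver.VolumeDriver):
        """Skeleton third-party driver; a real one implements the full API."""

        def create_volume(self, volume):
            # Provision the volume on the vendor backend here.
            pass

        def delete_volume(self, volume):
            # Remove the backend volume here.
            pass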

 Does it involve any changes to devstack as well ?

Nope :-) Other than deploying your code somewhere so it can be loaded.
I'm not sure if there's any magic in devstack to apply a patch
automatically, if not then it might be a nice addition...

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Murano] Nominations to Murano core

2014-06-26 Thread Dmitry Teselkin
+1 for both


On Thu, Jun 26, 2014 at 3:29 PM, Stan Lagun sla...@mirantis.com wrote:

 +1 for both (or should I say +2?)

 Sincerely yours,
 Stan Lagun
 Principal Software Engineer @ Mirantis

  sla...@mirantis.com


 On Thu, Jun 26, 2014 at 1:49 PM, Alexander Tivelkov 
 ativel...@mirantis.com wrote:

 +1 on both Serge and Steve

 --
 Regards,
 Alexander Tivelkov


 On Thu, Jun 26, 2014 at 1:37 PM, Ruslan Kamaldinov 
 rkamaldi...@mirantis.com wrote:

 I would like to nominate Serg Melikyan and Steve McLellan to Murano core.

 Serge has been a significant reviewer in the Icehouse and Juno release
 cycles.

 Steve has been providing consistent quality reviews and they continue
 to get more frequent and better over time.


 Thanks,
 Ruslan

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Thanks,
Dmitry Teselkin
Deployment Engineer
Mirantis
http://www.mirantis.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Murano] Nominations to Murano core

2014-06-26 Thread Timur Nurlygayanov
+1,

they do really important work!

Serg and Steve, congratulations!


On Thu, Jun 26, 2014 at 3:29 PM, Stan Lagun sla...@mirantis.com wrote:

 +1 for both (or should I say +2?)

 Sincerely yours,
 Stan Lagun
 Principal Software Engineer @ Mirantis

  sla...@mirantis.com


 On Thu, Jun 26, 2014 at 1:49 PM, Alexander Tivelkov 
 ativel...@mirantis.com wrote:

 +1 on both Serge and Steve

 --
 Regards,
 Alexander Tivelkov


 On Thu, Jun 26, 2014 at 1:37 PM, Ruslan Kamaldinov 
 rkamaldi...@mirantis.com wrote:

 I would like to nominate Serg Melikyan and Steve McLellan to Murano core.

 Serge has been a significant reviewer in the Icehouse and Juno release
 cycles.

 Steve has been providing consistent quality reviews and they continue
 to get more frequent and better over time.


 Thanks,
 Ruslan

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 

Timur,
QA Engineer
OpenStack Projects
Mirantis Inc
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Murano] Nominations to Murano core

2014-06-26 Thread Timur Sufiev
+1 for both.

On Thu, Jun 26, 2014 at 3:29 PM, Stan Lagun sla...@mirantis.com wrote:
 +1 for both (or should I say +2?)

 Sincerely yours,
 Stan Lagun
 Principal Software Engineer @ Mirantis



 On Thu, Jun 26, 2014 at 1:49 PM, Alexander Tivelkov ativel...@mirantis.com
 wrote:

 +1 on both Serge and Steve

 --
 Regards,
 Alexander Tivelkov


 On Thu, Jun 26, 2014 at 1:37 PM, Ruslan Kamaldinov
 rkamaldi...@mirantis.com wrote:

 I would like to nominate Serg Melikyan and Steve McLellan to Murano core.

 Serge has been a significant reviewer in the Icehouse and Juno release
 cycles.

 Steve has been providing consistent quality reviews and they continue
 to get more frequent and better over time.


 Thanks,
 Ruslan

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Timur Sufiev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer] [Heat] Ceilometer aware people, please advise us on processing notifications..

2014-06-26 Thread Thomas Herve
 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA1
 
 On 24/06/14 09:28, Clint Byrum wrote:
  Hello! I would like to turn your attention to this specification draft
  that I've written:
  
  https://review.openstack.org/#/c/100012/1/specs/convergence-continuous-observer.rst
  
  Angus has suggested that perhaps Ceilometer is a better place to handle
  this. Can you please comment on the review, or can we have a brief
  mailing list discussion about how best to filter notifications?
  
  Basically in Heat when a user boots an instance, we would like to act as
  soon as it is active, and not have to poll the nova API to know when
  that is. Angus has suggested that perhaps we can just tell ceilometer to
  hit Heat with a web hook when that happens.
 
 We could have a per-resource webhook (either the signal or something like it)
 that is just a logical notification/kick to go and re-check the resource
 state.
 
 The other part of this is when we turn on continuous convergence, we could
 get an alarm whenever that resource changes state (or whatever we are
 interested
 in - as long as it is in the notification payload).
 
 Given the number of resources we want to manage, the alarm sub-system will
 need to
 be scalable. I'd rather Ceilometer solve that than Heat.

When we talked about that issue in Atlanta, I think we came to the conclusion 
that one system wouldn't solve it, and that we need to be able to provide 
different pluggable solutions.

The first solution is just to move the current polling system and create a 
generic API around it. That's something that we'll want to keep, even if it's 
only to make standalone mode work. The next solution for me is to subscribe 
directly to the notification system. We know the shortcomings, but it's the 
obvious improvement we can do in the Juno cycle.
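
To make the notification-subscription idea concrete, here is a hedged sketch of
a listener that kicks a webhook when an instance goes active; the webhook URL,
payload handling and event filter are purely illustrative, not Heat or
Ceilometer code, and the oslo.messaging calls are just the generic notification
listener API:

    import json

    import requests
    from oslo.config import cfg
    from oslo import messaging  # packaged as oslo.messaging at the time

    class InstanceActiveEndpoint(object):
        def info(self, ctxt, publisher_id, event_type, payload, metadata):
            # Only react to instance updates that report the 'active' state.
            if (event_type == 'compute.instance.update'
                    and payload.get('state') == 'active'):
                requests.post('http://heat.example.com/hook',   # illustrative URL
                              data=json.dumps(
                                  {'instance_id': payload.get('instance_id')}))

    transport = messaging.get_transport(cfg.CONF)
    targets = [messaging.Target(topic='notifications')]
    listener = messaging.get_notification_listener(
        transport, targets, [InstanceActiveEndpoint()])
    listener.start()
    listener.wait()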

Later on, if/when Ceilometer provides what we need, we can implement a new 
backend using it.

-- 
Thomas

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [horizon][novnc] Browser Usage for noVNC

2014-06-26 Thread Solly Ross
It was recommended I cross-post this for visibility.
Devs are welcome to provide feedback as well ;-)

Best Regards,
Solly Ross

- Forwarded Message -
From: Solly Ross sr...@redhat.com
To: openstack-operat...@lists.openstack.org
Sent: Tuesday, June 24, 2014 1:29:15 PM
Subject: Browser Usage for noVNC

Hello Operators,

I'm part of the noVNC upstream development team.  noVNC, for those of you who 
don't know, is the HTML5 VNC client
integrated into Horizon (the OpenStack dashboard).  We are considering removing 
support for some older browsers, and
wanted to make sure that we wouldn't be inconveniencing anybody too much.  Are 
there any operators who still aim to
support connecting with any of the following browsers:

- Firefox < 11.0
- Chrome < 16.0
- IE < 10.0
- Safari < 6.0
(insert uncommon browser here that doesn't support WebSockets natively)

If so, what is your minimum browser version?

Best Regards,
Solly Ross

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [horizon][novnc] Browser Usage for noVNC

2014-06-26 Thread Anne Gentle
On Thu, Jun 26, 2014 at 9:54 AM, Solly Ross sr...@redhat.com wrote:

 It was recommended I cross-post this for visibility.
 Devs are welcome to provide feedback as well ;-)

 Best Regards,
 Solly Ross

 - Forwarded Message -
 From: Solly Ross sr...@redhat.com
 To: openstack-operat...@lists.openstack.org
 Sent: Tuesday, June 24, 2014 1:29:15 PM
 Subject: Browser Usage for noVNC

 Hello Operators,

 I'm part of the noVNC upstream development team.  noVNC, for those of you
 who don't know, is the HTML5 VNC client
 integrated into Horizon (the OpenStack dashboard).  We are considering
 removing support for some older browsers, and
 wanted to make sure that we wouldn't be inconveniencing anybody too much.
  Are there any operators who still aim to
 support connecting with any of the following browsers:


Hi Solly,
I don't know if this data helps, but I took a quick look at the web
analytics for people reading docs.openstack.org, and all of your listed
browser versions are under 2% (most under 1%) for web site readers.

Well, except IE 7 hangs in there at 4.43%. :)

Let me know if that's useful.
Anne



 - Firefox < 11.0
 - Chrome < 16.0
 - IE < 10.0
 - Safari < 6.0
 (insert uncommon browser here that doesn't support WebSockets natively)

 If so, what is your minimum browser version?

 Best Regards,
 Solly Ross

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [marconi] Decoupling backend drivers

2014-06-26 Thread Kurt Griffiths
Crew, I’d like to propose the following:

  1.  Decouple pool management from data storage (two separate drivers)
  2.  Keep pool management driver for sqla, but drop the sqla data storage 
driver
  3.  Provide a non-AGPL alternative to MongoDB that has feature parity and is 
at least as performant

Decoupling will make configuration less confusing, while allowing us to 
maintain drivers separately and give us the flexibility to choose the best tool 
for the job (BTFJ). Once that work is done, we can drop support for sqla  as a 
message store backend, since it isn’t a viable non-AGPL alternative to MongoDB. 
Instead, we can look into some other backends that offer a good mix of 
durability and performance.

What does everyone think about this strategy?

--
Kurt Griffiths (kgriffs)
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Creating new python-new_project_nameclient

2014-06-26 Thread Aaron Rosen
Thanks guys, very helpful.

Aaron


On Wed, Jun 25, 2014 at 11:53 PM, Jamie Lennox jamielen...@redhat.com
wrote:

 On Wed, 2014-06-25 at 22:42 -0500, Dean Troyer wrote:
  On Wed, Jun 25, 2014 at 10:18 PM, Aaron Rosen aaronoro...@gmail.com
  wrote:
  I'm looking at creating a new python-new_project_nameclient
  and I was wondering if there was any on going effort to share
  code between the clients or not? I've looked at the code in
  python-novaclient and python-neutronclient and both of them
  seem to have their own homegrown HTTPClient and keystone
  integration. Figured I'd ping the mailing list here before I
  go on and make my own homegrown HTTPClient as well.
 
 
  For things in the library level of a client please consider using
  keystoneclient's fairly new session layer as the basis of your HTTP
  layer.  That will also give you access to the new style auth plugins,
  assuming you want to do Keystone auth with this client.
 
 
  I'm not sure if Jamie has any examples of using this without leaning
  on the backward-compatibility bits that the existing clients need.
 
 
  The Python SDK is being built on a similar Session layer (without the
  backward compat bits).
 
 
  dt

 I'd love to suggest following in the footsteps of the SDK, but it's just
 a little early for that.

 Today the best thing i think would be to use the session from
 keystoneclient, and copy and paste the adapter:
 https://review.openstack.org/#/c/86237/ which is approved but not in a
 release yet. A client object takes a session and kwargs and creates an
 adapter with them.

 Then reuse the managers from

 https://github.com/openstack/oslo-incubator/blob/master/openstack/common/apiclient/base.py
 I'm personally not a fan, but most of the other clients use this layout
 and I assume you're more interested in getting things done in a standard
 way than arguing over client design (If you are interested in that the
 SDK project could always use a hand). Pass the adapter to the managers.

 Don't write a CLI, you can extend OSC to handle your new service. There
 are no docs for it (that i'm aware of) but the included services all use
 the plugin interface so you can copy from one of those.

 I don't have a pure example of these things, but if any of the above is
 unclear feel free to find me on IRC and i'll step you through it.

 Jamie
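 
 A bare-bones sketch of that approach (the 'widget' service and endpoint
 values are hypothetical; the adapter piece is left out since, as noted above,
 it is not in a release yet):
 
     from keystoneclient import session
     from keystoneclient.auth.identity import v2
 
     auth = v2.Password(auth_url='http://keystone:5000/v2.0',
                        username='demo', password='secret',
                        tenant_name='demo')
     sess = session.Session(auth=auth)
 
     # The session pulls the endpoint out of the service catalog and injects
     # the auth token, so the client code only deals with paths.
     resp = sess.get('/widgets',
                     endpoint_filter={'service_type': 'widget'})
     print(resp.json())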

 
 
  --
 
  Dean Troyer
  dtro...@gmail.com
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [marconi] Python 3.3 Gate is Passing!

2014-06-26 Thread Kurt Griffiths
Hi everyone, I just wanted to congratulate Nataliia on making Marconi one of 
the first OpenStack projects to pass the py33 gate!

https://pbs.twimg.com/media/BrEQrZiCMAAbfEX.png:large

Now, let’s make that gate voting. :D

---
Kurt Griffiths (kgriffs)
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Manila] Query on docstrings and method names

2014-06-26 Thread yang, xing
Hi Deepak,

I suggest that these two issues be fixed in the current patch for the 
glusterfs driver first. Then open another bug to fix them in the other drivers in 
a different patch.

Thanks,
Xing



From: Deepak Shetty [mailto:dpkshe...@gmail.com]
Sent: Thursday, June 26, 2014 7:01 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [Manila] Query on docstrings and method names

Hi All,
With respect to the comment made by xian-yang @
https://review.openstack.org/#/c/102496/1/manila/share/drivers/glusterfs.py

for _update_share_status and the docstring that the method has, which is
Retrieve status info from share volume group.
I have few questions based on the above...

1) share volume group in the docstring is incorrect, since it's a glusterfs 
driver. But I think i know why it says volume group, probably because it came 
from lvm.py to begin with. I see that all other drivers also say volume group, 
tho' it may not be the right thing to say for their respective case.

Do we want to ensure that the docstrings are put in a way thats meaningful to 
the driver ?
2) _update_share_status method - I see the same issue here.. it says the same 
in all other drivers.. but as xian pointed, it should be rightfully called 
_update_share_stats. So should we wait for all driver to follow suit or start 
changing in the driver specific code as and when we touch that part of code ?
thanx,
deepak
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Openstack-dev][tempest] Jenkins failure unrelated to my patch

2014-06-26 Thread Prashanth Hari
Hi,

Jenkins is failing for a scenario test case patch which I submitted -
https://review.openstack.org/#/c/102827/

The failures are unrelated to my changes.

Can someone please look into this?


2014-06-26 14:47:00.948 |
2014-06-26 14:47:00.948 |
tempest.scenario.test_volume_boot_pattern.TestVolumeBootPattern.test_volume_boot_pattern[compute,image,volume]
2014-06-26 14:47:00.948 |
--
2014-06-26 14:47:00.948 |
2014-06-26 14:47:00.949 | Captured traceback:
2014-06-26 14:47:00.949 | ~~~
2014-06-26 14:47:00.949 | Traceback (most recent call last):
2014-06-26 14:47:00.949 |   File tempest/test.py, line 127, in wrapper
2014-06-26 14:47:00.949 | return f(self, *func_args, **func_kwargs)
2014-06-26 14:47:00.949 |   File
tempest/scenario/test_volume_boot_pattern.py, line 161, in
test_volume_boot_pattern
2014-06-26 14:47:00.949 | keypair)
2014-06-26 14:47:00.949 |   File
tempest/scenario/test_volume_boot_pattern.py, line 102, in _ssh_to_server
2014-06-26 14:47:00.949 | floating_ip =
self.compute_client.floating_ips.create()
2014-06-26 14:47:00.949 |   File
/opt/stack/new/python-novaclient/novaclient/v1_1/floating_ips.py, line
44, in create
2014-06-26 14:47:00.950 | return self._create(/os-floating-ips,
{'pool': pool}, floating_ip)
2014-06-26 14:47:00.950 |   File
/opt/stack/new/python-novaclient/novaclient/base.py, line 152, in _create
2014-06-26 14:47:00.950 | _resp, body = self.api.client.post(url,
body=body)
2014-06-26 14:47:00.950 |   File
/opt/stack/new/python-novaclient/novaclient/client.py, line 330, in post
2014-06-26 14:47:00.950 | return self._cs_request(url, 'POST',
**kwargs)
2014-06-26 14:47:00.950 |   File
/opt/stack/new/python-novaclient/novaclient/client.py, line 304, in
_cs_request
2014-06-26 14:47:00.950 | **kwargs)
2014-06-26 14:47:00.950 |   File
/opt/stack/new/python-novaclient/novaclient/client.py, line 286, in
_time_request
2014-06-26 14:47:00.950 | resp, body = self.request(url, method,
**kwargs)
2014-06-26 14:47:00.950 |   File
/opt/stack/new/python-novaclient/novaclient/client.py, line 280, in
request
2014-06-26 14:47:00.951 | raise exceptions.from_response(resp,
body, url, method)
2014-06-26 14:47:00.951 | NotFound: No more floating ips available.
(HTTP 404) (Request-ID: req-6d052d87-8162-48da-8a57-75145cdf91a9)
2014-06-26 14:47:00.951 |


2014-06-26 14:12:31.960 | tearDownClass
(tempest.api.baremetal.test_ports_negative.TestPortsNegative)
2014-06-26 14:12:31.960 |
---
2014-06-26 14:12:31.960 |
2014-06-26 14:12:31.960 | Captured traceback:
2014-06-26 14:12:31.960 | ~~~
2014-06-26 14:12:31.960 | Traceback (most recent call last):
2014-06-26 14:12:31.960 |   File tempest/api/baremetal/base.py, line
66, in tearDownClass
2014-06-26 14:12:31.960 | delete_method(u,
ignore_errors=exc.NotFound)
2014-06-26 14:12:31.960 |   File tempest/services/baremetal/base.py,
line 37, in wrapper
2014-06-26 14:12:31.960 | return f(*args, **kwargs)
2014-06-26 14:12:31.960 |   File
tempest/services/baremetal/v1/base_v1.py, line 160, in delete_node
2014-06-26 14:12:31.961 | return self._delete_request('nodes', uuid)
2014-06-26 14:12:31.961 |   File tempest/services/baremetal/base.py,
line 167, in _delete_request
2014-06-26 14:12:31.961 | resp, body = self.delete(uri)
2014-06-26 14:12:31.961 |   File tempest/common/rest_client.py, line
224, in delete
2014-06-26 14:12:31.961 | return self.request('DELETE', url,
extra_headers, headers, body)
2014-06-26 14:12:31.961 |   File tempest/common/rest_client.py, line
430, in request
2014-06-26 14:12:31.961 | resp, resp_body)
2014-06-26 14:12:31.961 |   File tempest/common/rest_client.py, line
484, in _error_checker
2014-06-26 14:12:31.961 | raise exceptions.Conflict(resp_body)
2014-06-26 14:12:31.961 | Conflict: An object with that identifier
already exists
2014-06-26 14:12:31.961 | Details: {u'error_message': u'{debuginfo:
null, faultcode: Client, faultstring: Node
9c75225e-24b7-48e6-a5f4-8f9fd4ec5691 is locked by host localhost, please
retry after the current operation is completed.\\nTraceback (most recent
call last):\\n\\n  File
\\/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/server.py\\,
line 137, in inner\\nreturn func(*args, **kwargs)\\n\\n  File
\\/opt/stack/new/ironic/ironic/conductor/manager.py\\, line 811, in
destroy_node\\nwith task_manager.acquire(context, node_id) as
task:\\n\\n  File
\\/opt/stack/new/ironic/ironic/conductor/task_manager.py\\, line 112, in
acquire\\ndriver_name=driver_name)\\n\\n  File
\\/opt/stack/new/ironic/ironic/conductor/task_manager.py\\, line 160, in
__init__\\nself.release_resources()\\n\\n  

Re: [openstack-dev] [Murano] Nominations to Murano core

2014-06-26 Thread Georgy Okrokvertskhov
+1 for both! Thanks for the great work!

Regards,
Gosha


On Thu, Jun 26, 2014 at 7:05 AM, Timur Sufiev tsuf...@mirantis.com wrote:

 +1 for both.

 On Thu, Jun 26, 2014 at 3:29 PM, Stan Lagun sla...@mirantis.com wrote:
  +1 for both (or should I say +2?)
 
  Sincerely yours,
  Stan Lagun
  Principal Software Engineer @ Mirantis
 
 
 
  On Thu, Jun 26, 2014 at 1:49 PM, Alexander Tivelkov 
 ativel...@mirantis.com
  wrote:
 
  +1 on both Serge and Steve
 
  --
  Regards,
  Alexander Tivelkov
 
 
  On Thu, Jun 26, 2014 at 1:37 PM, Ruslan Kamaldinov
  rkamaldi...@mirantis.com wrote:
 
  I would like to nominate Serg Melikyan and Steve McLellan to Murano
 core.
 
  Serge has been a significant reviewer in the Icehouse and Juno release
  cycles.
 
  Steve has been providing consistent quality reviews and they continue
  to get more frequent and better over time.
 
 
  Thanks,
  Ruslan
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 



 --
 Timur Sufiev

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Georgy Okrokvertskhov
Architect,
OpenStack Platform Products,
Mirantis
http://www.mirantis.com
Tel. +1 650 963 9828
Mob. +1 650 996 3284
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Blueprints process

2014-06-26 Thread Matthew Mosesohn
+1

Keeping features separate as blueprints (even tiny ones with no spec)
really will let us focus on the volume of real bugs.

On Tue, Jun 24, 2014 at 5:14 PM, Dmitry Pyzhov dpyz...@mirantis.com wrote:
 Guys,

 We have a beautiful contribution guide:
 https://wiki.openstack.org/wiki/Fuel/How_to_contribute

 However, I would like to address several issues in our blueprints/bugs
 processes. Let's discuss and vote on my proposals.

 1) First of all, the bug counter is an excellent metric for quality. So
 let's use it only for bugs and track all feature requirements as blueprints.
 Here is what it means:

 1a) If a bug report does not describe a user’s pain, a blueprint should be
 created and the bug should be closed as invalid
 1b) If a bug report does relate to a user’s pain, a blueprint should be
 created and linked to the bug
 1c) We have an excellent reporting tool, but it needs more metrics: count of
 critical/high bugs, count of bugs assigned to each team. It will require
 support for team member lists, but it seems that we really need it.


 2) We have a huge number of blueprints and it is hard to work with this
 list. A good blueprint needs a fixed scope, spec review and acceptance
 criteria. It is obvious for me that we can not work on blueprints that do
 not meet these requirements. Therefore:

 2a) Let's copy the nova future series and create a fake milestone 'next' as
 nova does. All unclear blueprints should be moved there. We will pick
 blueprints from there, add spec and other info and target them to a
 milestone when we are really ready to work on a particular blueprint. Our
 release page will look much closer to reality and be much more readable in
 this case.
 2b) Each blueprint in a milestone should contain information about feature
 lead, design reviewers, developers, qa, acceptance criteria. Spec is
 optional for trivial blueprints. If a spec is created, the designated
 reviewer(s) should put (+1) right into the blueprint description.
 2c) Every blueprint spec should be updated before feature freeze with the
 latest actual information. Actually, I'm not sure if we care about spec
 after feature development, but it seems to be logical to have correct
 information in specs.
 2d) We should avoid creating interconnected blueprints wherever possible. Of
 course we can have several blueprints for one big feature if it can be split
 into several shippable blocks for several releases or for several teams. In
 most cases, small parts should be tracked as work items of a single
 blueprint.


 3) Every review request without a bug or blueprint link should be checked
 carefully.

 3a) It should contain a complete description of what is being done and why
 3b) It should not require backports to stable branches (backports are
 bugfixes only)
 3c) It should not require changes to documentation or be mentioned in
 release notes



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Log / error message format best practices standards

2014-06-26 Thread boden
We were recently having a discussion over here in trove regarding a 
standardized format to use for log and error messages - obviously 
consistency is ideal (within and across projects). As this discussion 
involves the broader dev community, bringing this topic to the list for 
feedback...



I'm aware of the logging standards wiki[1]; however, this page does not 
describe in depth a standardized format to use for log / error messages.


In particular w/r/t program values in messages:

(a) For in-line program values, I've seen both single quoted and 
unquoted formatting used. e.g.

single quote: LOG.info("The ID '%s' is not valid." % (resource.id))
unquoted: LOG.info("The ID %s is not valid." % (resource.id))

(b) For program values appended to the message, I've seen various 
formats used. e.g.

LOG.info("This path is invalid: %s" % (obj.path))
LOG.info("This path is invalid %s" % (obj.path))
LOG.info("This path is invalid - %s" % (obj.path))


From a consistency perspective, it seems we should consider 
standardizing a best practice for such formatting.


For in-line values (#a above) I find single quotes the most consumable 
as they are a clear indication the value came from code and moreover 
provide a clear set of delimiters around the value. However to date 
unquoted appears to be the most widely used.


For appended values (#b above) I find a delimiter such as ':' most 
consumable as it provides a clear boundary between the message and 
value. Using ':' seems fairly common today, but you'll find other 
formatting throughout the code.
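
For illustration, here is a minimal sketch of the two preferences described
above; the variable names and values are made up:

import logging

logging.basicConfig(level=logging.INFO)
LOG = logging.getLogger(__name__)

# Illustrative stand-ins for resource.id / obj.path above.
resource_id = 'd58d5688-f604-4362-9069-8cb217c029c8'
obj_path = '/v2/images/foo'

# (a) in-line value, single-quoted: the quotes delimit the program value.
LOG.info("The ID '%s' is not valid." % resource_id)

# (b) appended value, with ':' marking the message/value boundary.
LOG.info("This path is invalid: %s" % obj_path)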


If we want to squash this topic, the high-level steps are (approximately):
- Determine and document message format.
- Ensure the format is part of the dev process (coding + review).
- Cross team work to address existing messages not following the format.


Thoughts / comments?


[1] https://wiki.openstack.org/wiki/LoggingStandards



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [infra] Gerrit downtime on Saturday, June 28, 2014

2014-06-26 Thread James E. Blair
Hi,

On Saturday, June 28 at 15:00 UTC Gerrit will be unavailable for about
15 minutes while we rename some projects.  Existing reviews, project
watches, etc, should all be carried over.  The current list of projects
that we will rename is:

stackforge/designate -> openstack/designate
openstack-dev/bash8 -> openstack-dev/bashate
stackforge/murano-repository -> stackforge-attic/murano-repository
stackforge/murano-metadataclient -> stackforge-attic/murano-metadataclient
stackforge/murano-common -> stackforge-attic/murano-common
stackforge/murano-conductor -> stackforge-attic/murano-conductor
stackforge/murano-tests -> stackforge-attic/murano-tests

This list is subject to change.

-Jim

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] python-neutronclient 2.3.5 released

2014-06-26 Thread Kyle Mestery
I just pushed a new version of python-neutronclient out: 2.3.5.

The main drivers for this release were a couple of coordination fixes
between Nova and Neutron, most significantly the addition of the
OverQuotaClient exception in the client. There are some additional bug
fixes in this release as well.

If you hit any issues, please report them on the python-neutronclient
LP page here:

https://launchpad.net/python-neutronclient

Thanks!
Kyle

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron]One security issue about floating ip

2014-06-26 Thread Miguel Angel Ajo Pelayo
Yes, once a connection has passed the nat tables, 
and it's on the kernel connection tracker, it
will keep working even if you remove the nat rule.

Doing that would require manipulating the kernel
connection tracking to kill that connection, 
I'm not familiar with that part of the linux network
stack, not sure if it's possible, but that would be
the perfect way. (kill nat connection on ext ip=float ip int_ip = internal 
ip)...




- Original Message -
 Hi folks,
 
 After we create an SSH connection to a VM via its floating ip, even though we
 have removed the floating ip association, we can still access the VM via
 that connection. Namely, SSH is not disconnected when the floating ip is not
 valid. Any good solution about this security issue?
 
 Thanks
 Xurong Yang
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron]One security issue about floating ip

2014-06-26 Thread Clark, Robert Graham
It's kinda ugly, if a user through API/Horizon thinks they've isolated a
host, it should be isolated…

I smell an OSSN here...

On 26/06/2014 17:57, Miguel Angel Ajo Pelayo mangel...@redhat.com
wrote:

Yes, once a connection has passed the nat tables,
and it's on the kernel connection tracker, it
will keep working even if you remove the nat rule.

Doing that would require manipulating the kernel
connection tracking to kill that connection,
I'm not familiar with that part of the linux network
stack, not sure if it's possible, but that would be
the perfect way. (kill nat connection on ext ip=float ip int_ip =
internal ip)...




- Original Message -
 Hi folks,
 
 After we create an SSH connection to a VM via its floating ip, even
though we
 have removed the floating ip association, we can still access the VM via
 that connection. Namely, SSH is not disconnected when the floating ip
is not
 valid. Any good solution about this security issue?
 
 Thanks
 Xurong Yang
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Fuel][VMware] Blueprints Approval requiest

2014-06-26 Thread Evgeniya Shumakher
Hi folks,

The Partner Integrations team has been working on the VMware integration
with MOS 5.1.

We created 3 blueprints:
https://blueprints.launchpad.net/fuel/+spec/vcenter-hv-full-scale-support
https://blueprints.launchpad.net/fuel/+spec/neutron-nsx-plugin-integration
https://blueprints.launchpad.net/fuel/+spec/vcenter-nsx-support

As you may notice all of them have the 'Pending Approval' status.
We kindly ask the Fuel-core team to review the Design Documents we created:

   - https://docs.google.com/a/mirantis.com/document/d/17mjjabQd0N9sXpORE04Bz5t_BEvNWPRD1bjWPzq_VkE/edit#
     (the Full Scale part)
   - https://docs.google.com/a/mirantis.com/document/d/1QW6VNWlO7RHVL9R_kv4xFfnyjc2FTfXPRbSta1Sh8jc/edit#heading=h.mfz7sg8a9we7
   - https://review.openstack.org/#/c/100185/


Vitaly Kramskikh, we are counting on you for the UI part. Andrew Woodward,
please review the rest of the design.

Your help would be appreciated.

-- 
Regards,
Evgeniya
Mirantis, Inc

Mob.phone: +7 (968) 760-98-42
Email: eshumak...@mirantis.com
Skype: eshumakher
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron]Performance of security group

2014-06-26 Thread Miguel Angel Ajo Pelayo
- Original Message -
 @Nachi: Yes that could a good improvement to factorize the RPC mechanism.
 
 Another idea:
 What about creating an RPC topic per security group (what about RPC topic
 scalability?) on which an agent subscribes if one of its ports is associated
 with the security group?
 
 Regards,
 Édouard.
 
 


Hmm, Interesting,

@Nachi, I'm not sure I fully understood:


SG_LIST [ SG1, SG2]
SG_RULE_LIST = [SG_Rule1, SG_Rule2] ..
port[SG_ID1, SG_ID2], port2 , port3


Probably we may need to include also the 
SG_IP_LIST = [SG_IP1, SG_IP2] ...


and let the agent do all the combination work.

Something like this could make sense?

Security_Groups = {SG1: {IPs: [], RULES: []},
                   SG2: {IPs: [], RULES: []}
                  }

Ports = {Port1: [SG1, SG2], Port2: [SG1]}
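
A minimal Python sketch of the agent-side combination work suggested above; the
data layout and names here are illustrative, not the actual RPC payload:

security_groups = {
    'SG1': {'IPs': ['10.0.0.2'], 'RULES': [{'proto': 'tcp', 'port': 22}]},
    'SG2': {'IPs': ['10.0.0.3'], 'RULES': [{'proto': 'icmp'}]},
}
ports = {'Port1': ['SG1', 'SG2'], 'Port2': ['SG1']}

def rules_for_port(port_id):
    # Expand the per-port view from the shared SG tables on the agent side.
    combined = {'IPs': [], 'RULES': []}
    for sg_id in ports.get(port_id, []):
        sg = security_groups[sg_id]
        combined['IPs'].extend(sg['IPs'])
        combined['RULES'].extend(sg['RULES'])
    return combined

print(rules_for_port('Port1'))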


@Edouard, actually I like the idea of having the agent subscribed
to security groups they have ports on... That would remove the need to include
all the security groups information on every call...

But would need another call to get the full information of a set of security 
groups
at start/resync if we don't already have any. 


 
 On Fri, Jun 20, 2014 at 4:04 AM, shihanzhang  ayshihanzh...@126.com  wrote:
 
 
 
 hi Miguel Ángel,
 I am very agree with you about the following point:
   * physical implementation on the hosts (ipsets, nftables, ... )
 --this can reduce the load of compute node.
   * rpc communication mechanisms.
 -- this can reduce the load of neutron server
 can you help me to review my BP specs?
 
 
 
 
 
 
 
 At 2014-06-19 10:11:34, Miguel Angel Ajo Pelayo  mangel...@redhat.com 
 wrote:
 
   Hi it's a very interesting topic, I was getting ready to raise
 the same concerns about our security groups implementation, shihanzhang
 thank you for starting this topic.
 
   Not only at low level where (with our default security group
 rules -allow all incoming from 'default' sg- the iptable rules
 will grow in ~X^2 for a tenant, and, the security_group_rules_for_devices
 rpc call from ovs-agent to neutron-server grows to message sizes of 100MB,
 generating serious scalability issues or timeouts/retries that
 totally break neutron service.
 
(example trace of that RPC call with a few instances
  http://www.fpaste.org/104401/14008522/ )
 
   I believe that we also need to review the RPC calling mechanism
 for the OVS agent here, there are several possible approaches to breaking
 down (or/and CIDR compressing) the information we return via this api call.
 
 
So we have to look at two things here:
 
   * physical implementation on the hosts (ipsets, nftables, ... )
   * rpc communication mechanisms.
 
Best regards,
 Miguel Ángel.
 
 - Mensaje original -
 
  Do you though about nftables that will replace {ip,ip6,arp,eb}tables?
  It also based on the rule set mechanism.
  The issue in that proposition, it's only stable since the begin of the
  year
  and on Linux kernel 3.13.
  But there lot of pros I don't list here (leverage iptables limitation,
  efficient update rule, rule set, standardization of netfilter
  commands...).
 
  Édouard.
 
  On Thu, Jun 19, 2014 at 8:25 AM, henry hly  henry4...@gmail.com  wrote:
 
   we have done some tests, but have different result: the performance is
   nearly
   the same for empty and 5k rules in iptable, but huge gap between
   enable/disable iptable hook on linux bridge
  
 
   On Thu, Jun 19, 2014 at 11:21 AM, shihanzhang  ayshihanzh...@126.com 
   wrote:
  
 
Now I have not get accurate test data, but I can confirm the following
points:
   
  
1. In compute node, the iptable's chain of a VM is liner, iptable
filter
it
one by one, if a VM in default security group and this default
security
group have many members, but ipset chain is set, the time ipset filter
one
and many member is not much difference.
   
  
2. when the iptable rule is very large, the probability of failure
that
iptable-save save the iptable rule is very large.
   
  
 
At 2014-06-19 10:55:56, Kevin Benton  blak...@gmail.com  wrote:
   
  
 
 This sounds like a good idea to handle some of the performance
 issues
 until
 the ovs firewall can be implemented down the the line.

   
  
 Do you have any performance comparisons?

   
  
 On Jun 18, 2014 7:46 PM, shihanzhang  ayshihanzh...@126.com 
 wrote:

   
  
 
  Hello all,
 

   
  
 
  Now in neutron, it use iptable implementing security group, but
  the
  performance of this implementation is very poor, there is a bug:
  https://bugs.launchpad.net/neutron/+bug/1302272 to reflect this
  problem.
  In
  his test, w ith default security groups(which has remote security
  group),
  beyond 250-300 VMs, there were around 6k Iptable rules on evry
  compute
  node,
  although his patch can reduce the processing time, but it don't
  solve
  this
  problem 

Re: [openstack-dev] [horizon][novnc] Browser Usage for noVNC

2014-06-26 Thread Solly Ross
Hi Anne,

Thanks for the numbers.  I suspect the user numbers might be a bit different
(deployers and OpenStack devs might have the latest versions of software on
their machines, while users might have a corporate build that has older
software), but it's good to get those initial statistics.

Those poor people using IE 7...

Best Regards,
Solly Ross

- Original Message -
 From: Anne Gentle a...@openstack.org
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Sent: Thursday, June 26, 2014 11:05:58 AM
 Subject: Re: [openstack-dev] [horizon][novnc] Browser Usage for noVNC
 
 
 
 
 On Thu, Jun 26, 2014 at 9:54 AM, Solly Ross  sr...@redhat.com  wrote:
 
 
 It was recommended I cross-post this for visibility.
 Devs are welcome to provide feedback as well ;-)
 
 Best Regards,
 Solly Ross
 
 - Forwarded Message -
 From: Solly Ross  sr...@redhat.com 
 To: openstack-operat...@lists.openstack.org
 Sent: Tuesday, June 24, 2014 1:29:15 PM
 Subject: Browser Usage for noVNC
 
 Hello Operators,
 
 I'm part of the noVNC upstream development team. noVNC, for those of you who
 don't know, is the HTML5 VNC client
 integrated into Horizon (the OpenStack dashboard). We are considering
 removing support for some older browsers, and
 wanted to make sure that we wouldn't be inconveniencing anybody too much. Are
 there any operators who still aim to
 support connecting with any of the following browsers:
 
 Hi Solly,
 I don't know if this data helps, but I took a quick look at the web analytics
 for people reading docs.openstack.org , and all of your listed browser
 versions are under 2% (most under 1%) for web site readers.
 
 Well, except IE 7 hangs in there at 4.43%. :)
 
 Let me know if that's useful.
 Anne
 
 
 
 
 - Firefox < 11.0
 - Chrome < 16.0
 - IE < 10.0
 - Safari < 6.0
 (insert uncommon browser here that doesn't support WebSockets natively)
 
 If so, what is your minimum browser version?
 
 Best Regards,
 Solly Ross
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [All] Removing translations from debug logging where exception formatted into the message

2014-06-26 Thread Kuvaja, Erno
Hi,

We hit a nasty situation where _() was removed from DEBUG-level logging and an
exception is included in the message, like the following:
msg = "Forbidden upload attempt: %s" % e
This caused gettextutils to raise UnicodeError:
2014-06-26 18:16:24.221 |   File glance/openstack/common/gettextutils.py, 
line 333, in __str__
2014-06-26 18:16:24.222 | raise UnicodeError(msg)
2014-06-26 18:16:24.222 | UnicodeError: Message objects do not support str() 
because they may contain non-ascii characters. Please use unicode() or 
translate() instead.
(For Example 
http://logs.openstack.org/63/102863/1/check/gate-glance-python27/6ad16a3/console.html#_2014-06-26_15_57_12_262)

As discussed with mriedm, jecarey and dhellmann on #openstack-oslo, this can be
avoided by making the message a unicode string, like:
msg = u"Forbidden upload attempt: %s" % e

For us in Glance it caused a bunch of gating issues, so hopefully this helps the
rest of the projects avoid the same, or at least tackle it a bit faster.
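
For anyone hitting the same thing, here is a small self-contained Python 2
sketch of the failure mode; the Message class below is only a stand-in for
oslo's gettextutils Message, not the real implementation:

class Message(object):
    # Stand-in: a byte-string format calls str() on the value, which the
    # translated Message refuses; a unicode format calls unicode() instead.
    def __init__(self, text):
        self.text = text

    def __unicode__(self):
        return self.text

    def __str__(self):
        raise UnicodeError("Message objects do not support str()")

e = Message(u"upload refused")  # stand-in for a translated exception message

# msg = "Forbidden upload attempt: %s" % e   # would raise UnicodeError
msg = u"Forbidden upload attempt: %s" % e    # works, yields a unicode string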


-  Erno (jokke_) Kuvaja
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [sahara] Upgrade of Hadoop components inside released version

2014-06-26 Thread Sergey Lukjanov
It was discussed on today's irc team meeting [0] and we've agreed that
in this case everything is ok but a small doc with notice needed.

So, in a few words, 98260 just updates the Ambari version, but not the
version of Hadoop / HDP, which means that the plugin version doesn't change
either. Just a notice about the need to upgrade Ambari should be added to the
docs for the case when users upgrade OpenStack from Icehouse to
Juno. We already have a page in the docs for such upgrade notes [1].

[0] 
http://eavesdrop.openstack.org/meetings/sahara/2014/sahara.2014-06-26-18.01.html
[1] 
http://docs.openstack.org/developer/sahara/userdoc/upgrade.guide.html#icehouse-juno

Thanks.

On Wed, Jun 25, 2014 at 11:41 PM, Erik Bergenholtz
ebergenho...@hortonworks.com wrote:
 Team -

 Please see in-line for my thoughts/opinions on the topic:


 From: Andrew Lazarev alaza...@mirantis.com
 Subject: [openstack-dev] [sahara] Upgrade of Hadoop components inside
 released version
 Date: June 24, 2014 at 5:20:27 PM EDT
 To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org
 Reply-To: OpenStack Development Mailing List \(not for usage questions\)
 openstack-dev@lists.openstack.org

 Hi Team,

 I want to raise topic about upgrade of components in Hadoop version that is
 already supported by released Sahara plugin. The question is raised because
 of several change requests [1] and [2]. Topic was discussed in Atlanta
 ([3]), but we didn't come to the decision.


 Any future policy that is put in place must provide the ability for a plugin
 to move forward in terms of functionality. Each plugin, depending on its
 implementation is going to have limitations, sometimes with backwards
 compatibility. This is not a function of Sahara proper, but possibly of
 Hadoop and or the distribution in question that the plugin implements. Each
 vendor/plugin should be allowed to control what they do or do not support.

 With regards to the code submissions that are being delayed by lack of
 backwards compatibility policy ([1] [2]), it is my opinion that they should
 be allowed to move forward as there is no policy in place that is being
 challenged and/or violated. However, these code submission serve as a good
 vehicle for discussing said compatibility policy.


 All of us agreed that existing clusters must continue to work after
 OpenStack upgrade. So if user creates cluster by Icehouse Sahara and then
 upgrades OpenStack - everything should continue working as before. The most
 tricky operation is scaling and it dictates list of restrictions over new
 version of component:

 1. plugin-version pair supported by the plugin must not change
 2. if component upgrade requires DIB involved then plugin must work with
 both versions of image - old and new one
 3. cluster with mixed nodes (created by old code and by new one) should
 still be operational

 Given that we should choose policy for components upgrade. Here are several
 options:

 1. Prohibit components upgrade in released versions of plugin. Change plugin
 version even if hadoop version didn't change. This solves all listed
 problems but a little bit frustrating for user. They will need to recreate
 all clusters they have and migrate data like as it is hadoop upgrade. They
 should also consider Hadoop upgrade to do migration only once.


 Re-creating a cluster just because the version of a plugin (or Sahara) has
 changed is very unlikely to occur in the real world as this could easily
 involve 1,000’s of nodes and many petabytes of data. There must be a more
 compelling reason to recreate a cluster than plugin/sahara has changed.
 What’s more likely is that cluster that is provisioned which is rendered
 incompatible with a future version of a plugin will result in an
 administrator making use of the ‘native’ management capabilities provided by
 the Hadoop distribution; in the case of HDP, this would be Ambari. Clusters
 can be completely managed through Ambari, including migration, scaling etc.
 It’s only the VM resources that are not managed by Ambari, but this is a
 relatively simple proposition.


 2. Disable some operations over cluster created by the previous version. If
 users don't have option to scale cluster there will be no problems with
 mixed nodes. For this option Sahara need to know if the cluster was created
 by this version or not.


 If for some reason a change is introduced in a plugin that renders it
 incompatible across either Hadoop OR OpenStack versions, it should still be
 possible to make such change in favor of moving the state of the art
 forward. Such incompatibility may be difficult (read expensive) or
 impossible to avoid. The requirement should be to specify the
 upgrade/migration support (through documentation) specifically with respect
 to scaling.


 3. Require change author to perform all kind of tests and prove that mixed
 cluster works as good and not mixed. In such case we need some list of tests
 that are enough to cover all 

Re: [openstack-dev] [TripleO] os-refresh-config run frequency

2014-06-26 Thread Clint Byrum
Excerpts from Macdonald-Wallace, Matthew's message of 2014-06-26 04:13:31 -0700:
 Hi all,
 
 I've been working more and more with TripleO recently and whilst it does seem 
 to solve a number of problems well, I have found a couple of idiosyncrasies 
 that I feel would be easy to address.
 
 My primary concern lies in the fact that os-refresh-config does not run on 
 every boot/reboot of a system.  Surely a reboot *is* a configuration change 
 and therefore we should ensure that the box has come up in the expected state 
 with the correct config?
 
 This is easily fixed through the addition of an @reboot entry in 
 /etc/crontab to run o-r-c or (less easily) by re-designing o-r-c to run as a 
 service.
 
 My secondary concern is that through not running os-refresh-config on a 
 regular basis by default (i.e. every 15 minutes or something in the same 
 style as chef/cfengine/puppet), we leave ourselves exposed to someone trying 
 to make a quick fix to a production node and taking that node offline the 
 next time it reboots because the config was still left as broken owing to a 
 lack of updates to HEAT (I'm thinking a quick change to allow root access 
 via SSH during a major incident that is then left unchanged for months 
 because no-one updated HEAT).
 
  There are a number of options to fix this, including modifying
  os-collect-config to auto-run os-refresh-config on a regular basis, or setting
  os-refresh-config to be its own service running via upstart or similar that
  triggers every 15 minutes.
 
  I'm sure there are other solutions to these problems; however, I know from 
 experience that claiming this is solved through education of users or (more 
 severely!) via HR is not a sensible approach to take as by the time you 
 realise that your configuration has been changed for the last 24 hours it's 
 often too late!
 

So I see two problems highlighted above. 

1) We don't re-assert ephemeral state set by o-r-c scripts. You're right,
and we've been talking about it for a while. The right thing to do is
have os-collect-config re-run its command on boot. I don't think a cron
job is the right way to go, we should just have a file in /var/run that
is placed there only on a successful run of the command. If that file
does not exist, then we run the command.

I've just opened this bug in response:

https://bugs.launchpad.net/os-collect-config/+bug/1334804

2) We don't re-assert any state on a regular basis.

So one reason we haven't focused on this is that we have a stretch goal
of running with a readonly root partition. It's gotten lost in a lot of
the craziness of "just get it working", but with rebuilds now blowing away
root, losing anything not on the state drive (/mnt currently),
there's a good chance that this will work relatively well.

Now, since people get root, they can always override the readonly root
and make changes. <golem>we hates thiss!</golem>.

I'm open to ideas, however, os-refresh-config is definitely not the
place to solve this. It is intended as a non-resident command to be
called when it is time to assert state. os-collect-config is intended
to gather configurations, and expose them to a command that it runs,
and thus should be the mechanism by which os-refresh-config is run.

I'd like to keep this conversation separate from one in which we discuss
more mechanisms to make os-refresh-config robust. There are a bunch of
things we can do, but I think we should focus just on "how do we
re-assert state?".

Because we're able to say right now that it is only for running when
config changes, we can wave our hands and say it's ok that we restart
everything on every run. As Jan alluded to, that won't work so well if
we run it every 20 minutes.

So, I wonder if we can introduce a config version into
os-collect-config.

Basically os-collect-config would keep a version along with its cache.
Whenever a new version is detected, os-collect-config would set a value
in the environment that informs the command this is a new version of
config. From that, scripts can do things like this:

if [ -n "$OS_CONFIG_NEW_VERSION" ] ; then
  service X restart
else
  if ! service X status ; then service X start ; fi
fi

This would lay the groundwork for future abilities to compare old/new so
we can take shortcuts by diffing the two config versions. For instance
if we look at old vs. new and we don't see any of the keys we care about
changed, we can skip restarting.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron]One security issue about floating ip

2014-06-26 Thread Vishvananda Ishaya
I believe this will affect nova-network as well. We probably should use 
something like the linux cutter utility to kill any ongoing connections after 
we remove the nat rule.

Vish

On Jun 25, 2014, at 8:18 PM, Xurong Yang ido...@gmail.com wrote:

 Hi folks,
 
 After we create an SSH connection to a VM via its floating ip, even though we 
 have removed the floating ip association, we can still access the VM via that 
 connection. Namely, SSH is not disconnected when the floating ip is not 
 valid. Any good solution about this security issue?
 
 Thanks
 Xurong Yang 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



signature.asc
Description: Message signed with OpenPGP using GPGMail
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] os-refresh-config run frequency

2014-06-26 Thread Chris Jones
Hi

Given...

On 26 Jun 2014, at 20:20, Clint Byrum cl...@fewbar.com wrote:
 we should just have a file in /var/run that

and...

 I think we should focus just on "how do we re-assert state?".

... for the reboot case, we could have os-collect-config check for the presence 
of the /var/run file when it starts; if it doesn't find it, unconditionally 
call o-r-c and then write out the file. Given that we're starting o-c-c on 
boot, this seems like a fairly simple way to get o-r-c to run on boot (and one 
that could be trivially disabled by configuration or just dumbly pre-creating 
the /var/run file).

 Whenever a new version is detected, os-collect-config would set a value
 in the environment that informs the command this is a new version of

I like the idea of exposing the fact that a new config version has arrived, to 
o-r-c scripts, but...

  if ! service X status ; then service X start ; fi

... I always worry when I see suggestions to have periodic state-assertion 
tasks take care of starting services that are not running, but in this case I 
will try to calm my nerves with the knowledge that service(1) is almost 
certainly talking to a modern init which is perfectly capable of supervising 
daemons :D

Cheers,

Chris


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [DevStack] neutron config not working

2014-06-26 Thread Rob Crittenden
Mark Kirkwood wrote:
 On 25/06/14 10:59, Rob Crittenden wrote:
 Before I get punted onto the operators list, I post this here because
 this is the default config and I'd expect the defaults to just work.

 Running devstack inside a VM with a single NIC configured and this in
 localrc:

 disable_service n-net
 enable_service q-svc
 enable_service q-agt
 enable_service q-dhcp
 enable_service q-l3
 enable_service q-meta
 enable_service neutron
 Q_USE_DEBUG_COMMAND=True

 Results in a successful install but no DHCP address assigned to hosts I
 launch and other oddities like no CIDR in nova net-list output.

 Is this still the default way to set things up for single node? It is
 according to https://wiki.openstack.org/wiki/NeutronDevstack


 
 That does look ok: I have an essentially equivalent local.conf:
 
 ...
 ENABLED_SERVICES+=,-n-net
 ENABLED_SERVICES+=,q-svc,q-agt,q-dhcp,q-l3,q-meta,q-metering,tempest
 
 I don't have 'neutron' specifically enabled... not sure if/why that
 might make any difference tho. However instance launching and ip address
 assignment seem to work ok.
 
 However I *have* seen the issue of instances not getting ip addresses in
 single host setups, and it is often due to use of virt io with bridges
 (with is the default I think). Try:
 
 nova.conf:
 ...
 libvirt_use_virtio_for_bridges=False

Thanks for the suggestion. At least in master this was replaced by a new
section, libvirt, but even setting it to False didn't do the trick for
me. I see the same behavior.

thanks

rob

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Log / error message format best practices standards

2014-06-26 Thread Ahmed RAHAL

Hi,

On 2014-06-26 12:14, boden wrote:

We were recently having a discussion over here in trove regarding a
standardized format to use for log and error messages - obviously
consistency is ideal (within and across projects). As this discussion
involves the broader dev community, bringing this topic to the list for
feedback...

[...]

For in-line values (#a above) I find single quotes the most consumable
as they are a clear indication the value came from code and moreover
provide a clear set of delimiters around the value. However to date
unquoted appears to be the most widely used.


+1


For appended values (#b above) I find a delimiter such as ':' most
consumable as it provides a clear boundary between the message and
value. Using ':' seems fairly common today, but you'll find other
formatting throughout the code.


+1
--

Ahmed

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron]One security issue about floating ip

2014-06-26 Thread Vishvananda Ishaya
I missed that going in, but it appears that clean_conntrack is not done on
disassociate, just during migration. It sounds like we should remove the
explicit call in migrate, and just always call it from remove_floating_ip.

Vish

On Jun 26, 2014, at 1:48 PM, Brian Haley brian.ha...@hp.com wrote:

 Signed PGP part
 I believe nova-network does this by using 'conntrack -D -r $fixed_ip' when the
 floating IP goes away (search for clean_conntrack), Neutron doesn't when it
 removes the floating IP.  Seems like it's possible to close most of that gap
 in the l3-agent - when it removes the IP from it's qg- interface it can do a
 similar operation.
 
 -Brian
 
 On 06/26/2014 03:36 PM, Vishvananda Ishaya wrote:
  I believe this will affect nova-network as well. We probably should use
  something like the linux cutter utility to kill any ongoing connections
  after we remove the nat rule.
 
  Vish
 
  On Jun 25, 2014, at 8:18 PM, Xurong Yang ido...@gmail.com wrote:
 
  Hi folks,
 
  After we create an SSH connection to a VM via its floating ip, even
  though we have removed the floating ip association, we can still access
  the VM via that connection. Namely, SSH is not disconnected when the
  floating ip is not valid. Any good solution about this security issue?
 
  Thanks Xurong Yang ___
  OpenStack-dev mailing list OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
  ___ OpenStack-dev mailing list
   OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



signature.asc
Description: Message signed with OpenPGP using GPGMail
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Two questions about 'backup' API

2014-06-26 Thread Vishvananda Ishaya
On Jun 26, 2014, at 5:07 AM, wu jiang win...@gmail.com wrote:

 Hi all,
 
 I tested the 'backup' API recently and got two questions about it:
 
 1. Why do 'daily' & 'weekly' appear in code comments & novaclient for the
 'backup_type' parameter?
 
 The 'backup_type' parameter is only a tag for this backup (image).
 And there isn't any corresponding validation of 'backup_type' for these two
 types.
 
 Moreover, there is also no periodic_task for 'backup' on the compute host.
 (It's fair to leave the choice to third-party systems.)
 
 So, why do we leave the 'daily | weekly' example in code comments and novaclient?
 IMO it may lead to confusion that Nova will do more for a 'daily|weekly'
 backup request.

The tag affects the cleanup of old copies, so if you do a tag of ‘weekly’ and
the rotation is 3, it will ensure you only have 3 copies that are tagged weekly.
You could also have 3 copies of the daily tag as well.
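
To make those rotation semantics concrete, here is a small illustrative sketch
(not the actual nova code; the names are made up):

def rotate_backups(images, backup_type, rotation):
    # images: list of dicts with 'backup_type' and 'created_at' keys.
    tagged = sorted((i for i in images if i['backup_type'] == backup_type),
                    key=lambda i: i['created_at'], reverse=True)
    # Keep the newest 'rotation' copies of this tag, delete the rest.
    return tagged[:rotation], tagged[rotation:]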

 
 2. Is it necessary to back up the instance when 'rotation' is equal to 0?
 
 Let's look at related codes in nova/compute/manager.py:
 # def backup_instance(self, context, image_id, instance, backup_type, 
 rotation):
 #
 #self._do_snapshot_instance(context, image_id, instance, rotation)
 #self._rotate_backups(context, instance, backup_type, rotation)
 
 I know Nova will delete all backup images according to the 'backup_type'
 parameter when 'rotation' equals 0.
 
 But according to the logic above, Nova will generate one new backup in
 _do_snapshot_instance(), and delete it in _rotate_backups().
 
 It's weird to snapshot a useless backup first, IMO.
 We need to add one new branch here: if 'rotation' is equal to 0, there is no need
 to back up, just rotate.

That makes sense I suppose.

Vish

 
 
 So, what's your opinions? Look forward to your suggestion.
 Thanks.
 
 WingWJ
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



signature.asc
Description: Message signed with OpenPGP using GPGMail
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Why is there a 'None' task_state between 'SCHEDULING' & 'BLOCK_DEVICE_MAPPING'?

2014-06-26 Thread Vishvananda Ishaya
Thanks WingWJ. It would also be great to track this in a bug.

Vish

On Jun 26, 2014, at 5:30 AM, wu jiang win...@gmail.com wrote:

 Hi Phil, 
 
 Ok, I'll submit a patch to add a new task_state(like 'STARTING_BUILD') in 
 these two days. 
 And related modifications will be definitely added in the Doc.
 
 Thanks for your help. :)
 
 WingWJ
 
 
 On Thu, Jun 26, 2014 at 6:42 PM, Day, Phil philip@hp.com wrote:
 What do others think – do we want a spec to add an additional task_state value
 that will be set in a well-defined place? Kind of feels like overkill to me in
 terms of the review effort that would take compared to just reviewing the
 code - it's not as if there are going to be lots of alternatives to consider here.
 
  
 
 From: wu jiang [mailto:win...@gmail.com] 
 Sent: 26 June 2014 09:19
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [nova] Why is there a 'None' task_state between 
 'SCHEDULING'  'BLOCK_DEVICE_MAPPING'?
 
  
 
  Hi Phil, 
 
  
 
 thanks for your reply. So should I submit a patch/spec to add it now?
 
  
 
 On Wed, Jun 25, 2014 at 5:53 PM, Day, Phil philip@hp.com wrote:
 
 Looking at this a bit deeper, the comment in _start_building() says that it's
 doing this to “Save the host and launched_on fields and log appropriately”.  
   But as far as I can see those don’t actually get set until the claim is 
 made against the resource tracker a bit later in the process, so this whole 
 update might just be not needed – although I still like the idea of a state 
 to show that the request has been taken off the queue by the compute manager.
 
  
 
 From: Day, Phil 
 Sent: 25 June 2014 10:35
 
 
 To: OpenStack Development Mailing List
 
 Subject: RE: [openstack-dev] [nova] Why is there a 'None' task_state between 
 'SCHEDULING'  'BLOCK_DEVICE_MAPPING'?
 
  
 
 Hi WingWJ,
 
  
 
 I agree that we shouldn’t have a task state of None while an operation is in 
 progress.  I’m pretty sure back in the day this didn’t use to be the case and 
 task_state stayed as Scheduling until it went to Networking  (now of course 
 networking and BDM happen in parallel, so you have to be very quick to see 
 the Networking state).
 
  
 
 Personally I would like to see the extra granularity of knowing that a 
 request has been started on the compute manager (and knowing that the request 
 was started rather than is still sitting on the queue makes the decision to 
 put it into an error state when the manager is re-started more robust).
 
  
 
 Maybe a task state of “STARTING_BUILD” for this case ?
 
  
 
 BTW I don’t think _start_building() is called anymore now that we’ve switched 
 to conductor calling build_and_run_instance() – but the same task_state issue 
 exist in there well.
 
  
 
 From: wu jiang [mailto:win...@gmail.com]
 
 Sent: 25 June 2014 08:19
 To: OpenStack Development Mailing List
 
 Subject: [openstack-dev] [nova] Why is there a 'None' task_state between 
 'SCHEDULING'  'BLOCK_DEVICE_MAPPING'?
 
  
 
 Hi all, 
 
  
 
 Recently, some of my instances were stuck in task_state 'None' during VM 
 creation in my environment.
 
  
 
  So I checked & found there's a 'None' task_state between 'SCHEDULING' &
  'BLOCK_DEVICE_MAPPING'.
 
  
 
 The related codes are implemented like this:
 
  
 
  #def _start_building():
  #    self._instance_update(context, instance['uuid'],
  #                          vm_state=vm_states.BUILDING,
  #                          task_state=None,
  #                          expected_task_state=(task_states.SCHEDULING,
  #                                               None))
 
  
 
  So if the compute node is rebooted after that point, all building VMs on it
  will stay in the 'None' task_state forever. And that's useless and not convenient
  for locating problems.
 
  
 
 Why not a new task_state for this step? 
 
  
 
  
 
 WingWJ
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
  
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



signature.asc
Description: Message signed with OpenPGP using GPGMail
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Use MariaDB by default on Fedora

2014-06-26 Thread Giulio Fidente

On 06/26/2014 11:11 AM, Jan Provaznik wrote:

On 06/25/2014 06:58 PM, Giulio Fidente wrote:

On 06/16/2014 11:14 PM, Clint Byrum wrote:

Excerpts from Gregory Haynes's message of 2014-06-16 14:04:19 -0700:

Excerpts from Jan Provazník's message of 2014-06-16 20:28:29 +:

Hi,
MariaDB is now included in Fedora repositories, this makes it
easier to
install and more stable option for Fedora installations. Currently
MariaDB can be used by including mariadb (use mariadb.org pkgs) or
mariadb-rdo (use redhat RDO pkgs) element when building an image. What
do you think about using MariaDB as default option for Fedora when
running devtest scripts?


(first, I believe Jan means that MariaDB _Galera_ is now in Fedora)


I think so too.


Id like to give this a try. This does start to change us from being a
deployment of openstck to being a deployment per distro but IMO thats a
reasonable position.

Id also like to propose that if we decide against doing this then these
elements should not live in tripleo-image-elements.


I'm not so sure I agree. We have lio and tgt because lio is on RHEL but
everywhere else is still using tgt IIRC.

However, I also am not so sure that it is actually a good idea for
people
to ship on MariaDB since it is not in the gate. As it diverges from
MySQL
(starting in earnest with 10.x), there will undoubtedly be subtle issues
that arise. So I'd say having MariaDB get tested along with Fedora will
actually improve those users' test coverage, which is a good thing.


I am favourable to the idea of switching to mariadb for fedora based
distros.

Currently the default mysql element seems to be switching [1], yet for
ubuntu/debian only, from the percona provided binary tarball of mysql to
the percona provided packaged version of mysql.

In theory we could further update it to use percona provided packages of
mysql on fedora too but I'm not sure there is much interest in using
that combination where people gets mariadb and galera from the official
repos.



IIRC fedora packages for percona xtradb cluster are not provided (unless
something has changed recently).


I see, so on Fedora it will definitely be easier and safer to just use
the mariadb/galera packages provided in the official repo ... and this
further reinforces my idea that using that by default for Fedora is the
best option.


--
Giulio Fidente
GPG KEY: 08D733BA

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Virtual Interface creation failed

2014-06-26 Thread Vishvananda Ishaya
I have seen something like this before with nova-network, and it was due to the
number of requests: the rpc call timeout gets hit for allocate_network. You 
might need to set your rpc_response_timeout to something greater. I think it 
defaults to 60 seconds.

Vish

On Jun 25, 2014, at 6:57 AM, tfre...@redhat.com wrote:

 Hello,
 
 During the tests of Multiple RPC, I've encountered a problem to create VMs.
 Creation of 180 VMs succeeded.
 
 But when I've tried to create 200 VMs, part of the VMs failed with resources 
 problem of VCPU limitation, the other part failed with following error:
 vm failed -  {message: Virtual Interface creation failed, code: 500, 
 created: 2014-06-25T10:22:35Z} | | flavor | nano (10)   
 
 We can see from the Neutron server and Nova API logs, that Neutron got the 
 Nova request and responded to it, but this connection fails somewhere between 
 Nova API and Nova Compute.
 
 Please see the exact logs: http://pastebin.test.redhat.com/217653
 
 
 Tested with latest Icehouse version on RHEL 7.
 Controller + Compute Node
 
 All Nova and Neutron logs are attached.
 
 Is this a known issue?
 -- 
 Thanks,
 Toni
 multiple_vm_neutron_log.tar.gzmultiple_vm_nova_log.tar.gz___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



signature.asc
Description: Message signed with OpenPGP using GPGMail
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra] Gerrit downtime on Saturday, June 28, 2014

2014-06-26 Thread Hayes, Graham
Hi,

We (designate) have 2 other repos that need to be moved as well

stackforge/python-designateclient -> openstack/python-designateclient
stackforge/designate-specs -> openstack/designate-specs

- Graham


On Thu, 2014-06-26 at 09:33 -0700, James E. Blair wrote:


Hi,

On Saturday, June 28 at 15:00 UTC Gerrit will be unavailable for about
15 minutes while we rename some projects.  Existing reviews, project
watches, etc, should all be carried over.  The current list of projects
that we will rename is:

stackforge/designate -> openstack/designate
openstack-dev/bash8 -> openstack-dev/bashate
stackforge/murano-repository -> stackforge-attic/murano-repository
stackforge/murano-metadataclient -> stackforge-attic/murano-metadataclient
stackforge/murano-common -> stackforge-attic/murano-common
stackforge/murano-conductor -> stackforge-attic/murano-conductor
stackforge/murano-tests -> stackforge-attic/murano-tests

This list is subject to change.

-Jim

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.orgmailto:OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [heat] Sergey Kraynev for heat-core

2014-06-26 Thread Steve Baker
I'd like to nominate Sergey Kraynev for heat-core. His reviews are
valuable and prolific, and his commits have shown a sound understanding
of heat internals.

http://stackalytics.com/report/contribution/heat-group/60

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] Sergey Kraynev for heat-core

2014-06-26 Thread Zane Bitter

On 26/06/14 18:08, Steve Baker wrote:

I'd like to nominate Sergey Kraynev for heat-core. His reviews are
valuable and prolific, and his commits have shown a sound understanding
of heat internals.

http://stackalytics.com/report/contribution/heat-group/60


+1


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] volume creation failed.

2014-06-26 Thread Duncan Thomas
By default, devstack does not keep the logs. See the "Screen logging"
section on http://devstack.org/configuration.html for how to turn it
on.

On 26 June 2014 13:53, Yogesh Prasad yogesh.pra...@cloudbyte.com wrote:
 Hi,

 I have a devstack setup.
 Please tell me, how i can create separate log file for each type of logs.
 like cinder-api, cinder-scheduler and cinder-volume logs.


 On Thu, Jun 26, 2014 at 5:49 PM, Duncan Thomas duncan.tho...@gmail.com
 wrote:

 I'm afraid that isn't the log we need to diagnose your problem. Can
 you put cinder-api, cinder-scheduler and cinder-volume logs up please?

 On 26 June 2014 13:12, Yogesh Prasad yogesh.pra...@cloudbyte.com wrote:
  Hi All,
 
  I have a devstack setup , and i am trying to create a volume but it is
  creating with error status.
  Can any one tell me what is the problem?
 
  Screen logs --
 
  .py:297
  2014-06-26 17:37:04.370 DEBUG keystone.notifications [-] CADF Event:
  {'typeURI': 'http://schemas.dmtf.org/cloud/audit/1.0/event',
  'initiator':
  {'typeURI': 'service/security/account/user', 'host': {'agent':
  'python-keystoneclient', 'address': '20.10.22.245'}, 'id':
  'openstack:d58d5688-f604-4362-9069-8cb217c029c8', 'name':
  u'6fcd84d16da646dc825411da06bf26b2'}, 'target': {'typeURI':
  'service/security/account/user', 'id':
  'openstack:85ef43dd-b0ab-4726-898e-36107b06a231'}, 'observer':
  {'typeURI':
  'service/security', 'id':
  'openstack:120866e8-51b9-4338-b41b-2dbea3aa4f17'},
  'eventType': 'activity', 'eventTime': '2014-06-26T12:07:04.368547+',
  'action': 'authenticate', 'outcome': 'success', 'id':
  'openstack:dda01da7-1274-4b4f-8ff5-1dcdb6d80ff4'} from (pid=7033)
  _send_audit_notification
  /opt/stack/keystone/keystone/notifications.py:297
  2014-06-26 17:37:04.902 INFO eventlet.wsgi.server [-] 20.10.22.245 - -
  [26/Jun/2014 17:37:04] POST /v2.0//tokens HTTP/1.1 200 6913 0.771471
  2014-06-26 17:37:04.992 DEBUG keystone.middleware.core [-] RBAC:
  auth_context: {'is_delegated_auth': False, 'user_id':
  u'27353284443e43278600949a1467c65f', 'roles': [u'admin', u'_member_'],
  'trustee_id': None, 'trustor_id': None, 'project_id':
  u'e19957e0d69c4bfc9a9f872a2fcee1a3', 'trust_id': None} from (pid=7033)
  process_request /opt/stack/keystone/keystone/middleware/core.py:286
  2014-06-26 17:37:05.009 DEBUG keystone.common.wsgi [-] arg_dict: {} from
  (pid=7033) __call__ /opt/stack/keystone/keystone/common/wsgi.py:181
  2014-06-26 17:37:05.023 DEBUG keystone.common.controller [-] RBAC:
  Authorizing identity:revocation_list() from (pid=7033)
  _build_policy_check_credentials
  /opt/stack/keystone/keystone/common/controller.py:54
  2014-06-26 17:37:05.027 DEBUG keystone.common.controller [-] RBAC: using
  auth context from the request environment from (pid=7033)
  _build_policy_check_credentials
  /opt/stack/keystone/keystone/common/controller.py:59
  2014-06-26 17:37:05.033 DEBUG keystone.policy.backends.rules [-] enforce
  identity:revocation_list: {'is_delegated_auth': False, 'user_id':
  u'27353284443e43278600949a1467c65f', 'roles': [u'admin', u'_member_'],
  'trustee_id': None, 'trustor_id': None, 'project_id':
  u'e19957e0d69c4bfc9a9f872a2fcee1a3', 'trust_id': None} from (pid=7033)
  enforce /opt/stack/keystone/keystone/policy/backends/rules.py:101
  2014-06-26 17:37:05.040 DEBUG keystone.openstack.common.policy [-] Rule
  identity:revocation_list will be now enforced from (pid=7033) enforce
  /opt/stack/keystone/keystone/openstack/common/policy.py:288
  2014-06-26 17:37:05.043 DEBUG keystone.common.controller [-] RBAC:
  Authorization granted from (pid=7033) inner
  /opt/stack/keystone/keystone/common/controller.py:151
  2014-06-26 17:37:05.228 INFO eventlet.wsgi.server [-] 20.10.22.245 - -
  [26/Jun/2014 17:37:05] GET /v2.0/tokens/revoked HTTP/1.1 200 815
  0.277525
 
  --
  Thanks  Regards,
Yogesh Prasad.
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 



 --
 Duncan Thomas

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
  Thanks & Regards,
   Yogesh Prasad.

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Duncan Thomas

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer] [Heat] Ceilometer aware people, please advise us on processing notifications..

2014-06-26 Thread Zane Bitter

On 23/06/14 19:25, Clint Byrum wrote:

Hello! I would like to turn your attention to this specification draft
that I've written:

https://review.openstack.org/#/c/100012/1/specs/convergence-continuous-observer.rst

Angus has suggested that perhaps Ceilometer is a better place to handle
this. Can you please comment on the review, or can we have a brief
mailing list discussion about how best to filter notifications?

Basically in Heat when a user boots an instance, we would like to act as
soon as it is active, and not have to poll the nova API to know when
that is. Angus has suggested that perhaps we can just tell ceilometer to
hit Heat with a web hook when that happens.


I'm all in favour of having Ceilometer filter the firehose for us if we 
can :)


Webhooks would seem to add a lot of overhead though (set up + tear down 
a connection for every notification), that could perhaps be avoided by 
using a message bus? Given that both setting up and receiving these 
notifications would be admin-only operations, is there any benefit to 
handling them through a webhook API rather than through oslo.messaging?
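
A minimal sketch of the oslo.messaging alternative, assuming a listener on the 
usual 'notifications' topic; the Heat-side handler name is hypothetical:

    from oslo.config import cfg
    from oslo import messaging


    class InstanceEventEndpoint(object):
        # Receives nova notifications off the message bus and reacts only to
        # the event Heat cares about; everything else is ignored.
        def info(self, ctxt, publisher_id, event_type, payload, metadata):
            if event_type == 'compute.instance.create.end':
                mark_resource_active(payload['instance_id'])  # hypothetical Heat hook


    transport = messaging.get_transport(cfg.CONF)
    targets = [messaging.Target(topic='notifications')]
    listener = messaging.get_notification_listener(
        transport, targets, [InstanceEventEndpoint()])
    listener.start()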


cheers,
Zane.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] Sergey Kraynev for heat-core

2014-06-26 Thread Randall Burt
On Jun 26, 2014, at 5:08 PM, Steve Baker sba...@redhat.com wrote:

 I'd like to nominate Sergey Kraynev for heat-core. His reviews are
 valuable and prolific, and his commits have shown a sound understanding
 of heat internals.
 
 http://stackalytics.com/report/contribution/heat-group/60

+1


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer] [Heat] Ceilometer aware people, please advise us on processing notifications..

2014-06-26 Thread Randall Burt
On Jun 26, 2014, at 5:25 PM, Zane Bitter zbit...@redhat.com
 wrote:

 On 23/06/14 19:25, Clint Byrum wrote:
 Hello! I would like to turn your attention to this specification draft
 that I've written:
 
 https://review.openstack.org/#/c/100012/1/specs/convergence-continuous-observer.rst
 
 Angus has suggested that perhaps Ceilometer is a better place to handle
 this. Can you please comment on the review, or can we have a brief
 mailing list discussion about how best to filter notifications?
 
 Basically in Heat when a user boots an instance, we would like to act as
 soon as it is active, and not have to poll the nova API to know when
 that is. Angus has suggested that perhaps we can just tell ceilometer to
 hit Heat with a web hook when that happens.
 
 I'm all in favour of having Ceilometer filter the firehose for us if we can :)
 
 Webhooks would seem to add a lot of overhead though (set up + tear down a 
 connection for every notification), that could perhaps be avoided by using a 
 message bus? Given that both setting up and receiving these notifications 
 would be admin-only operations, is there any benefit to handling them through 
 a webhook API rather than through oslo.messaging?
 
 cheers,
 Zane.

In larger OpenStack deployments, the different services probably don't share 
the same message bus. While I certainly agree oslo.messaging and/or 
oslo.notifications should be an option (and probably the default one at that), 
I think there should still be an option to use ceilometer or some other 
notification mechanism. As long as it's pluggable, I don't think anyone would be 
too fussed.
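
One way to make this pluggable would be a driver loaded through stevedore at 
startup; the namespace and option names below are made up for illustration, not 
existing Heat configuration:

    from stevedore import driver


    def load_observer(conf):
        # Load whichever notification/observer driver the operator configured,
        # e.g. 'messaging', 'ceilometer_webhook', or something deployment-specific.
        mgr = driver.DriverManager(
            namespace='heat.event_observers',   # hypothetical entry-point group
            name=conf.event_observer,           # hypothetical config option
            invoke_on_load=True)
        return mgr.driver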
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][IPv6] Neutron IPv6 in Icehouse and further

2014-06-26 Thread Maksym Lobur
Hi Folks,

Could you please tell what is the current state of IPv6 in Neutron? Does it
have DHCPv6 working?

What is the best point to start hacking from? Devstack stable/icehouse or
maybe some tag? Are there any docs / raw deployment guides?
I see some patches not landed yet [1] ... I assume it won't work without
them, right?

Somehow I can't open any of the code reviews from the [2] (Not Found)

[1]
https://review.openstack.org/#/q/status:open+project:openstack/neutron+branch:master+topic:%255E.*%255Cipv6.*,n,z
[2] https://wiki.openstack.org/wiki/Neutron/IPv6

Best regards,
Max Lobur,
Python Developer, Mirantis, Inc.

Mobile: +38 (093) 665 14 28
Skype: max_lobur

38, Lenina ave. Kharkov, Ukraine
www.mirantis.com
www.mirantis.ru
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][IPv6] Neutron IPv6 in Icehouse and further

2014-06-26 Thread Martinx - ジェームズ
Hi! I'm waiting for that too...

Currently, I'm running IceHouse with static IPv6 addresses, using the VLAN
Provider Networks topology, and to make it easier, I'm counting on the
following blueprint:

https://blueprints.launchpad.net/neutron/+spec/ipv6-provider-nets-slaac

...but, I'm not sure if it will be enough to enable basic IPv6 support
(without using Neutron as Instance's default gateway)...

Cheers!
Thiago


On 26 June 2014 19:35, Maksym Lobur mlo...@mirantis.com wrote:

 Hi Folks,

 Could you please tell what is the current state of IPv6 in Neutron? Does
 it have DHCPv6 working?

 What is the best point to start hacking from? Devstack stable/icehouse or
 maybe some tag? Are there any docs / raw deployment guides?
 I see some patches not landed yet [1] ... I assume it won't work without
 them, right?

 Somehow I can't open any of the code reviews from the [2] (Not Found)

 [1]
 https://review.openstack.org/#/q/status:open+project:openstack/neutron+branch:master+topic:%255E.*%255Cipv6.*,n,z
 [2] https://wiki.openstack.org/wiki/Neutron/IPv6

  Best regards,
 Max Lobur,
 Python Developer, Mirantis, Inc.

 Mobile: +38 (093) 665 14 28
 Skype: max_lobur

 38, Lenina ave. Kharkov, Ukraine
 www.mirantis.com
 www.mirantis.ru

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [marconi] Decoupling backend drivers

2014-06-26 Thread Fei Long Wang
+1 for the decoupling, since we're running into an issue when using
mysql as the pool.

On 27/06/14 03:10, Kurt Griffiths wrote:
 Crew, I'd like to propose the following:

  1. Decouple pool management from data storage (two separate drivers)
  2. Keep pool management driver for sqla, but drop the sqla data
 storage driver
  3. Provide a non-AGPL alternative to MongoDB that has feature parity
 and is at least as performant

 Decoupling will make configuration less confusing, while allowing us
 to maintain drivers separately and giving us the flexibility to choose
 the best tool for the job (BTFJ). Once that work is done, we can drop
 support for sqla as a message store backend, since it isn't a viable
 non-AGPL alternative to MongoDB. Instead, we can look into some other
 backends that offer a good mix of durability and performance.

 What does everyone think about this strategy?
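
A rough sketch of the split being proposed, with illustrative class names (not
the existing marconi interfaces):

    import abc


    class PoolManagementDriver(object):
        # Pool catalogue and queue-to-pool mapping (could stay on sqla).
        __metaclass__ = abc.ABCMeta

        @abc.abstractmethod
        def register_pool(self, name, uri, weight):
            pass

        @abc.abstractmethod
        def lookup(self, queue, project):
            # Return the data-storage driver that owns this queue.
            pass


    class DataStorageDriver(object):
        # Message and claim storage (MongoDB today, or a non-AGPL alternative).
        __metaclass__ = abc.ABCMeta

        @abc.abstractmethod
        def post_messages(self, queue, messages, project):
            pass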

 --
 Kurt Griffiths (kgriffs)


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-- 
Cheers & Best regards,
Fei Long Wang (???)
--
Senior Cloud Software Engineer
Tel: +64-48032246
Email: flw...@catalyst.net.nz
Catalyst IT Limited
Level 6, Catalyst House, 150 Willis Street, Wellington
-- 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [marconi] Python 3.3 Gate is Passing!

2014-06-26 Thread Fei Long Wang
Nataliia, good job. Glad to see we can pass the py33 gate.

On 27/06/14 03:40, Kurt Griffiths wrote:
 Hi everyone, I just wanted to congratulate Nataliia on making
 Marconi one of the first OpenStack projects to pass the py33 gate!

 https://pbs.twimg.com/media/BrEQrZiCMAAbfEX.png:large

 Now, let's make that gate voting. :D

 ---
 Kurt Griffiths (kgriffs)


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-- 
Cheers & Best regards,
Fei Long Wang (???)
--
Senior Cloud Software Engineer
Tel: +64-48032246
Email: flw...@catalyst.net.nz
Catalyst IT Limited
Level 6, Catalyst House, 150 Willis Street, Wellington
-- 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Bug squashing day on Tu, 24th of June

2014-06-26 Thread Dmitry Borodaenko
I've updated the bug squashing stats to the current numbers, and we still
have the same trend that I noted on Tuesday: the overall number of bugs is
going down, but the number of high-priority bugs is growing:

                                                  Count  Change since Tuesday
New                                                  18     +7
Incomplete                                           23     +1
Critical/High for 5.1                               141     +6
Critical/High for 5.1, Confirmed/Triaged             83     -1
Medium/Low/Undefined for 5.1, Confirmed/Triaged     222     -6
In progress                                          69     -5
Customer-found                                       24     -1
Confirmed/Triaged/In progress for 5.1               374    -12
Total open for 5.1                                  415     -4
When triaging bugs, please make sure you carefully follow the bug
priorities. Base guideline for OpenStack projects:
https://wiki.openstack.org/wiki/BugTriage#Task_2:_Prioritize_confirmed_bugs_.28bug_supervisors.29

Converted to Fuel context:
Critical = can't deploy anything and there's no trivial workaround; data
loss; or security vulnerability
High = specific configurations or components are unusable and there's no
workaround; or everything is broken but there's a workaround
Medium = specific configuration or component is working incorrectly; or is
completely unusable but there's a workaround
Low = minor feature is broken and can be fixed with a trivial workaround;
or a cosmetic defect
Wishlist = not a bug, should be either converted to blueprint and closed as
Invalid, or closed as Won't Fix

Thanks,
-DmitryB



On Tue, Jun 24, 2014 at 9:07 PM, Dmitry Borodaenko dborodae...@mirantis.com
 wrote:

 Updated numbers from the end of the day:


 Category                                          Start      Mid-day     End
 New                                                17 (+5)    17 (0)      11 (-6 / -1)
 Incomplete                                         25 (-6)    21 (-4)     22 (-3 / -9)
 Critical/High for 5.1                             140                    135 (-5 / +28)
 Critical/High for 5.1, Confirmed/Triaged           92         87 (-5)     84 (-8)
 Medium/Low/Undefined for 5.1, Confirmed/Triaged   238        230 (-8)    228 (-10)
 In progress                                        67         73 (+6)     74 (+7)
 Customer-found                                     27         25 (-2)     25 (-2)
 Confirmed/Triaged/In progress for 5.1             392                    386 (-6 / +18)
 Total open for 5.1                                439 (+28)  428 (-11)   419 (-20 / +8)
 (Start deltas are vs 2014-06-17; mid-day and end deltas are vs start of day;
 the second end delta, where shown, is vs 2014-06-17.)
 As you can see, we made good progress today: unlike last week, we reduced all
 numbers except bugs in progress, which suggests we need to be more focused and
 do better at pushing patches through code review. However, compared with the
 numbers at the end of last week's bug squashing day, we're still in the red.
 We're getting better at triaging bugs, but we're still finding more
 High/Critical bugs than we're able to fix. I feel that some bug priority
 inflation is taking place (the total number of bugs has grown by 8, the number
 of High/Critical bugs by 28), so we need to be stricter about applying the bug
 priority guidelines:
 https://wiki.openstack.org/wiki/BugTriage#Task_2:_Prioritize_confirmed_bugs_.28bug_supervisors.29

 Thank you all for participating, let's do even better next week!
 -DmitryB



 On Tue, Jun 24, 2014 at 12:41 PM, Dmitry Borodaenko 
 dborodae...@mirantis.com wrote:

 Mid-day numbers update:


 Category                                          Start       Mid-day
 New                                                17 (+5)     17 (0)
 Incomplete                                         25 (-6)     21 (-4)
 Critical/High for 5.1                             140 (+33)   140 (0)
 Critical/High for 5.1, Confirmed/Triaged           92          87 (-5)
 Medium/Low/Undefined for 5.1, Confirmed/Triaged   238         230 (-8)
 In progress                                        67          73 (+6)
 Customer-found                                     27          25 (-2)
 Confirmed/Triaged/In progress for 5.1             392 (+24)   392 (0)
 Total open for 5.1                                439 (+28)   428 (-11)
 (Start deltas are vs 2014-06-17; mid-day deltas are vs start of day.)
 Spreadsheet:

 https://docs.google.com/a/mirantis.com/spreadsheets/d/10mUeRwOplnmoe_RFkrUSeVEw-__ZMU2nq23BOY-gzYs/edit#gid=1683970476

 --
 Dmitry Borodaenko




 --
 Dmitry Borodaenko




-- 
Dmitry Borodaenko
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Re: [heat] Sergey Kraynev for heat-core

2014-06-26 Thread Huangtianhua
+1, congratulations :)

-----Original Message-----
From: Steve Baker [mailto:sba...@redhat.com]
Sent: 27 June 2014 6:08
To: OpenStack Development Mailing List
Subject: [openstack-dev] [heat] Sergey Kraynev for heat-core

I'd like to nominate Sergey Kraynev for heat-core. His reviews are valuable and 
prolific, and his commits have shown a sound understanding of heat internals.

http://stackalytics.com/report/contribution/heat-group/60

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Performance of security group

2014-06-26 Thread joehuang
Interesting idea to optimize the performance.

Security group rules are not the only source of fanout message load; we need to
review all fanout usage in Neutron and check whether it can be optimized.

For example, L2 population:

self.fanout_cast(context,
  self.make_msg(method, fdb_entries=fdb_entries),
  topic=self.topic_l2pop_update)

it would be better to use network+l2pop_update as the topic, so that only the
agents that host VMs on that network consume the message.
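
For illustration, the same call with the topic scoped per network (the topic
format is only an assumption):

    # Only agents that created a consumer for this network's topic will
    # receive the fanout, instead of every L2 agent in the deployment.
    topic = '%s-%s' % (self.topic_l2pop_update, network_id)
    self.fanout_cast(context,
                     self.make_msg(method, fdb_entries=fdb_entries),
                     topic=topic)

The agent side would then need to subscribe to one such topic per network it
hosts ports on.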

Best Regards
Chaoyi Huang( Joe Huang)

-----Original Message-----
From: Miguel Angel Ajo Pelayo [mailto:mangel...@redhat.com]
Sent: 27 June 2014 1:33
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron] Performance of security group

- Original Message -
 @Nachi: Yes that could a good improvement to factorize the RPC mechanism.
 
 Another idea:
  What about creating an RPC topic per security group (what about the RPC
  topic scalability?) to which an agent subscribes if one of its ports is
  associated with that security group?
 
 Regards,
 Édouard.
 
 


Hmm, Interesting,

@Nachi, I'm not sure I fully understood:


SG_LIST = [SG1, SG2]
SG_RULE_LIST = [SG_Rule1, SG_Rule2] ...
port1[SG_ID1, SG_ID2], port2, port3


Probably we may need to include also the SG_IP_LIST = [SG_IP1, SG_IP2] ...


and let the agent do all the combination work.

Something like this could make sense?

Security_Groups = {SG1: {IPs: [], RULES: []},
                   SG2: {IPs: [], RULES: []}
                  }

Ports = {Port1: [SG1, SG2], Port2: [SG1]}
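
As a sketch of letting the agent do the combination work locally (helper name
is hypothetical):

    def rules_for_port(port_id, ports, security_groups):
        # Expand a port's rule set from the compact per-SG tables above,
        # instead of receiving the fully expanded rules for every port via RPC.
        rules = []
        for sg_id in ports.get(port_id, []):
            rules.extend(security_groups[sg_id]['RULES'])
        return rules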


@Edouard, actually I like the idea of having agents subscribe to the security
groups they have ports on... That would remove the need to include all the
security group information on every call...

But we would need another call to get the full information for a set of
security groups at start/resync if we don't already have it.


 
 On Fri, Jun 20, 2014 at 4:04 AM, shihanzhang  ayshihanzh...@126.com  wrote:
 
 
 
 hi Miguel Ángel,
 I very much agree with you on the following points:
   * physical implementation on the hosts (ipsets, nftables, ... )
 -- this can reduce the load on the compute nodes.
   * rpc communication mechanisms.
 -- this can reduce the load on the neutron server.
 Can you help me review my BP specs?
 
 
 
 
 
 
 
 At 2014-06-19 10:11:34, Miguel Angel Ajo Pelayo  mangel...@redhat.com 
 wrote:
 
   Hi, it's a very interesting topic; I was getting ready to raise
 the same concerns about our security groups implementation. shihanzhang,
 thank you for starting this topic.

   At the low level, with our default security group rules (allow all
 incoming traffic from the 'default' SG), the iptables rules grow roughly
 as ~X^2 per tenant, and the security_group_rules_for_devices RPC call from
 ovs-agent to neutron-server grows to message sizes of 100MB, generating
 serious scalability issues or timeouts/retries that totally break the
 neutron service.
 
(example trace of that RPC call with a few instances
  http://www.fpaste.org/104401/14008522/ )
 
   I believe that we also need to review the RPC calling mechanism
 for the OVS agent here; there are several possible approaches to breaking
 down (and/or CIDR-compressing) the information we return via this API call.
 
 
So we have to look at two things here:
 
   * physical implementation on the hosts (ipsets, nftables, ... )
   * rpc communication mechanisms.
 
Best regards,
 Miguel Ángel.
 
 - Mensaje original -
 
   Did you think about nftables, which will replace {ip,ip6,arp,eb}tables?
   It is also based on the rule set mechanism.
   The issue with that proposition is that it has only been stable since the
   beginning of the year, and only on Linux kernel 3.13.
   But there are a lot of pros I won't list here (it overcomes iptables
   limitations, efficient rule updates, rule sets, standardization of
   netfilter commands...).
 
  Édouard.
 
  On Thu, Jun 19, 2014 at 8:25 AM, henry hly  henry4...@gmail.com  wrote:
 
    we have done some tests but got a different result: performance is
    nearly the same with empty vs. 5k rules in iptables, but there is a huge
    gap between enabling and disabling the iptables hook on the Linux bridge
  
 
   On Thu, Jun 19, 2014 at 11:21 AM, shihanzhang  ayshihanzh...@126.com 
   wrote:
  
 
 I do not have accurate test data yet, but I can confirm the following
 points:

 1. On a compute node, a VM's iptables chain is linear and iptables filters
 packets against it rule by rule; if a VM is in the default security group
 and that group has many members the chain gets long, whereas with an ipset
 chain the filtering time hardly differs between one member and many.

 2. When the iptables rule set is very large, the probability that
 iptables-save fails to save it is very high.
   
  
 
At 2014-06-19 10:55:56, Kevin Benton  blak...@gmail.com  wrote:
   
  
 
 This sounds like a good idea to handle some of the performance
 issues
 until
 the ovs firewall can 

Re: [openstack-dev] [nova] Two questions about 'backup' API

2014-06-26 Thread wu jiang
Hi Vish, thanks for your reply.

About Q1, I mean that Nova does no extra processing for 'daily'/'weekly'
compared with other backup_type values like '123'/'test'.
'daily' and 'weekly' have no special place in the API compared with anything
else.

But we give them as examples in the code comments and especially in novaclient.

A few users asked me why their instances were not backed up automatically;
they thought we had a scheduled task that runs when 'backup_type' equals
'daily'/'weekly', because we prompt them to use those values.
Therefore, IMO these examples are useless and only cause confusion about this
API. There is no need to show them in the code comments and novaclient.

P.S. So maybe 'backup_name'/'backup_tag' would be a better name, but we can't
modify the API, for compatibility.


Thanks.

WingWJ
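
For reference, a minimal sketch of the Q2 change suggested in the quoted thread
below: skip the snapshot when rotation is 0 and only prune. It just rearranges
the manager calls quoted there and is not the actual nova code:

    def backup_instance(self, context, image_id, instance, backup_type, rotation):
        if rotation == 0:
            # Nothing should be kept: don't create a new snapshot, only
            # delete the existing backups for this backup_type.
            self._rotate_backups(context, instance, backup_type, rotation)
            return
        self._do_snapshot_instance(context, image_id, instance, rotation)
        self._rotate_backups(context, instance, backup_type, rotation)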


On Fri, Jun 27, 2014 at 5:20 AM, Vishvananda Ishaya vishvana...@gmail.com
wrote:

 On Jun 26, 2014, at 5:07 AM, wu jiang win...@gmail.com wrote:

  Hi all,
 
  I tested the 'backup' API recently and got two questions about it:
 
  1. Why 'daily'  'weekly' appear in code comments  novaclient about
 'backup_type' parameter?
 
  The 'backup_type' parameter is only a tag for this backup(image).
  And there isn't corresponding validation for 'backup_type' about these
 two types.
 
  Moreover, there is also no periodic_task for 'backup' on the compute host.
  (It's fair to leave that choice to third-party systems.)
 
  So, why we leave 'daily | weekly' example in code comments and
 novaclient?
  IMO it may lead to the confusion that Nova will do more for a
  'daily'/'weekly' backup request.

 The tag affects the cleanup of old copies, so if you do a tag of ‘weekly’
 and
  the rotation is 3, it will ensure you only have 3 copies that are tagged
 weekly.
 You could also have 3 copies of the daily tag as well.

 
  2. Is it necessary to backup instance when 'rotation' is equal to 0?
 
  Let's look at related codes in nova/compute/manager.py:
  # def backup_instance(self, context, image_id, instance,
 backup_type, rotation):
  #
  #self._do_snapshot_instance(context, image_id, instance,
 rotation)
  #self._rotate_backups(context, instance, backup_type, rotation)
 
  I knew Nova will delete all backup images according the 'backup_type'
 parameter when 'rotation' equals to 0.
 
  But according the logic above, Nova will generate one new backup in
 _do_snapshot_instance(), and delete it in _rotate_backups()..
 
  It's weird to snapshot a useless backup firstly IMO.
  We need to add one new branch here: if 'rotation' is equal to 0, no need
 to backup, just rotate it.

 That makes sense I suppose.

 Vish

 
 
  So, what's your opinions? Look forward to your suggestion.
  Thanks.
 
  WingWJ
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] DVR SNAT shortcut

2014-06-26 Thread Wuhongning


From: Zang MingJie [zealot0...@gmail.com]
Sent: Thursday, June 26, 2014 6:29 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron] DVR SNAT shortcut

On Wed, Jun 25, 2014 at 10:29 PM, McCann, Jack jack.mcc...@hp.com wrote:
  If every compute node is
  assigned a public ip, is it technically able to improve SNAT packets
  w/o going through the network node ?

 It is technically possible to implement default SNAT at the compute node.

 One approach would be to use a single IP address per compute node as a
 default SNAT address shared by all VMs on that compute node.  While this
 optimizes for number of external IPs consumed per compute node, the downside
 is having VMs from different tenants sharing the same default SNAT IP address
 and conntrack table.  That downside may be acceptable for some deployments,
 but it is not acceptable in others.

 To resolve the problem, we are using double-SNAT,

 first, set up one namespace for each router, SNAT tenant ip ranges to
 a separate range, say 169.254.255.0/24

 then, SNAT from 169.254.255.0/24 to public network.

 We are already using this method, and saved tons of ips in our
 deployment, only one public ip is required per router agent
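
As a rough illustration of the two SNAT stages described above (rule strings
and addresses are only examples, not the actual agent code):

    INTERMEDIATE_CIDR = '169.254.255.0/24'

    def router_ns_snat_rule(tenant_cidr, intermediate_ip):
        # Stage 1, inside the per-router namespace: tenant range -> intermediate IP.
        return '-s %s -j SNAT --to-source %s' % (tenant_cidr, intermediate_ip)

    def host_snat_rule(node_public_ip):
        # Stage 2, on the host: intermediate range -> the node's single public IP.
        return '-s %s -j SNAT --to-source %s' % (INTERMEDIATE_CIDR, node_public_ip)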

Functionally it could work, but it breaks the existing normal OAM pattern,
which expects VMs from one tenant to share a public IP while sharing no IP with
other tenants. As far as I know, at least some customers don't accept this;
they find it very strange for VMs on different hosts to appear behind different
public IPs.

In fact I seriously doubt the value of distributing N-S traffic in a real
commercial production environment, including FIP. There are many things that
traditional central N-S nodes need to control: security, auditing, logging, and
so on; it is not simple forwarding. We need a tradeoff between performance and
the policy control model:

1. N-S traffic is usually much less than E-W traffic; do we really need to
distribute N-S traffic in addition to E-W traffic?
2. With NFV progress like Intel DPDK, we can build very cost-effective service
applications on commodity x86 servers (simple SNAT at roughly 10 Gbps per core
at average Internet packet length).



 Another approach would be to use a single IP address per router per compute
 node.  This avoids the multi-tenant issue mentioned above, at the cost of
 consuming more IP addresses, potentially one default SNAT IP address for each
 VM on the compute server (which is the case when every VM on the compute node
 is from a different tenant and/or using a different router).  At that point
 you might as well give each VM a floating IP.

 Hence the approach taken with the initial DVR implementation is to keep
 default SNAT as a centralized service.

 - Jack

 -Original Message-
 From: Zang MingJie [mailto:zealot0...@gmail.com]
 Sent: Wednesday, June 25, 2014 6:34 AM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [Neutron] DVR SNAT shortcut

 On Wed, Jun 25, 2014 at 5:42 PM, Yongsheng Gong gong...@unitedstack.com 
 wrote:
  Hi,
  for each compute node to have SNAT to Internet, I think we have the
  drawbacks:
  1. SNAT is done in router, so each router will have to consume one public 
  IP
  on each compute node, which is money.

 SNAT can save more ips than wasted on floating ips

  2. for each compute node to go out to Internet, the compute node will have
  one more NIC, which connect to physical switch, which is money too
 

 Floating ip also need a public NIC on br-ex. Also we can use a
 separate vlan to handle the network, so this is not a problem

  So personally, I like the design:
   floating IPs and 1:N SNAT still use current network nodes, which will have
  HA solution enabled and we can have many l3 agents to host routers. but
  normal east/west traffic across compute nodes can use DVR.

  BTW, is the HA implementation still active? I haven't seen it touched
  for a while

 
  yong sheng gong
 
 
  On Wed, Jun 25, 2014 at 4:30 PM, Zang MingJie zealot0...@gmail.com wrote:
 
  Hi:
 
  In current DVR design, SNAT is north/south direction, but packets have
  to go west/east through the network node. If every compute node is
  assigned a public ip, is it technically able to improve SNAT packets
  w/o going through the network node ?
 
  SNAT versus floating ips, can save tons of public ips, in trade of
  introducing a single failure point, and limiting the bandwidth of the
  network node. If the SNAT performance problem can be solved, I'll
  encourage people to use SNAT over floating ips. unless the VM is
  serving a public service
 
  --
  Zang MingJie
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
  ___
  OpenStack-dev mailing list
  

Re: [openstack-dev] [Neutron]One security issue about floating ip

2014-06-26 Thread Yongsheng Gong
I have reported it on neutron project
https://bugs.launchpad.net/neutron/+bug/1334926
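
A rough sketch of the l3-agent-side cleanup discussed in the quoted thread
below: flush conntrack entries for the floating IP inside the router namespace
when the IP is disassociated (helper name and invocation are illustrative):

    import subprocess

    def clear_floating_ip_conntrack(router_id, floating_ip):
        # Drop established flows addressed to the floating IP so existing
        # sessions stop working once the NAT rule is removed.
        ns = 'qrouter-%s' % router_id
        subprocess.call(['ip', 'netns', 'exec', ns,
                         'conntrack', '-D', '-d', floating_ip])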


On Fri, Jun 27, 2014 at 5:07 AM, Vishvananda Ishaya vishvana...@gmail.com
wrote:

 I missed that going in, but it appears that clean_conntrack is not done on
 disassociate, just during migration. It sounds like we should remove the
 explicit call in migrate, and just always call it from remove_floating_ip.

 Vish

 On Jun 26, 2014, at 1:48 PM, Brian Haley brian.ha...@hp.com wrote:

  Signed PGP part
  I believe nova-network does this by using 'conntrack -D -r $fixed_ip'
 when the
  floating IP goes away (search for clean_conntrack), Neutron doesn't when
 it
  removes the floating IP.  Seems like it's possible to close most of that
 gap
  in the l3-agent - when it removes the IP from its qg- interface it can
 do a
  similar operation.
 
  -Brian
 
  On 06/26/2014 03:36 PM, Vishvananda Ishaya wrote:
   I believe this will affect nova-network as well. We probably should use
   something like the linux cutter utility to kill any ongoing connections
   after we remove the nat rule.
  
   Vish
  
   On Jun 25, 2014, at 8:18 PM, Xurong Yang ido...@gmail.com wrote:
  
   Hi folks,
  
   After we create an SSH connection to a VM via its floating ip, even
   though we have removed the floating ip association, we can still
 access
   the VM via that connection. Namely, SSH is not disconnected when the
   floating ip is not valid. Any good solution about this security issue?
  
   Thanks Xurong Yang ___
   OpenStack-dev mailing list OpenStack-dev@lists.openstack.org
   http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
  
  
  
   ___ OpenStack-dev mailing
 list
OpenStack-dev@lists.openstack.org
   http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
  
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron]One security issue about floating ip

2014-06-26 Thread stanzgy
I have filed this bug on nova
https://bugs.launchpad.net/nova/+bug/1334938


On Fri, Jun 27, 2014 at 10:19 AM, Yongsheng Gong gong...@unitedstack.com
wrote:

 I have reported it on neutron project
 https://bugs.launchpad.net/neutron/+bug/1334926


 On Fri, Jun 27, 2014 at 5:07 AM, Vishvananda Ishaya vishvana...@gmail.com
  wrote:

 I missed that going in, but it appears that clean_conntrack is not done on
 disassociate, just during migration. It sounds like we should remove the
 explicit call in migrate, and just always call it from remove_floating_ip.

 Vish

 On Jun 26, 2014, at 1:48 PM, Brian Haley brian.ha...@hp.com wrote:

  Signed PGP part
  I believe nova-network does this by using 'conntrack -D -r $fixed_ip'
 when the
  floating IP goes away (search for clean_conntrack), Neutron doesn't
 when it
  removes the floating IP.  Seems like it's possible to close most of
 that gap
  in the l3-agent - when it removes the IP from its qg- interface it can
 do a
  similar operation.
 
  -Brian
 
  On 06/26/2014 03:36 PM, Vishvananda Ishaya wrote:
   I believe this will affect nova-network as well. We probably should
 use
   something like the linux cutter utility to kill any ongoing
 connections
   after we remove the nat rule.
  
   Vish
  
   On Jun 25, 2014, at 8:18 PM, Xurong Yang ido...@gmail.com wrote:
  
   Hi folks,
  
   After we create an SSH connection to a VM via its floating ip, even
   though we have removed the floating ip association, we can still
 access
   the VM via that connection. Namely, SSH is not disconnected when the
   floating ip is not valid. Any good solution about this security
 issue?
  
   Thanks Xurong Yang ___
   OpenStack-dev mailing list OpenStack-dev@lists.openstack.org
   http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
  
  
  
   ___ OpenStack-dev mailing
 list
OpenStack-dev@lists.openstack.org
   http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
  
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Best Regards,

Gengyuan Zhang
NetEase Inc.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev