Re: [openstack-dev] [all][release] ACL for library-release team for release:managed projects

2015-08-21 Thread Ben Swartzlander



On 08/21/2015 06:25 PM, Davanum Srinivas wrote:

Folks,

In the governance repo a number of libraries are marked with the
release:managed tag:

http://git.openstack.org/cgit/openstack/governance/tree/reference/projects.yaml

However, some of these libraries do not have the appropriate ACLs in the
project-config repo:

http://git.openstack.org/cgit/openstack-infra/project-config/tree/gerrit/acls/openstack

For example, a quick scan shows that the following repos are marked
release:managed but do not have the right ACLs:

python-kiteclient
python-designateclient
python-ironic-inspector-client
python-manilaclient
os-client-config
automaton
python-zaqarclient

So PTLs, please either fix the governance repo to remove
release:managed or add the appropriate ACLs in the project-config repo, as
documented in:

http://docs.openstack.org/infra/manual/creators.html#creation-of-tags
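A cross-check like the quick scan Dims describes could be sketched as a small script. The flat stanza layout and the use of the library-release group name below are assumptions for illustration; the real projects.yaml is nested YAML and should be parsed with a YAML library.

```python
def managed_without_acl(projects_yaml, acl_texts):
    """Return repos tagged release:managed whose Gerrit ACL text never
    mentions the library-release group.

    Naive text scan over a *hypothetical* flat stanza layout; a real
    check would parse the governance YAML properly.
    """
    tagged = set()
    current = None
    for line in projects_yaml.splitlines():
        stripped = line.strip()
        if line and not line[0].isspace() and stripped.endswith(":"):
            current = stripped.rstrip(":")   # new top-level repo stanza
        elif "release:managed" in line and current:
            tagged.add(current)
    # A repo is flagged when its ACL text (if any) lacks the group name.
    return sorted(repo for repo in tagged
                  if "library-release" not in acl_texts.get(repo, ""))
```

Run over checked-out copies of the governance and project-config repos, this kind of scan would produce a list like the one above.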




I'm an offender here (python-manilaclient). I'm happy to make this 
change, but how do I find out about the library-release group and the 
process for pushing tags? Currently I just do it myself, and I'm happy to 
move to a more managed method, but I'd like to know how library releases 
are managed before I make this change.


thanks,
-Ben Swartzlander



Thanks,
Dims

--
Davanum Srinivas :: https://twitter.com/dims


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




Re: [openstack-dev] [all] PTL/TC candidate workflow proposal for next elections

2015-08-21 Thread Joshua Hesketh
I'm struggling to think of a way this might help enable discussions between
nominees and voters about their platforms. Since the tooling will send out
the nomination announcements, the only real noise that is reduced is the
"nomination confirmed" type of email.

While I think this sounds really neat, I'm not convinced that it'll
actually reduce noise on the mailing list, if that was the goal. I realise
the primary goal is to help the election officials, but perhaps we can
achieve both of these with a separate mailing list for both nomination
announcements and platform discussions? This could be a first step, and
then once we have the tooling to confirm a nominee's validity we could
still automate that first announcement email.

Just a thought anyway.

Cheers,
Josh

On Sat, Aug 22, 2015 at 5:44 AM, Anita Kuno ante...@anteaya.info wrote:

 On 08/21/2015 03:37 PM, Jeremy Stanley wrote:
  On 2015-08-21 14:32:50 -0400 (-0400), Anita Kuno wrote:
  Personally I would recommend that the election officials have
  verification permissions on the proposed repo and the automation
  step is skipped to begin with as a way of expediting the repo
  creation. Getting the workflow in place in enough time that
  potential candidates can familiarize themselves with the change
  is of primary importance, I feel. Automation can happen after the
  workflow is in place.
 
  Agreed, I'm just curious what our options actually are for
  automating the confirmation research currently performed. It's
  certainly not a prerequisite for using the new repo/workflow in a
  manually-driven capacity in the meantime.
 

 Fair enough. I don't want to answer the question myself as I feel it's
 best for the response to come from current election officials.

 Thanks Jeremy,
 Anita.




Re: [openstack-dev] [requirements] modifying the 'is it packaged' test

2015-08-21 Thread Robert Collins
On 22 August 2015 at 11:50, Dave Walker em...@daviey.com wrote:
 On 22 August 2015 at 00:04, Matthew Thode prometheanf...@gentoo.org wrote:
 On 08/21/2015 05:59 PM, Robert Collins wrote:
 On 22 August 2015 at 10:57, Matthew Thode prometheanf...@gentoo.org wrote:
 Packaging for us is fairly easy, but it is annoying to have to add 5-6
 deps each release (which means we are adding cruft over time).

 We're adding functionality by bringing in existing implementations.
 Surely that's better than reinventing *everything*?

 -Rob

 totally, more of a minor annoyance :P

 A strong reason that requirements was created was to give distros a
 voice and avoid incompatible versions, which was more of a problem for
 distros than it was for each different service at that point.

So, incompatible versions with which/what packages? We guarantee
co-installability within OpenStack, but if e.g. RHEL and Ubuntu are
mutually incompatible in their versions of some package, what should
OpenStack do? That seems like a problem that is intrinsically
unsolvable for OpenStack, as it's a consistency issue between a large
and growing number of independent groups - and inevitably, in that
situation, someone loses. So they have to solve it locally - for Ubuntu
via click, for RHEL via software collections. And if it's locally
solvable, why should OpenStack consider it?

On the voice: please do get more distributors reviewing requirements changes!

 I'm not sure that a requirement has ever been not included because it
 *wasn't* packaged, but perhaps because it *couldn't* be packaged.  Is
 there an example that has caused you to raise this?

I'm a new core there and trying to make sure what we document and what
we want to happen are the same thing. 'Apply caution' doesn't mean
much to me other than to feel scared! So I'm trying to elucidate what
should be there instead.

 The is-it-packaged test was added at a time when large changes were
 happening in OpenStack right up to the (release) wire and caused scary
 changes for distros that were tracking the release. Now that feature
 development has become more mature, with the scary stuff being
 front-loaded, I'm not quite sure this is such a problem.

 The release schedule used to document a DepFreeze[0] to avoid nasty
 surprises for distros, which used to be at the same point of
 FeatureFreeze[1].  This reference seems to have been removed from the
 last few cycles, but I would suggest that it could be re-added.

As more components of OpenStack move away from big-bang releases,
DepFreeze makes less and less sense to me. The integrated release is on
the way out.

-Rob

-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud



Re: [openstack-dev] [Fuel] SSL keys saving

2015-08-21 Thread Adam Heczko
Hi Evgeniy,
what you've proposed is all right, although it adds some overhead for
certificate provisioning.
In fact, to do it right we should probably define a REST API for
provisioning certificates.
I'm rather for a simplified approach, consisting of Shell / Puppet scripts
for certificate upload/management.

Regards,

A.


On Fri, Aug 21, 2015 at 12:20 PM, Evgeniy L e...@mirantis.com wrote:

 Hi Stanislaw,

 I agree that the user's certificates mustn't be saved in the Nailgun
 database, in cluster attributes - there they can be seen in all the logs,
 which is a terrible security problem - and we already have a place where
 we keep auto-generated certificates and ssh-keys, which are copied to
 specific nodes by Astute.

 So the UI should send the file to a specific URL, Nginx should be
 configured to send an auth request to the backend, and after the request
 is authorised, Nginx should save the file into a predefined directory -
 the same one we use for auto-generated certificates - so that a tool such
 as OSTF can take certificates from a single place.

 Thanks,

 On Fri, Aug 21, 2015 at 12:10 PM, Stanislaw Bogatkin 
 sbogat...@mirantis.com wrote:

 Hi folks.

 Today I want to discuss the way we save SSL keys for Fuel environments.
 As you may know, we have two ways to get a key:
 a. Generate it by Fuel (a self-signed certificate will be created in this
 case). In this case we generate the private key, CSR and CRT in a
 pre-deployment hook on the master node and then copy the keypair to the
 nodes which need it.

 b. Get a pre-generated keypair from the user. In this case the user
 creates the keypair himself and uploads it through the Fuel UI settings
 tab. The keypair is then saved in the Nailgun database, serialized into
 astute.yaml on the cluster nodes, pulled from it by Puppet and saved into
 a file.

 The second way has some flaws:
 1. We already have some keys for nodes and we store them on the master
 node. Storing keys in different places is bad, because:
 1.1. User experience - the user has to remember that in some cases keys
 are stored on the FS and in other cases in the DB.
 1.2. It complicates implementation in several other places - for example,
 we need the certificate to properly run OSTF tests, so we have to
 implement two different ways to deliver that certificate to the OSTF
 container. The same goes for fuel-cli - we have to somehow get the
 certificate from the DB and place it on the FS to use it.
 2. astute.yaml is the same for all nodes. Not all nodes need to have the
 private key, but currently we cannot control this.
 3. If the keypair data is serialized into astute.yaml, that data is
 automatically fetched when a diagnostic snapshot is created. In some
 cases this can lead to a security vulnerability, or we would have to
 write another workaround to cut it out of the diagnostic snapshot.


 So I propose to get rid of saving the keypair in the Nailgun database and
 always save it to the local FS on the master node. We need to implement
 the following items:

 - Change the UI logic that saves the keypair into the DB to logic that
 saves it to the local FS
 - Implement the corresponding fixes in fuel-library
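The FS-saving side of this proposal could be sketched roughly as below. The base directory and file names here are hypothetical, not Fuel's actual layout:

```python
import os


def save_keypair(cluster_id, key_pem, crt_pem, base_dir="/var/lib/fuel/keys"):
    """Write an uploaded keypair to the master node's local FS (instead of
    the Nailgun DB): one directory per cluster, with the private key
    readable only by its owner.  Paths and names are illustrative only.
    """
    cluster_dir = os.path.join(base_dir, str(cluster_id))
    if not os.path.isdir(cluster_dir):
        os.makedirs(cluster_dir)
    for name, data, mode in (("service.key", key_pem, 0o600),
                             ("service.crt", crt_pem, 0o644)):
        path = os.path.join(cluster_dir, name)
        with open(path, "w") as f:
            f.write(data)
        os.chmod(path, mode)  # restrict the private key, leave the cert readable
    return cluster_dir
```

With a single well-known directory like this, consumers such as OSTF or fuel-cli would read certificates from one place regardless of whether they were auto-generated or uploaded.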








-- 
Adam Heczko
Security Engineer @ Mirantis Inc.


[openstack-dev] [Nova] 2nd PRC hackathon event finished, need your help to review patches

2015-08-21 Thread Qiao, Liyong
Hi folks

We just finished the 2nd PRC hackathon this Friday.
For the nova project we ended up with 31 patches/bugs submitted or updated,
and we have put together an etherpad link to track all the bugs/patches.
Can you kindly help to review these patches, listed at:

https://etherpad.openstack.org/p/hackathon2_nova_list

BR, Eli(Li Yong)Qiao



Re: [openstack-dev] [stable] [infra] How to auto-generate stable release notes

2015-08-21 Thread Thierry Carrez
Robert Collins wrote:
 On 19 August 2015 at 21:19, Thierry Carrez thie...@openstack.org wrote:
 Processing:
 1) determine the revisions we need to generate release notes for. By
 default generate notes for the current minor release. (E.g. if the
 tree version is 1.2.3.dev4 we would generate release notes for 1.2.0,
 1.2.1, 1.2.2, 1.2.3[which dev4 is leading up to]).

 How would that work in a post-versioned world ? What would you generate
 if the tree version is 1.2.3.post12 ?
 
 1.2.3 is still the version, not that we can use post versions at all
 with pbr. (Short story - we can't because we used them wrongly and we
 haven't had nearly enough time to flush out remaining instances in the
 wild).

Could you expand on that? It feels like I'm missing a piece of the puzzle.
Let's say we just tagged 1.2.3. The next commit is a security fix (for
which we don't know the OSSA number yet). The one after that is the
release-notes/???.yaml change which specifies the OSSA number in the
release notes. At this point we still have no idea what the next version
number will look like, since there is no tag yet. What should the
filename for ???.yaml be in that case? If you work around that by
referencing the OSSA in the changes/$ChangeID.yaml file instead, what
(if any) work-in-progress .md file does that end up generating?

If we want to serve partial release notes for people consuming the
stable branch between tag points, for repositories using post-versioning
we have to produce some next.md or in-progress.md since we can't
guess what the next version will actually be.
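The default range computation Robert describes (a tree version of 1.2.3.dev4 yielding notes for 1.2.0 through 1.2.3) could be sketched as follows; real tooling would use the pbr/packaging version parsers rather than this bare regex:

```python
import re


def notes_range(tree_version):
    """Given a pbr-style dev version such as '1.2.3.dev4', list the
    minor-series releases to collect release notes for (1.2.0 .. 1.2.3).
    Sketch only: assumes a plain X.Y.Z[.devN] shape.
    """
    match = re.match(r"(\d+)\.(\d+)\.(\d+)", tree_version)
    if match is None:
        raise ValueError("unparseable version: %s" % tree_version)
    major, minor, patch = (int(g) for g in match.groups())
    # Every patch release of the current minor series, up to and
    # including the one the dev version is leading up to.
    return ["%d.%d.%d" % (major, minor, p) for p in range(patch + 1)]
```

For a post-versioned tree there is no ".devN" suffix pointing at a future tag, which is exactly the ambiguity discussed above: the best such a tool can do is emit a "next"/"in-progress" document.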

 [...]
 If we want to put release notes in sdists, we can have pbr do this,
 otherwise it could just be built entirely separately.

 I think we need to put release notes in sdists, so that people consuming
 stable branches from a random commit can still get work-in-progress
  release notes for the upcoming version.
 
 Those two things are disconnected. Consuming a random commit doesn't
 imply sdist - nor does it preclude it.
 
  We don't *currently* generate release notes in sdists. What's the
 driver for adding it? [perhaps as a use case so we can flush out
 hidden assumptions we have]

The whole idea behind moving away from coordinated stable branch
releases is to let people consume the stable branch at any point in
time. The original plan was to stop releasing (tagging) stable branches
completely. People replied they still needed release notes so that
they have upgrade warnings and other bits of information about what they
are getting. It's inconvenient to continue using wiki pages to achieve
that, so we moved to in-git maintenance of release notes. And rather
than force consumers to generate them from the tree, they can
conveniently find them in any stable source code tarball we publish
(including intermediary ones like $project-stable-kilo.tar.gz).

Since then, replying to another concern about common downstream
reference points, we moved to tagging everything, and then, after
Clark's pollution remark, to tagging from time to time. That doesn't
remove the need to *conveniently* ship the best release notes we can
with every commit. Including them in every code tarball (and relying on
well-known python sdist commands to generate them for those consuming
the git tree directly) sounded like the most accessible way to do it,
which the related thread on the Ops ML confirmed. But then I'm (and
maybe they are) still open to alternative suggestions...

Hope this clarifies,

-- 
Thierry Carrez (ttx)



[openstack-dev] [neutron][dvr] DVR L2 agent is removing the br-int OVS flows

2015-08-21 Thread Korzeniewski, Artur
Hi all,
After merging the graceful ovs-agent restart [1] (great work BTW!), I'm 
seeing a place in the DVR L2 agent code where flows on br-int are removed in 
the old style:

File: neutron/plugins/ml2/drivers/openvswitch/agent/ovs_dvr_neutron_agent.py

    def setup_dvr_flows_on_integ_br(self):
        '''Setup up initial dvr flows into br-int'''
        if not self.in_distributed_mode():
            return

        LOG.info(_LI("L2 Agent operating in DVR Mode with MAC %s"),
                 self.dvr_mac_address)
        # Remove existing flows in integration bridge
        self.int_br.delete_flows()

This is kind of a bummer given the effort to preserve the flows in [1].
It should not affect VM network access, since br-tun is configured 
properly and br-int is in learning mode.

Should this be fixed in the Liberty cycle?

This is something similar to submitted bug: 
https://bugs.launchpad.net/neutron/+bug/1436156

[1] https://bugs.launchpad.net/neutron/+bug/1383674

Regards,
Artur Korzeniewski

Intel Technology Poland sp. z o.o.
KRS 101882
ul. Slowackiego 173, 80-298 Gdansk



Re: [openstack-dev] [Fuel] SSL keys saving

2015-08-21 Thread Evgeniy L
Hi Adam,

I'm not sure I understand you correctly - what do you mean by overhead for
certificate provisioning? We already have all the mechanisms in place to
provision certificates; the point is that currently we handle the user's
certificates in a completely different way and store them in a completely
different place, and this leads to huge problems.

Thanks,

On Fri, Aug 21, 2015 at 1:33 PM, Adam Heczko ahec...@mirantis.com wrote:

 Hi Evgeniy,
 what you've proposed is all right, although it adds some overhead for
 certificate provisioning.
 In fact, to do it right we should probably define REST API for
 provisioning certificates.
 I'm rather for simplified approach, consisting of Shell / Puppet scripts
 for certificate upload/management.

 Regards,

 A.


 [...]




 --
 Adam Heczko
 Security Engineer @ Mirantis Inc.





Re: [openstack-dev] [Fuel] SSL keys saving

2015-08-21 Thread Adam Heczko
Sorry, I understood incorrectly - using HTTP/Web IMO usually adds some
overhead if it has to be designed in from the beginning.
If there are HTTP authentication/CSR request/key management mechanisms
already in place, of course there is no overhead.

Regards,

A.

On Fri, Aug 21, 2015 at 12:43 PM, Evgeniy L e...@mirantis.com wrote:

 Hi Adam,

 I'm not sure if understand you correctly, what do you mean by overhead
 for
 certificate provisioning? We already have all the mechanisms in place in
 order
 to provision certificates, the point is currently with user's
 certificates we work in
 absolutely different way and store them in absolutely different place.
 And this
 way leads to huge problems.

 Thanks,

 [...]




Re: [openstack-dev] [stable] [infra] How to auto-generate stable release notes

2015-08-21 Thread Thierry Carrez
Kuvaja, Erno wrote:
 [...]
 We try to have python-glanceclient and glance_store including release notes 
 upon the release time. We use in tree doc/source/index.rst for ease of 
 access. This provides our release notes through: 
 docs.openstack.org/developer/python-glanceclient/ and you can easily follow 
 up stable branches via git: 
 https://github.com/openstack/python-glanceclient/blob/stable/kilo/doc/source/index.rst
 
 I've been trying to push the mentality into our community that, as the last 
 thing before release, we merge the release notes update and tag that. When it 
 comes to stable, I think it's worth adding release notes to the backport 
 workflow.
 [...]
 It would be extremely great if we did not need to overcomplicate the workflow 
 with the release notes. They are not nuclear science, and let's not try to 
 make it complicated enough to need a doctorate to understand how to a) add 
 them, b) correct/fix them, and c) troubleshoot the generation _when_ 
 something breaks in that workflow.

The main issue with maintaining release notes as a document in tree is
that it requires a stable release manager to produce the release. One
of the goals of the change we are pushing here is to not require stable
release managers anymore, since nobody wants to do that job.

Robert's proposal solves the issue of merge conflicts, which allows us to
distribute the role of curating release notes. It makes it a duty of the
backporter rather than a duty of the stable release manager. It also
lets us have best-effort intermediary release notes available for those
consuming between tagged commits.

However, I agree that with the "tag now and then" approach we are
dangerously close to requiring stable release managers again (if only to
make the conscious choice to release). And if we do, directly curating a
release notes doc in the tree is simpler than relying on a directory
structure to produce them.

So I guess we are left with a choice:

- abandon the idea of not requiring stable branch release managers,
require stable branch liaisons in each project to tag stable branch
releases for their project from time to time, and ask them to curate a
release notes document in the tree before they do so

- use Robert's system to continuously produce release notes, and have
lightweight triggers for tags from time to time (after a CVE fix, at
someone's request, after a number of commits without a release, after a
period of time without a release) that do not formally require a stable
release manager or put additional stress on existing stable branch
liaisons.
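The lightweight triggers in that second option could be sketched as a simple predicate. The thresholds below are made-up illustrations, not project policy:

```python
def should_tag(commits_since_tag, days_since_tag, has_cve_fix,
               requested=False, max_commits=30, max_days=60):
    """Lightweight stable-branch tag trigger: tag after a CVE fix, on
    someone's request, or once enough commits or enough time have
    accumulated since the last tag.  Thresholds are illustrative only."""
    return bool(has_cve_fix or requested
                or commits_since_tag >= max_commits
                or days_since_tag >= max_days)
```

A periodic job evaluating such a predicate per branch would remove the need for anyone to make a conscious per-release decision.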

I think I prefer the second option, because it has potential for
providing a better experience for people consuming stable branches
between tagged releases.

-- 
Thierry Carrez (ttx)



Re: [openstack-dev] [Fuel][Keystone][mod_wsgi] A summary for the A/A HA issues had been fixed

2015-08-21 Thread Bogdan Dobrelya
On 21.08.2015 15:57, Bogdan Dobrelya wrote:
 Our deployment automation teams have been addressing the critical issues
 that arose for HA A/A Keystone after the switch to mod_wsgi, and the
 related backends' failure modes.
 We ran numerous rally tests against different configuration layouts for
 the Apache2 MPM worker versus Keystone, and HAProxy backend configs versus
 the deprecated eventlet case. Raw results in [0] - no pretty view, sorry.
 
 We drew a conclusion based on the test results [1], and there are a couple
 of bug fixes on review [2], [3]. Another related bug fix was merged for
 the Apache 2 control plane [4]. This reduces the number of undesired main
 Apache process restarts at the deploy stage and provides better
 availability for the Keystone HAProxy backends.
 
 As a result, the situation looks stable now and there are no issues left
 for Keystone under mod_wsgi. Thank you to all who participated for the
 hard work done.
 
 [0] https://goo.gl/Hi25QG
 [1] https://review.openstack.org/212439
 [2] https://review.openstack.org/209589
 [3] https://review.openstack.org/209924
 

Fixed the links

[0] https://goo.gl/Hi25QG
[1] https://etherpad.openstack.org/p/keystone_wsgi_measuring
[2] https://review.openstack.org/212439
[3] https://review.openstack.org/209589
[4] https://review.openstack.org/209924

-- 
Best regards,
Bogdan Dobrelya,
Irc #bogdando



Re: [openstack-dev] [nova] feedback from the ops mid cycle summit

2015-08-21 Thread Andrew Laski

On 08/20/15 at 05:01pm, melanie witt wrote:

Hi Everyone,

I attended the ops mid cycle summit on Tuesday and Wednesday and here are brief 
notes on the feedback I heard related to nova. Please feel free to add your 
comments or correct me if I got anything wrong.


Large Deployments Session [1]:

- There's a Neutron spec [2] for adding the capability to model an L3 network 
which is composed of L2 networks that are routed together, and this project 
will require cooperation from the nova side
- Cells v2 not coming along as quickly as expected. Cells v1 issues around 
compat between versions, understood it's not supported but it's been a problem
- Hierarchical multi-tenancy isn't yet supported (quotas)


Upgrades Session Report:
- Good linking of features to documentation is important
- Inter-service changes are important to call out
- Flavor migration is an example of something done well


Other general notes:
- Event capture is a choice between two bad options
- Information divided between events and logs. Have to capture both or you lose 
the whole picture
- Hard to trace RPC calls
- Race conditions with scheduling and quotas
- The state of Nova and NUMA is not understood
- Glance v2 is not being used. From what I understand, we can't move to it 
because images created by v1 can't be read by v2, for example?


There was a bug around that, though I don't know the details.  Another 
issue is that v2 doesn't support changes-since which means we can't drop 
it behind the Nova API and maintain backwards compatibility.





All of the etherpads from the event are linked here: 
https://etherpad.openstack.org/p/PAO-ops-meetup


Thanks,
-melanie (irc: melwitt)


[1] https://etherpad.openstack.org/p/PAO-ops-large-deployments
[2] https://review.openstack.org/#/c/196812/






__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Cross-project meeting times

2015-08-21 Thread Thierry Carrez
gord chung wrote:
 maybe we should shift more emphasis to ML? working with non-native
 English speaking companies, i know they are very interested in
 participating but considering the oft hectic pace of the 'live'
 meetings, they tend to be viewers as they can't get their ideas into
 English fast enough.

Well, arguably we already do have cross-project topics discussed on the
ML. Most of the topics raised there are coming from a ML thread and/or
reviews. The trick is that sometimes:

- people ignore the thread. When the topic is raised at the
cross-project meeting they suddenly realize they missed it and have an
opinion on it

- the thread stalls without a clear next action. Talking live about it
is a great way to either check for consensus or give it a second life

- the thread goes nowhere. Factions talk past each other and no
consensus is within sight. Having less latency for a few exchanges
before going back to the thread is useful.

This is why for complex discussions I advocate for lasagna-style
progress: layers of ML threads with meaty IRC meetings in between.

Maybe we should be more explicit about that: require ML discussion
before putting a topic up, and escalate to meeting only in one of the 3
above cases. Then the meeting slot time is more like a booked empty slot
(or set of slots) in everyone's calendar where we can have direct
discussions on topics that otherwise don't seem to go anywhere on the ML.

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stable] [infra] How to auto-generate stable release notes

2015-08-21 Thread Kuvaja, Erno
 -Original Message-
 From: Dave Walker [mailto:em...@daviey.com]
 Sent: 21 August 2015 12:43
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [stable] [infra] How to auto-generate stable
 release notes
 
 On 21 August 2015 at 11:38, Thierry Carrez thie...@openstack.org wrote:
 SNIP
  Since then, replying to another concern about common downstream
  reference points, we moved to tagging everything, then replying to
  Clark's pollution remark, to tag from time to time. That doesn't
  remove the need to *conveniently* ship the best release notes we can
  with every commit. Including them in every code tarball (and relying
  on well-known python sdist commands to generate them for those
  consuming the git tree directly) sounded like the most accessible way
  to do it, which the related thread on the Ops ML confirmed. But then
  I'm (and maybe they are) still open to alternative suggestions...
 
 This is probably a good entry point for my ACTION item from the cross-
 project meeting:
 
 I disagree that time-to-time tagging makes sense in what we are trying to
 achieve.  I believe we are in agreement that we want to move away from
 co-ordinated releases and treat each commit as an accessible release.
 Therefore, tagging each project at arbitrary times introduces snowflake
 releases, rather than the importance being on each commit being a release.
 
 I agree that this would take away the 'co-ordinated' part of the release, but
 still requires release management of each project (unless the time to time
 is automated), which we are not sure that each project will commit to.
 
 If we are treating each commit as a release, maybe we should just bite
 the bullet and enlarge the ref tag list.  I've not done a comparison of what
 this would look like, but I believe it to be rare that people look at the list
 anyway.  Throwing in a | grep -v ^$RELEASE*, and it becomes as usable as
 before.  We could also expunge the tags after the release is no longer
 supported by upstream.
 
 In my mind, we are then truly treating each commit as a release AND we
 benefit from not needing hacky tooling to fake this.
 
 --
 Kind Regards,
 Dave Walker
 

I do not like the time-to-time tagging either, but I don't think it's a 
totally horrible situation. Let's say we tag every even-week Wednesday and 
in the event of an OSSA.

The big problem with every commit being a release in stable is that lots of 
tooling around git really doesn't care whether the reference is a branch or a 
tag in branch X. Say I can't remember how I named the branch I'm working on 
and I do `git checkout <tab><tab>`: there is a difference if that list 
suddenly runs to hundreds rather than dozens. So yes, some level of 
deprecation period to clean up those old tags would be great at the point we 
stop support for certain branches.

I do realize that I'm not the git guru, so if there is a really simple way 
to configure that, please let me know and ignore the above. ;)
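A minimal sketch of the kind of clean-up described above: filtering per-commit release tags out of a tag listing and selecting expired ones for deletion once their branch is unsupported. The `release-<branch>-<n>` naming scheme here is an assumption for illustration only — no such convention has been agreed.

```python
# Sketch only: assumes hypothetical per-commit tags named like
# "release-<branch>-<n>", e.g. "release-stable/kilo-10".

def split_tags(tags, prefix="release-"):
    """Separate per-commit release tags from ordinary tags."""
    releases = [t for t in tags if t.startswith(prefix)]
    others = [t for t in tags if not t.startswith(prefix)]
    return releases, others

def expired_tags(release_tags, supported_branches, prefix="release-"):
    """Tags whose branch is no longer supported upstream, i.e.
    candidates for deletion with `git tag -d` / `git push --delete`."""
    expired = []
    for tag in release_tags:
        branch = tag[len(prefix):].rsplit("-", 1)[0]
        if branch not in supported_branches:
            expired.append(tag)
    return expired

tags = ["2015.1.1", "release-stable/kilo-10", "release-stable/icehouse-7"]
releases, others = split_tags(tags)
print(others)                                   # ['2015.1.1']
print(expired_tags(releases, {"stable/kilo"}))  # ['release-stable/icehouse-7']
```

The same filtering could back the `| grep -v` idea above for humans browsing the list.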

- Erno
 __
 
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: OpenStack-dev-
 requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all] PTL/TC candidate workflow proposal for next elections

2015-08-21 Thread Tristan Cacqueray
Hello folks,

As discussed previously, we'd like to improve the elections workflow
using Gerrit:

* A new repository to manage elections: openstack/election
* Candidates submit their candidacy through a file as a CR, e.g.:
  sept-2015-ptl/project_name-candidate_name
* A check job verifies that the candidate is valid (is an ATC and a
  contributor to the project)
* Elections officials +2 the review
* Once merged, a post job could publish the candidacy to openstack-dev@
  and to the wiki.


Automated jobs would be great, but the first iteration could be managed
using manual tools.

While this workflow doesn't tackle actual elections (using CIVS), it
should already greatly help elections officials.

Thoughts?


Tristan





Re: [openstack-dev] [Manila] Contract of ShareDriver.deny_access

2015-08-21 Thread Ben Swartzlander

[Resending my response as unknown forces ate my original message]

On 08/20/2015 08:30 AM, Bjorn Schuberg wrote:

Hello everyone,

this is my first thread on this mailing list, and I would like to take 
the opportunity to say that it was great to see you all at the 
midcycle, even if remote.


Now, to my question: I've been looking into an issue that arises when 
deleting access to a share and then, moments after, deleting the 
same share. The delete fails due to a race in 
`_remove_share_access_rules` in the `ShareManager`, which attempts to 
delete all granted permissions on the share before removing it, but 
one of the access permissions is concurrently deleted due to the first 
API call; see:

https://github.com/openstack/manila/blob/master/manila/share/manager.py#L600

I think an acceptable fix to this would be to wrap the `_deny_access` 
call with a `try`... `except` block, and log any attempts to remove 
non-existing permissions. The problem is that there seems to be no 
contract on the exception to throw in case you attempt to delete an 
`access` which does not exist -- each driver behaves differently.


This got my attention after running the tempest integration tests, 
where the teardown /sometimes/ fails in 
tempest.api.share.test_rules:ShareIpRulesForNFSTest.


Any thoughts on this? Perhaps there is a smoother approach that I'm 
not seeing.


This is a good point. I'm actually interested in pursuing a deeper 
overhaul of the allow/deny access logic for Mitaka which will make 
access rules less error prone in my opinion. I'm open to short term bug 
fixes in Liberty for problems like the one you mention, but I'm already 
planning a session in Tokyo about a new share access driver interface. 
The reason it has to wait until Mitaka is that all of the drivers will 
need to change their logic to accommodate the new method.


My thinking on access rules is that the driver interface which adds and 
removes rules one at a time is too fragile, and assumes too much about 
what backends are capable of supporting. I see the following problems 
(in addition to the one you mention):
* If addition or deletion of a rule fails for any reason, the set of 
rules on the backend starts to differ from what the user intended and 
there is no way to go back and correct the problem.
* If backends aren't able to implement rules exactly as Manila expects, 
(perhaps a backend does not support nested subnets with identical rules) 
then there are situations where a certain set of user actions will be 
guaranteed to result in broken rules. Consider (1) add rw access to 
10.0.0.0/8 (2) Add rw access to 10.10.0.0/16 (3) Remove rw access to 
10.0.0.0/8 (4) Try to access the share from 10.10.10.10. If the rule in 
step (2) was silently ignored by the backend (it was redundant at the time 
it was added) then step (4) will fail, even though it shouldn't.
* The current mechanism doesn't allow making multiple changes atomically 
-- changes have to be sequential. This will cause problems if we want to 
allow access rules to be defined externally (something which was 
discussed during Juno and is still desirable) because changes to access 
rules may come in batches.


My proposal is simple. Rather than making drivers implement 
allow_access() and deny_access(), driver should implement a single 
set_access() which gets passed a list of all the access rules. Drivers 
would be required to compare the list of rules passed in from the 
manager to the list of rules on the storage controller and make changes 
as appropriate. For some drivers this would be more work but for other 
drivers it would be less work. Overall I think it's a better design. We 
can probably implement some kind of backwards compatibility to avoid 
breaking drivers during the migration to the new interface.
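As an illustration of that design, a set_access()-style driver could diff the full rule list passed in by the manager against what the storage controller reports and converge on it. The rule dicts and the diff helper below are invented for this sketch — they are not the Manila driver API.

```python
# Rough sketch of the "compare and converge" half of a set_access()
# interface. Rule format and helper names are hypothetical.

def plan_access_changes(desired_rules, current_rules):
    """Compute which rules to add and which to remove so the backend
    ends up with exactly the rules Manila wants."""
    desired = {(r["access_type"], r["access_to"], r["access_level"])
               for r in desired_rules}
    current = {(r["access_type"], r["access_to"], r["access_level"])
               for r in current_rules}
    to_add = desired - current
    to_remove = current - desired
    return to_add, to_remove

desired = [{"access_type": "ip", "access_to": "10.10.0.0/16",
            "access_level": "rw"}]
current = [{"access_type": "ip", "access_to": "10.0.0.0/8",
            "access_level": "rw"}]
to_add, to_remove = plan_access_changes(desired, current)
# to_add holds the /16 rule, to_remove the stale /8 rule; a driver would
# then apply both sets, atomically where the backend allows it.
```

Because the full desired list is always available, a failed change can simply be retried on the next call, which addresses the drift problem above.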


It's not something I intend to push for in Liberty however.

-Ben



Cheers,
Björn


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance] [nova] Image Prefetching – Precaching

2015-08-21 Thread Alberto Geniola
Hi John,

First of all thank you for the tip about tags.

In order to understand what extra APIs are needed to implement this
mechanism, I think we need to focus on what should be done by this
component.

In my opinion, what we need on NOVA-COMPUTE side would be:
1) List all images prefetched on each nova-compute node
2) Trigger an image prefetch on a particular node
3) Remove an image from the cache

The rest of the mechanism, in my opinion, is a Horizon plugin that interacts
with Glance and calls appropriate APIs above.

In order to avoid duplication on the compute nodes, I think that images
might be cached into /var/lib/nova/instances/_base/ , so they are
immediately ready to be used by nova-compute.
Given that, if the user inhibits, via the conf file, the deletion of images
after one day of non-use, this mechanism may be very helpful.

In other words, by implementing the APIs listed above in nova, alongside
developing a simple Horizon plugin, we should be able to provide this
interesting feature.

Do you see any problem with this approach?
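To make the cache bookkeeping concrete, here is a toy sketch of the kind of per-node state the three APIs above would manage — list, prefetch, and idle eviction. The class and its names are invented for illustration, not a proposed Nova interface, and the actual image download is elided.

```python
import time

# Toy model of a per-node image cache, purely illustrative.
class ImageCache:
    def __init__(self, max_idle_seconds):
        self.max_idle = max_idle_seconds
        self._entries = {}  # image_id -> last-used timestamp

    def prefetch(self, image_id, now=None):
        """Record an image as cached (fetching it is elided here)."""
        self._entries[image_id] = now if now is not None else time.time()

    def list_cached(self):
        return sorted(self._entries)

    def evict_idle(self, now=None):
        """Drop images unused longer than max_idle -- the knob the conf
        file would control, per the paragraph above."""
        now = now if now is not None else time.time()
        stale = [i for i, t in self._entries.items()
                 if now - t > self.max_idle]
        for image_id in stale:
            del self._entries[image_id]
        return stale

cache = ImageCache(max_idle_seconds=86400)    # 1 day
cache.prefetch("img-a", now=0)
cache.prefetch("img-b", now=90000)
print(cache.list_cached())          # ['img-a', 'img-b']
print(cache.evict_idle(now=90001))  # img-a is >1 day idle -> ['img-a']
```

Keeping the entries keyed on the paths under /var/lib/nova/instances/_base/ would let nova-compute answer the "list cached" call directly from this state.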




On Thu, Aug 20, 2015 at 12:47 PM, John Garbutt j...@johngarbutt.com wrote:

 On 19 August 2015 at 19:30, Alberto Geniola albertogeni...@gmail.com
 wrote:
 
  Hi everyone,
 
 
  This is my first email to the OS developer forum, so forgive me if I
 misplaced the subject tags :)

 Welcome!

 For future posts, I hope this helps make it easier:
 https://wiki.openstack.org/wiki/MailingListEtiquette


  Straight to the point: for a project we’re involved in, we think that a
 pre-fetcher mechanism would be great for a variety of use cases. There was
 an attempt with this blueprint:
 
 
 https://blueprints.launchpad.net/nova/+spec/nova-image-cache-management-2
 
  and a more recent one:
 
  https://blueprints.launchpad.net/python-novaclient/+spec/prefetch-image
 
  although both seem to be dead now.
 
  So I really want to get feedback from the developer’s community on
 whether (1) such a feature makes sense in general, and (2) whether it may
 be worth integrating such a component into the OpenStack code. In fact,
 although we can solve our problem by developing an in-house component, it
 would be better to have it integrated in OpenStack, including Nova and
 Horizon, so I need the feedback from the OS Guru guys :)
 
 
  What do you think?

 I think it does make some sense.

 The disagreement in the past is about agreeing what should live inside
 Nova, and what should live outside Nova.

 For me, I am OK having an API to trigger an image prefetch on a
 particular host (although in many cases doing a build on that host has
 the same effect).

 I think the rest of that mechanism should probably live outside of
 Nova and Glance. So the question is what extra API are required to
 create such a tool.

 Thanks,
 johnthetubaguy

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Dott. Alberto Geniola

  albertogeni...@gmail.com
  +39-346-6271105
  https://www.linkedin.com/in/albertogeniola

Web: http://www.hw4u.it
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Cross-project meeting times

2015-08-21 Thread gord chung



On 21/08/2015 4:58 AM, Thierry Carrez wrote:

Anne Gentle wrote:

Hi all,

In last week's TC Highlights blog post [1] I asked if there is interest
in moving the cross-project meeting. Historically it is held after the
TC meeting, but there isn't a requirement for those timings to line up.
I've heard from European and Eastern Standard Time contributors that
it's a tough time to meet half the year. It's also a bit early for APAC,
my apologies for noting this but still proposing to meet earlier.

I'd like to propose a new cross-project meeting time, 1800 Tuesdays. To
that end I've created a review with the proposed time:

https://review.openstack.org/214605

Please take a look, see if you think it could work, and let us know
either on this list or the review itself.

Commented on the review... I think 1800 UTC is not significantly more
convenient for Europeans (dinner hours between 1700 and 1900 UTC)
compared to 2100 UTC. It makes it more convenient for East-of-Moscow
Russians, but we lose Australia in the process.

If we are to lose Australia anyway, I would move even earlier (say 15:00
or 16:00 UTC) and cover China - US West. That could be a good rotation
with the one at 21:00 UTC which covers Australia - West Europe.



+1 for even earlier. it requires west coast Americas to wake up early 
but the biggest time gap is the Pacific. that said, it's entirely 
possible most of those participating are from west coast...


maybe we should shift more emphasis to ML? working with non-native 
English speaking companies, i know they are very interested in 
participating but considering the oft hectic pace of the 'live' 
meetings, they tend to be viewers as they can't get their ideas into 
English fast enough.


cheers,

--
gord


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [kuryr] [neutron] libnetwork endpoint to Neutron abstractions

2015-08-21 Thread Antoni Segura Puimedon
Hi list,

I was reviewing the CreateEndpoint patch[1] from Taku that had received
positive reviews. I put some comments about an alternative way to map
endpoints to nets and subnets and I would appreciate some discussion here
on the mailing list about the original proposal and the alternative I
mentioned.

Regards,

Toni


=
[1] https://review.openstack.org/#/c/210052/9//COMMIT_MSG
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Fuel][Keystone][mod_wsgi] A summary for the A/A HA issues had been fixed

2015-08-21 Thread Bogdan Dobrelya
Our deployment automation teams have been addressing the critical issues
that arose for HA A/A Keystone switched to mod_wsgi, and the related
backends' failure modes.
We ran numerous rally tests against different configuration layouts for
Apache2 MPM worker versus Keystone and HAProxy backend config versus
deprecated eventlet case. Raw results [0] - no pretty view, sorry.

We drew a conclusion based on the test results [1] and there are a couple
of bug fixes on review [2], [3]. Another related bug fix was merged for the
apache 2 control plane [4]. This reduces the number of undesired main
Apache process restarts at the deploy stage and provides better
availability for the keystone HAProxy backends.

As a result, the situation looks stable now and there are no issues left
for Keystone under mod_wsgi. Thank you to all who participated for the
hard work done.

[0] https://goo.gl/Hi25QG
[1] https://review.openstack.org/212439
[2] https://review.openstack.org/209589
[3] https://review.openstack.org/209924

-- 
Best regards,
Bogdan Dobrelya,
Irc #bogdando

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] PTL/TC candidate workflow proposal for next elections

2015-08-21 Thread Jeremy Stanley
On 2015-08-21 14:20:00 + (+), Tristan Cacqueray wrote:
[...]
 * A check job verifies that the candidate is valid (is an ATC and
   a contributor to the project)
[...]
 Automated jobs would be great, but the first iteration could be
 managed using manual tools.
[...]

Yep, the tricky bit here is in automating the confirmation. What are
election officials normally doing to manually accomplish this?
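One building block for automating that confirmation could be a query against the Gerrit REST API for merged changes owned by the candidate in the project (e.g. `GET /changes/?q=owner:<email>+project:<repo>+status:merged`). One real wrinkle worth encoding: Gerrit prefixes its JSON responses with a `)]}'` guard line that must be stripped before decoding. The query URL above is an assumption about how officials would use it; only the response parsing is shown here.

```python
import json

GERRIT_MAGIC_PREFIX = ")]}'"

def parse_gerrit_json(body):
    """Gerrit REST responses start with a )]}' line to defeat XSSI;
    strip it before decoding the JSON payload."""
    if body.startswith(GERRIT_MAGIC_PREFIX):
        body = body[len(GERRIT_MAGIC_PREFIX):]
    return json.loads(body)

def has_merged_change(body):
    """True if the response body lists at least one merged change
    owned by the candidate in the project."""
    return len(parse_gerrit_json(body)) > 0

sample = ')]}\'\n[{"_number": 123, "status": "MERGED"}]'
print(has_merged_change(sample))  # True
```

A check job could run this against the candidate's email and the project named in the candidacy file, and fail the review if no merged change is found.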
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][dvr] DVR L2 agent is removing the br-int OVS flows

2015-08-21 Thread Anna Kamyshnikova
I pushed a change for that case https://review.openstack.org/215596.

On Fri, Aug 21, 2015 at 2:45 PM, Anna Kamyshnikova 
akamyshnik...@mirantis.com wrote:

 Hi, Artur!

 Thanks for bringing this up! I missed that. I will push a change for
 that shortly.

 On Fri, Aug 21, 2015 at 1:35 PM, Korzeniewski, Artur 
 artur.korzeniew...@intel.com wrote:

 Hi all,

 After merging the “Graceful ovs-agent restart”[1] (great work BTW!), I’m
 seeing a place in the DVR L2 agent code where flows on br-int are removed
 in the old style:



 File:
 /neutron/plugins/ml2/drivers/openvswitch/agent/ovs_dvr_neutron_agent.py

     def setup_dvr_flows_on_integ_br(self):
         '''Setup up initial dvr flows into br-int'''
         if not self.in_distributed_mode():
             return

         LOG.info(_LI("L2 Agent operating in DVR Mode with MAC %s"),
                  self.dvr_mac_address)
         # Remove existing flows in integration bridge
         self.int_br.delete_flows()



 This is kind of a bummer given the effort made to preserve the flows
 in [1].

 This should not affect VM network access, since the br-tun is configured
 properly and br-int is in learning mode.



 Should this be fixed in Liberty cycle?



 This is something similar to submitted bug:
 https://bugs.launchpad.net/neutron/+bug/1436156



 [1] https://bugs.launchpad.net/neutron/+bug/1383674



 Regards,

 Artur Korzeniewski

 

 Intel Technology Poland sp. z o.o.

 KRS 101882

 ul. Slowackiego 173, 80-298 Gdansk



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Regards,
 Ann Kamyshnikova
 Mirantis, Inc




-- 
Regards,
Ann Kamyshnikova
Mirantis, Inc
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Magnum] Consistent functional test failures (seems infra not have enough resource)

2015-08-21 Thread Jeremy Stanley
On 2015-08-21 01:10:22 + (+), Steven Dake (stdake) wrote:
[...]
 How large is /opt?
[...]

It appears at the moment HP Cloud gives us a 30GiB root filesystem
(vda1) and a 0.5TiB ephemeral disk (vdb). Rackspace on the other
hand provides a 40GB root filesystem (xvda1) and 80GB ephemeral disk
(xvde). If your jobs are using devstack-gate, have a look at
fix_disk_layout() in functions.sh for details on how we repartition,
format and mount ephemeral disks. If your job is not based on
devstack-gate, then you should be able to implement some similar
routines to duplicate this.
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Manila] Contract of ShareDriver.deny_access

2015-08-21 Thread Thomas Bechtold
Hi Björn,

On Thu, 2015-08-20 at 14:30 +0200, Bjorn Schuberg wrote:
 Hello everyone,
 
 this is my first thread on this mailing list, and I would like to
 take the opportunity to say that it was great to see you all at the
 midcycle, even if remote.

Yeah. It was a nice meetup!

 Now, to my question: I've been looking into an issue that arises when
 deleting access to a share and then, moments after, deleting the
 same share. The delete fails due to a race in
 `_remove_share_access_rules` in the `ShareManager`, which attempts to
 delete all granted permissions on the share before removing it, but
 one of the access permissions is concurrently deleted due to the
 first API call, see;
 https://github.com/openstack/manila/blob/master/manila/share/manager.
 py#L600

Can you please file a bug report? https://bugs.launchpad.net/manila . 


TIA

Tom

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stable] [infra] How to auto-generate stable release notes

2015-08-21 Thread Thierry Carrez
Dave Walker wrote:
 On 21 August 2015 at 11:38, Thierry Carrez thie...@openstack.org wrote:
 SNIP
 Since then, replying to another concern about common downstream
 reference points, we moved to tagging everything, then replying to
 Clark's pollution remark, to tag from time to time. That doesn't
 remove the need to *conveniently* ship the best release notes we can
 with every commit. Including them in every code tarball (and relying on
 well-known python sdist commands to generate them for those consuming
 the git tree directly) sounded like the most accessible way to do it,
 which the related thread on the Ops ML confirmed. But then I'm (and
 maybe they are) still open to alternative suggestions...
 
 This is probably a good entry point for my ACTION item from the
 cross-project meeting:
 
 I disagree that time-to-time tagging makes sense in what we are
 trying to achieve.  I believe we are in agreement that we want to move
 away from co-ordinated releases and treat each commit as an accessible
 release.  Therefore, tagging each project at arbitrary times
 introduces snowflake releases, rather than the importance being on
 each commit being a release.
 
 I agree that this would take away the 'co-ordinated' part of the
 release, but still requires release management of each project (unless
 the time to time is automated), which we are not sure that each
 project will commit to.

Thanks for this. I agree that time-to-time is far from being a perfect
solution. The question is more, is it better or worse than the other
solution (tag-every-commit). To summarize:

Tag-every-commit:
(+) Conveys clearly that every commit is consumable
(-) Current tooling doesn't support this, we need to write something
(-) Zillions of tags will make the tag ref space a bit unusable for humans

Time to time tagging:
(+) Aligned with how we do releases everywhere else
(-) Makes some commits special
(-) Making a release still requires someone to care

Missing anything ?

 If we are treating each commit to be a release, maybe we should just
 bite the bullet and enlarge the ref tag length.  I've not done a
 comparison of what this would look like, but I believe it to be rare
 that people look at the list anyway.  Throwing in a | grep -v
 ^$RELEASE*, and it becomes as usable as before.  We could also
 expunge the tags after the release is no longer supported by upstream.
 
 In my mind, we are then truly treating each commit as a release AND we
 benefit from not needing hacky tooling to fake this.

What hacky tooling are you thinking about here? If anything,
time-to-time tagging reuses the release process we have for everything
else, so it doesn't require any additional tooling. It's tag-every-commit
that requires some hackery to happen.

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] contextlib.nested and Python3 failing

2015-08-21 Thread Kevin L. Mitchell
On Wed, 2015-08-19 at 16:51 -0700, Sylvain Bauza wrote:
 I was writing some tests so I added a contextlib.nested to a checked 
 TestCase [1]. Unfortunately, contextlib.nested is no longer available in 
 Python3 and there is no clear solution on how to provide a compatible 
 import for both python2 and python3:
   - either providing a python3 compatible behaviour by using 
 contextlib.ExitStack but that class is not available in Python 2
   - or provide contextlib2 for python2 (and thus adding it to the 
 requirements)

Actually, there should no longer be a need to use contextlib.nested.
We've explicitly dropped Python 2.6 compatibility, which means we're
expecting compatibility with Python 2.7+ only, and as of Python 2.7, the
'with' statement supports accepting multiple 'as' clauses.  The
contextlib.nested tool was really only necessary to work around that
functionality being missing in Python 2.6, and has been deprecated as of
Python 2.7 because it's no longer necessary.
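In other words, where old code used contextlib.nested, the plain `with` statement now does the same job on Python 2.7+ and 3.x. A quick illustration, using a trivial context manager in place of the real ones a test would use:

```python
from contextlib import contextmanager

@contextmanager
def managed(name, log):
    # Record enter/exit order so the nesting behaviour is visible.
    log.append("enter " + name)
    yield name
    log.append("exit " + name)

log = []
# Python 2.6 needed: with contextlib.nested(managed(...), managed(...)):
# Python 2.7+ and 3.x accept multiple managers in one statement:
with managed("a", log) as a, managed("b", log) as b:
    log.append("body " + a + b)

print(log)  # ['enter a', 'enter b', 'body ab', 'exit b', 'exit a']
```

The managers nest left to right exactly as contextlib.nested did, so a mechanical rewrite of the tests is usually enough.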
-- 
Kevin L. Mitchell kevin.mitch...@rackspace.com
Rackspace


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] PTL/TC candidate workflow proposal for next elections

2015-08-21 Thread Thierry Carrez
Tristan Cacqueray wrote:
 Hello folks,
 
 as discussed previously, we'd like to improve elections workflow using
 gerrit:
 
 * A new repository to manage elections: openstack/election
 * Candidates submit their candidacy through a file as a CR, e.g.:
   sept-2015-ptl/project_name-candidate_name
  * A check job verifies that the candidate is valid (is an ATC and
    a contributor to the project)
 * Elections officials +2 the review
 * Once merged, a post jobs could publish the candidacy to openstack-dev@
   and to the wiki.
 
 
 Automated jobs would be great, but the first iteration could be managed
 using manual tools.
 
 While this workflow doesn't tackle actual elections (using CIVS), it
 should already greatly help elections officials.

Sounds way more reliable (and less noisy) than (ab)using the ML to
achieve the same result.

+1

-- 
Thierry Carrez (ttx)





Re: [openstack-dev] [Neutron][bgpvpn] Service Plugin vs Service driver

2015-08-21 Thread Jan Scheurich
Hi all,

I am in favor of not going for a least-common-denominator approach with the 
bgpvpn API. The API should cover the use cases commonly acknowledged as
useful and supported by at least one of the existing back-ends,
with the aim of having the various back-ends grow in support coverage.

Unsupported features of the API could either be rejected by drivers, or
a fallback behavior can be specified at the API level in case a specific 
non-vital attribute is not supported by a backend (e.g. in the case of the RD).

My preference would be to stick to the provider framework and to allow
most backend-specific drivers to profit from the boilerplate code in the 
service plugin.

BTW: The ODL plugin is planned to support both Network and Router 
association and the RD attribute will not be a mandatory attribute for
the ODL back-end.

Regards, 
Jan


Mathieu Rohon Wed, 19 Aug 2015 06:46:45 -0700 
Hi,

thanks for your reply irena and salvatore.

Currently, we're targeting 4 backends: bagpipe (the ref implementation,
compatible with other ref implementations of neutron), ODL, contrail and
nuage.
Contrail and bagpipe work with networks attachments to a bgpvpn connection,
while ODL and Nuage work with router attachments. We have even started
thinking about port attachments [1].
Moreover, ODL needs a RD attribute that won't be supported by other
backends.

I think that each backend should be able to manage each kind of attachment
in the future, depending on the will of the backend dev team. But as a
first step, we have to manage the capabilities of each backend.

So, indeed, managing attachments to a bgpvpn connection through the
use of extensions will expose backend capabilities. And I agree that it's
not the right way, since when moving from one cloud to another, the API
will change depending on the backend.

So I see two ways to solve this issue:
1. In the first releases, backends that don't support a feature will throw
a NotImplementedError exception when the feature is called through the
API. We still have an inconsistent API, but hopefully this will be
temporary.
2. Reduce the scope of the spec [2], and accept fewer compatible backends
and a smaller community for the bgpvpn project.

[1]https://blueprints.launchpad.net/bgpvpn/+spec/port-association
[2]https://review.openstack.org/#/c/177740/

regards,

Mathieu
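Mathieu's first option could be sketched roughly as below. This is only an
illustration of the "reject unsupported features" idea from the thread; the
class, method names, and capability tuple are hypothetical, not actual
networking-bgpvpn code:

```python
class BGPVPNDriverBase(object):
    """Hypothetical base driver: each backend declares its capabilities."""

    # Attachment types this backend supports: 'network', 'router', 'port'.
    supported_associations = ()

    def associate(self, kind, bgpvpn_id, resource_id):
        # Option 1 from the thread: an unsupported attachment type is
        # rejected with NotImplementedError until the backend catches up.
        if kind not in self.supported_associations:
            raise NotImplementedError(
                "%s association is not supported by this backend" % kind)
        return self._do_associate(kind, bgpvpn_id, resource_id)

    def _do_associate(self, kind, bgpvpn_id, resource_id):
        # Backend-specific work (MP-BGP signalling, REST calls, ...) would
        # go here; this stub just echoes its arguments.
        return (kind, bgpvpn_id, resource_id)


class RouterOnlyDriver(BGPVPNDriverBase):
    # A backend that, like some of those discussed, only handles
    # network and router associations, not ports.
    supported_associations = ('network', 'router')
```

The API stays uniform across clouds; only the set of accepted inputs varies
until backends converge.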




[openstack-dev] [Resend] [api] Who owns API versioning and deprecation policy?

2015-08-21 Thread Geoff Arnold
After reading the following pages, it’s unclear what the current API 
deprecation policy is and who owns it. (The first spec implies that a change 
took place in May 2015, but is silent on what and why.) Any hints? An 
authoritative doc would be useful, something other than an IRC log or mailing 
list reference.

Geoff

http://specs.openstack.org/openstack/api-wg/guidelines/evaluating_api_changes.html

https://wiki.openstack.org/wiki/API_Working_Group

https://wiki.openstack.org/wiki/Application_Ecosystem_Working_Group




Re: [openstack-dev] [requirements] modifying the 'is it packaged' test

2015-08-21 Thread Doug Hellmann
Excerpts from Robert Collins's message of 2015-08-20 15:24:03 +1200:
 We currently have a test where we ask if things are packaged in
 distros. 
 http://git.openstack.org/cgit/openstack/requirements/tree/README.rst#n268
 
 I think we should modify that, in two ways.
 
 The explanation for the question ignores a fairly large audience of
 deployers who don't wait for distributions - so they too need to
 package things, but unlike distributions packaging stuff is itself
 incidental to their business, rather than being it. So I think we
 should consider their needs too.
 
 Secondly, all the cases of this I've seen so far we've essentially
 gone 'sure, fine'. I think that's because there's really nothing to
 them.
 
 So I think the test should actually be something like:
 Apply caution if it is not packaged AND packaging it is hard.
 Things that make packaging a Python package hard:
  - nonstandard build systems
  - C dependencies that aren't already packaged
  - unusual licences
 
 E.g. things which are easy, either because they can just use existing
 dependencies, or they're pure python, we shouldn't worry about.
 
 -Rob
 

I think this interpretation is fine. It's more or less what I've been
doing anyway.

Is it safe to assume that if a package is available on PyPI and can be
installed with pip, packaging it for a distro isn't technically
difficult? (It might be difficult due to vendoring, licensing, or some
other issue that would be harder to test for.)

Doug
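Robert's list of "things that make packaging hard" could be approximated
mechanically when reviewing a new requirement. A rough, illustrative sketch
(the heuristics, function name, and extension list are my own, not part of
the requirements tooling; unusual licences would still need a human eye):

```python
import os

# File extensions that signal C sources needing compilation, one of the
# "hard to package" signs from the thread.
C_EXTENSIONS = ('.c', '.cc', '.cpp', '.pyx')


def packaging_warning_signs(sdist_dir):
    """Return reasons an unpacked sdist may be hard to package, if any."""
    reasons = []
    has_setup = False
    for _root, _dirs, files in os.walk(sdist_dir):
        for name in files:
            if name == 'setup.py':
                has_setup = True
            if name.endswith(C_EXTENSIONS):
                reasons.append('C sources present: %s' % name)
    if not has_setup:
        # No setup.py at all suggests a nonstandard build system.
        reasons.append('no setup.py: nonstandard build system?')
    return reasons
```

A pure-Python sdist with a standard setup.py returns an empty list, matching
the "sure, fine" cases the thread describes.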



Re: [openstack-dev] [Resend] [api] Who owns API versioning and deprecation policy?

2015-08-21 Thread Everett Toews
On Aug 21, 2015, at 3:13 PM, Geoff Arnold ge...@geoffarnold.com wrote:

 After reading the following pages, it’s unclear what the current API 
 deprecation policy is and who owns it. (The first spec implies that a change 
 took place in May 2015, but is silent on what and why.) Any hints? An 
 authoritative doc would be useful, something other than an IRC log or mailing 
 list reference.
 
 Geoff
 
 http://specs.openstack.org/openstack/api-wg/guidelines/evaluating_api_changes.html
 
 https://wiki.openstack.org/wiki/API_Working_Group
 
 https://wiki.openstack.org/wiki/Application_Ecosystem_Working_Group

The API Working Group does. 

Guidelines for microversioning [1] and when to bump a microversion [2] are 
currently in review. Naturally your feedback is welcome.

We have yet to provide guidance on deprecation. If you’d like to create a 
guideline on deprecation, here’s How to Contribute [3]. If you want to throw 
some ideas around we’re in #openstack-api or feel free to drop by one of our 
meetings [4].

Everett

[1] https://review.openstack.org/#/c/187112/
[2] https://review.openstack.org/#/c/187896/
[3] https://wiki.openstack.org/wiki/API_Working_Group#How_to_Contribute
[4] https://wiki.openstack.org/wiki/Meetings/API-WG




Re: [openstack-dev] [all] PTL/TC candidate workflow proposal for next elections

2015-08-21 Thread Jeremy Stanley
On 2015-08-21 14:32:50 -0400 (-0400), Anita Kuno wrote:
 Personally I would recommend that the election officials have
 verification permissions on the proposed repo and the automation
 step is skipped to begin with as a way of expediting the repo
 creation. Getting the workflow in place in enough time that
 potential candidates can familiarize themselves with the change,
 is of primary importance I feel. Automation can happen after the
 workflow is in place.

Agreed, I'm just curious what our options actually are for
automating the confirmation research currently performed. It's
certainly not a prerequisite for using the new repo/workflow in a
manually-driven capacity in the meantime.
-- 
Jeremy Stanley



Re: [openstack-dev] [Horizon] Update on Angular Identity work

2015-08-21 Thread Thai Q Tran
Hi Doug,

I think your point is valid, but it would basically move the point of conflict
from the HTML page to the controller. You could alleviate that problem by
having services (a service for headers, a service for table batch actions,
etc.) that could then follow something similar to the angular workflow plugin
pattern we discussed at the midcycle:
https://blueprints.launchpad.net/horizon/+spec/angular-workflow-plugin
(Lin, this is how angular does inheritance.)

We would also need to follow up this work by enhancing some of the existing
directives. Let's take the action-list directive as an example. Currently,
you have to list those actions out manually, so we would have to enhance it
by allowing users to add in their own JSON list and have the directive render
the full content. Theoretically, that should get us to a point where you can
extend actions, workflows, and possibly even columns.

Hi Lin,

Basically, the problem that I am seeing is: we are trading semantic
readability, customizability, and ease of LEARNING vs extensibility,
complexity, and ease of USE. I agree that we should set a solid example
before we let the flood gates open. This is a great discussion; now I'm more
resolved to find a pattern that could give us more without the tradeoffs. We
have a great community with many smart folks, I'm sure we'll figure something
out.

-----Lin Hua Cheng os.lch...@gmail.com wrote:-----
To: "OpenStack Development Mailing List (not for usage questions)"
openstack-dev@lists.openstack.org
From: Lin Hua Cheng os.lch...@gmail.com
Date: 08/20/2015 09:45PM
Subject: Re: [openstack-dev] [Horizon] Update on Angular Identity work

Hi Thai,

From your example, option 1 seems closer to the *current pattern*, not
option 2. :) There the user defines a list of actions separately from the
table presentation (HTML template) rather than embedding it in the HTML.
And if the user wants to extend it, they just add it to the list of
columns/actions on the table class.

Option 1 seems better to me because I find it closer to the current pattern.
As long as we can reduce the duplicate code (not having to write 9 files to
create one table), I'm good with that. :)

My main concern is really to polish the initial table implementation first,
before folks jump into implementing the tables in all the other panels, so
we can avoid re-work. I don't want another cycle of clean-up/refactoring. :)

I think we already have 2 angular tables out, which should be enough data to
figure out what duplicate code can be abstracted out based on those two
implementations.

-Lin

On Thu, Aug 20, 2015 at 4:36 PM, Doug Fish the.doug.f...@gmail.com wrote:

It appears to me that option 1 would be better prepared to be extensible...
That is, if a plugin needed to add an action or a column, we could make that
happen with pattern 1 (possibly after adding in a service). I'm not sure how
plugins would ever add these things with pattern 2.

On Aug 20, 2015, at 1:41 PM, "Thai Q Tran" tqt...@us.ibm.com wrote:

Hi Lin,

Let me draw on some examples to help clarify what I mean.

Option 1:

table.controller.js
-------------------
ctrl.headers = { gettext('column 1'), gettext('column 2') };
ctrl.noItemMessage = gettext('You no longer have any items in your table. You either lack the sufficient privileges or your search criteria is not valid');
ctrl.batchActionList = [
    { name: 'create', onclick: somefunction, etc },
    { name: 'delete', onclick: somefunction, etc }
];
ctrl.rowActionList = [
    { name: 'edit', onclick: somefunction, etc },
    { name: 'delete', onclick: somefunction, etc }
];

table.html
----------
<div ng-controller="table.controller.js as ctrl">
  <horizon-table
    headers="ctrl.headers"
    batch-actions="ctrl.batchActionList"
    row-actions="ctrl.rowActionList">
  </horizon-table>
</div>

So now your controller is polluted with presentation and translation logic.
In addition, we will have to live with long gettext messages and add eslint
ignore rules just to pass it. The flip side is that you do have a simple
directive that points to a common template sitting somewhere. It is not that
much "easier" than the example below. What we're really doing is defining
the same presentation logic, but in the HTML instead. Lastly, I'll bring up
customization again because many products are going to want to customize
their tables. They may be the minority, but that doesn't mean we shouldn't
support them.

Option 2:

table.html
----------
<table ng-controller="table.controller.js as ctrl">
  <thead>
    <tr>
      <action-list>
        <action callback="someFunc" translate>Create</action>
        <action callback="someFunc" translate>Delete</action>
      </action-list>
    </tr>
    <tr>
      <th translate>Column 1</th>
      <th translate>Column 2</th>
    </tr>
  </thead>
  <tbody>
    <tr ng-repeat="items in ctrl.items">
      <td></td>
      <td>
        <action-list>
          <action callback="someFunc" translate>Edit</action>
          <action callback="someFunc" translate>Delete</action>
        </action-list>
      </td>
    </tr>
  </tbody>
</table>

Here, your table.controller.js worries ONLY about data and data
manipulation. The presentation logic all resides in the HTML. If I want to
add icons in the table header, I can do

Re: [openstack-dev] [all] PTL/TC candidate workflow proposal for next elections

2015-08-21 Thread Anita Kuno
On 08/21/2015 03:37 PM, Jeremy Stanley wrote:
 On 2015-08-21 14:32:50 -0400 (-0400), Anita Kuno wrote:
 Personally I would recommend that the election officials have
 verification permissions on the proposed repo and the automation
 step is skipped to begin with as a way of expediting the repo
 creation. Getting the workflow in place in enough time that
 potential candidates can familiarize themselves with the change,
 is of primary importance I feel. Automation can happen after the
 workflow is in place.
 
 Agreed, I'm just curious what our options actually are for
 automating the confirmation research currently performed. It's
 certainly not a prerequisite for using the new repo/workflow in a
 manually-driven capacity in the meantime.
 

Fair enough. I don't want to answer the question myself as I feel it's
best for the response to come from current election officials.

Thanks Jeremy,
Anita.



Re: [openstack-dev] [nova] feedback from the ops mid cycle summit

2015-08-21 Thread Anita Kuno
On 08/21/2015 10:24 AM, Andrew Laski wrote:
 On 08/20/15 at 05:01pm, melanie witt wrote:
 Hi Everyone,

 I attended the ops mid cycle summit on Tuesday and Wednesday and here
 are brief notes on the feedback I heard related to nova. Please feel
 free to add your comments or correct me if I got anything wrong.


 Large Deployments Session [1]:

 - There's a Neutron spec [2] for adding the capability to model an L3
 network which is composed of L2 networks that are routed together, and
 this project will require cooperation from the nova side

Review of the current spec brought out the recognition that further design work is
necessary. Discussion arrived at the hope of having a small number of
people design an L3 object (bringing input from usecases of others) at
an upcoming face to face event (possibly summit). Carl Baldwin is
deciding how best to organize this and mentioned the possibility of a
post to the mailing list (I'm assuming -dev).

Thanks for the summary Melanie,
Anita.

 - Cells v2 not coming along as quickly as expected. Cells v1 issues
 around compat between versions, understood it's not supported but it's
 been a problem
 - Hierarchical multi-tenancy isn't yet supported (quotas)


 Upgrades Session Report:
 - Good linking of features to documentation is important
 - Inter-service changes are important to call out
 - Flavor migration is an example of something done well


 Other general notes:
 - Event capture is a choice between two bad options
 - Information divided between events and logs. Have to capture both or
 you lose the whole picture
 - Hard to trace RPC calls
 - Race conditions with scheduling and quotas
 - The state of Nova and NUMA is not understood
 - Glance v2 is not being used. From what I understand, we can't move
 to it because images created by v1 can't be read by v2, for example?
 
 There was a bug around that, though I don't know the details.  Another
 issue is that v2 doesn't support changes-since which means we can't drop
 it behind the Nova API and maintain backwards compatibility.
 


 All of the etherpads from the event are linked here:
 https://etherpad.openstack.org/p/PAO-ops-meetup


 Thanks,
 -melanie (irc: melwitt)


 [1] https://etherpad.openstack.org/p/PAO-ops-large-deployments
 [2] https://review.openstack.org/#/c/196812/

 
 
 
 
 




Re: [openstack-dev] [Neutron] netaddr and abbreviated CIDR format

2015-08-21 Thread Jay Pipes

On 08/21/2015 02:34 PM, Sean M. Collins wrote:

So - the tl;dr is that I don't think that we should accept inputs like
the following:

x   - 192
x/y - 10/8
x.x/y   - 192.168/16
x.x.x/y - 192.168.0/24

which are equivalent to::

x.0.0.0/y   - 192.0.0.0/24
x.0.0.0/y   - 10.0.0.0/8
x.x.0.0/y   - 192.168.0.0/16
x.x.x.0/y   - 192.168.0.0/24


Agreed completely.

Best,
-jay



Re: [openstack-dev] [cinder] Brocade CI

2015-08-21 Thread Angela Smith
Mike,
I wanted to update you on our progress on the Brocade CI.
We are currently working on the remaining requirements: adding recheck
support and adding a link to the wiki page for failed results.
Also, the CI has been consistently testing and reporting on all cinder
reviews for the past 3 days.
Thanks,
Angela

From: Nagendra Jaladanki [mailto:nagendra.jalada...@gmail.com]
Sent: Thursday, August 13, 2015 4:59 PM
To: OpenStack Development Mailing List (not for usage questions)
Cc: DL-GRP-ENG-Brocade-Openstack-CI
Subject: Re: [openstack-dev] [cinder] Brocade CI

Ramy,
Thanks for providing the correct message. We will update our commit message 
accordingly.
Thanks,
Nagendra Rao

On Thu, Aug 13, 2015 at 4:43 PM, Asselin, Ramy ramy.asse...@hp.com wrote:
Hi Nagendra,

Seems one of the issues is the format of the posted comments. The correct 
format is documented here [1]

Notice the format is not correct:
Incorrect: Brocade Openstack CI (non-voting) build SUCCESS logs at: 
http://144.49.208.28:8000/build_logs/2015-08-13_18-19-19/
Correct: * test-name-no-spaces http://link.to/result : [SUCCESS|FAILURE] some 
comment about the test

Ramy

[1] 
http://docs.openstack.org/infra/system-config/third_party.html#posting-result-to-gerrit
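The documented result format can be enforced mechanically before a CI system
posts its comment. A small sketch, assuming the line shape shown in the infra
manual's example above (the regex and function names are my own, not code
from any existing CI system):

```python
import re

# One result line per test, following the infra manual's documented shape:
#   * test-name-no-spaces http://link.to/result : [SUCCESS|FAILURE] comment
RESULT_LINE = re.compile(
    r'^\* \S+ https?://\S+ : (SUCCESS|FAILURE)( .*)?$')


def format_result(test_name, log_url, success, comment=''):
    """Build a Gerrit result line in the documented format."""
    status = 'SUCCESS' if success else 'FAILURE'
    return ('* %s %s : %s %s' % (test_name, log_url, status, comment)).rstrip()


def is_valid_result_line(line):
    """Check a candidate comment line against the documented format."""
    return RESULT_LINE.match(line) is not None
```

Validating each line this way before posting would have caught the free-form
"build SUCCESS logs at:" style comment above.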

From: Nagendra Jaladanki [mailto:nagendra.jalada...@gmail.com]
Sent: Wednesday, August 12, 2015 4:37 PM
To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Cc: brocade-openstack...@brocade.com
Subject: Re: [openstack-dev] [cinder] Brocade CI

Mike,

Thanks for your feedback and suggestions. I sent my response yesterday, but it
looks like it didn't get posted to lists.openstack.org. Hence I am posting it
here again.

We reviewed your comments; the following issues were identified. Some of them
are fixed, and fix plans are in progress for the others:

1) Not posting success or failure
 The Brocade CI is a non-voting CI. It is posting comments for build
success or failure, but the report tool is not seeing these. We are working on
correcting this.
2) Not posting a result link to view logs.
   We could not find any cases where the CI failed to post the link to logs in
the generated report. If you have any specific cases where it failed to post
the logs link, please share them with us. But we did see that the CI did not
post a comment at all for some review patch sets. We are root-causing why the
CI did not post a comment at all.
3) Not consistently doing runs.
   There were planned down times, and the CI did not post during those periods.
We also observed that the CI was not posting failures in some cases where it
failed due to non-OpenStack issues. We corrected this. Now the CI should be
posting the results for all patch sets, either success or failure.
We are also doing the following:
- Enhance the message format to be in line with other CIs.
- Closely monitor the incoming Jenkins requests vs. outgoing builds and
correct any issues.

Once again thanks for your feedback and suggestions. We will continue to post 
this list on the updates.

Thanks & Regards,

Nagendra Rao Jaladanki

Manager, Software Engineering Manageability Brocade

130 Holger Way, San Jose, CA 95134

On Sun, Aug 9, 2015 at 5:34 PM, Mike Perez thin...@gmail.com wrote:
People have asked me at the Cinder midcycle sprint to look at the Brocade CI
to:

1) Keep the zone manager driver in Liberty.
2) Consider approving additional specs that we're submitted before the
   deadline.

Here are the current problems with the last 100 runs [1]:

1) Not posting success or failure.
2) Not posting a result link to view logs.
3) Not consistently doing runs. If you compare with other CI's there are plenty
   missing in a day.

This CI does not follow the guidelines [2]. Please get help [3].

[1] - http://paste.openstack.org/show/412316/
[2] - 
http://docs.openstack.org/infra/system-config/third_party.html#requirements
[3] - https://wiki.openstack.org/wiki/Cinder/tested-3rdParty-drivers#Questions

--
Mike Perez





Re: [openstack-dev] [nova][cinder] Extending attached disks

2015-08-21 Thread Walter A. Boring IV
This isn't as simple as making calls to virsh after an attached volume 
is extended on the cinder backend, especially when multipath is involved.
You need the host system to understand that the volume has changed size 
first, or virsh will really never see it.


For iSCSI/FC volumes you need to issue a rescan on the bus (iSCSI 
session, FC fabric),  and then when multipath is involved, it gets quite 
a bit more complex.


This leads to one of the sticking points with doing this at all: when
cinder extends the volume, it needs to tell nova that it has happened, and
then nova (or something on the compute node) will have to issue the correct
commands in sequence for it all to work.


You'll also have to consider multi-attached volumes as well, which adds 
yet another wrinkle.


A good quick source of some of the commands and procedures that are 
needed you can see here:

https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/5/html/Online_Storage_Reconfiguration_Guide/online-logical-units.html


You can see that volumes with multipath require a lot of hand holding to be
done correctly. It's non-trivial. I see this as being very error prone, and
any failure in the multipath process could lead to big problems :(

Walt

Hi everyone,

Apologies for the duplicate send; it looks like my mail client doesn't create
very clean HTML messages. Here is the message in plain text. I'll make sure to
send to the list in plain text from now on.

In my current pre-production deployment we were looking for a method to live 
extend attached volumes to an instance. This was one of the requirements for 
deployment. I've worked with libvirt hypervisors before so it didn't take long 
to find a workable solution. However I'm not sure how transferable this will be 
across deployment models. Our deployment model is using libvirt for nova and 
ceph for backend storage. This means obviously libvirt is using rbd to connect 
to volumes.

Currently the method I use is:

- Force cinder to run an extend operation.
- Tell Libvirt that the attached disk has been extended.

It would be worth discussing if this can be ported to upstream such that the 
API can handle the leg work, rather than this current manual method.

Detailed instructions.
You will need: volume-id of volume you want to resize, hypervisor_hostname and 
instance_name from instance volume is attached to.

Example: extending volume f9fa66ab-b29a-40f6-b4f4-e9c64a155738 attached to 
instance-0012 on node-6 to 100GB

$ cinder reset-state --state available f9fa66ab-b29a-40f6-b4f4-e9c64a155738
$ cinder extend f9fa66ab-b29a-40f6-b4f4-e9c64a155738 100
$ cinder reset-state --state in-use f9fa66ab-b29a-40f6-b4f4-e9c64a155738

$ssh node-6
node-6$ virsh qemu-monitor-command instance-0012 --hmp info block | grep 
f9fa66ab-b29a-40f6-b4f4-e9c64a155738
drive-virtio-disk1: removable=0 io-status=ok 
file=rbd:volumes-slow/volume-f9fa66ab-b29a-40f6-b4f4-e9c64a155738:id=cinder:key=keyhere==:auth_supported=cephx\\;none:mon_host=10.1.226.64\\:6789\\;10.1.226.65\\:6789\\;10.1.226.66\\:6789
 ro=0 drv=raw encrypted=0 bps=0 bps_rd=0 bps_wr=0 iops=0 iops_rd=0 iops_wr=0

This will get you the disk-id, which in this case is drive-virtio-disk1.

node-6$ virsh qemu-monitor-command instance-0012 --hmp block_resize 
drive-virtio-disk1 100G

Finally, you need to perform a drive rescan on the actual instance and resize 
and extend the file-system. This will be OS specific.

I've tested this a few times and it seems very reliable.
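The manual steps above lend themselves to a small wrapper. A sketch that only
assembles the command lines shown in the procedure (argument names are
hypothetical, and a real tool would need error handling, SSH to the
hypervisor host, and the multipath caveats Walt raises):

```python
def live_extend_commands(volume_id, instance_name, drive, new_size_gb):
    """Build the command sequence for the live-extend procedure above.

    Returns a list of argv lists; the virsh command must be run on the
    hypervisor host where instance_name lives.
    """
    return [
        # Work around cinder refusing to extend an in-use volume.
        ['cinder', 'reset-state', '--state', 'available', volume_id],
        ['cinder', 'extend', volume_id, str(new_size_gb)],
        ['cinder', 'reset-state', '--state', 'in-use', volume_id],
        # Tell libvirt/qemu the attached disk has grown (run on the host).
        ['virsh', 'qemu-monitor-command', instance_name, '--hmp',
         'block_resize', drive, '%dG' % new_size_gb],
    ]
```

The guest-side rescan and filesystem extension remain OS-specific and are
left to the operator, as in the original procedure.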

Taylor Bertie
Enterprise Support Infrastructure Engineer

Mobile +64 27 952 3949
Phone +64 4 462 5030
Email taylor.ber...@solnet.co.nz

Solnet Solutions Limited
Level 12, Solnet House
70 The Terrace, Wellington 6011
PO Box 397, Wellington 6140

www.solnet.co.nz

Attention:
This email may contain information intended for the sole use of
the original recipient. Please respect this when sharing or
disclosing this email's contents with any third party. If you
believe you have received this email in error, please delete it
and notify the sender or postmas...@solnetsolutions.co.nz as
soon as possible. The content of this email does not necessarily
reflect the views of Solnet Solutions Ltd.







Re: [openstack-dev] [all] PTL/TC candidate workflow proposal for next elections

2015-08-21 Thread Anita Kuno
On 08/21/2015 11:27 AM, Jeremy Stanley wrote:
 On 2015-08-21 14:20:00 + (+), Tristan Cacqueray wrote:
 [...]
 * A check job verifies if the candidate is valid (has ATC and
   contributor to the project)
 [...]
 Automated jobs would be great, but the first iteration could be
 managed using manual tools.
 [...]
 
 Yep, the tricky bit here is in automating the confirmation. What are
 election officials normally doing to manually accomplish this?
 

Personally I would recommend that the election officials have
verification permissions on the proposed repo and the automation step is
skipped to begin with as a way of expediting the repo creation. Getting
the workflow in place in enough time that potential candidates can
familiarize themselves with the change, is of primary importance I feel.
Automation can happen after the workflow is in place.

Thanks,
Anita.





[openstack-dev] [Neutron] netaddr and abbreviated CIDR format

2015-08-21 Thread Sean M. Collins
[Resending - since I don't think my mail client actually sent this the
first time]


While reviewing https://review.openstack.org/#/c/204459/ - I noticed
that one of the unit tests is passing an IP address 1/32 - so I went
and looked up the constructor for netaddr.IPNetwork, which has a feature that 
expands a string into a prefix.

http://pythonhosted.org//netaddr/api.html?highlight=abbreviated%20cidr#ip-networks-and-subnets

Putting it into my REPL:

http://paste.openstack.org/show/421041/


So - is this an actual IP address? I could be horribly wrong, but it
doesn't look like one to me - especially since built-in tools like ping
don't appear to like it.

scollins@Sean-Collins-MBPr15 ~ » ping 1/32
ping: cannot resolve 1/32: Unknown host

Although, ping has its own interesting behavior.

scollins@Sean-Collins-MBPr15 ~ » ping 1 
68 ↵
PING 1 (0.0.0.1): 56 data bytes
ping: sendto: No route to host
ping: sendto: No route to host
Request timeout for icmp_seq 0
^C
--- 1 ping statistics ---
2 packets transmitted, 0 packets received, 100.0% packet loss
scollins@Sean-Collins-MBPr15 ~ » ping 60
 2 ↵
PING 60 (0.0.0.60): 56 data bytes
ping: sendto: No route to host
ping: sendto: No route to host
Request timeout for icmp_seq 0
^C
--- 60 ping statistics ---
2 packets transmitted, 0 packets received, 100.0% packet loss


Oh, also spelunking through the code of Netaddr, it looks like this
option is going to be deprecated?

https://github.com/drkjam/netaddr/blob/bfba0b80c2e88b6e00ca7a870998b630d7c29734/netaddr/ip/__init__.py#L776

Which calls into:

https://github.com/drkjam/netaddr/blob/bfba0b80c2e88b6e00ca7a870998b630d7c29734/netaddr/ip/__init__.py#L1438

So - the tl;dr is that I don't think that we should accept inputs like
the following:

x   - 192
x/y - 10/8
x.x/y   - 192.168/16
x.x.x/y - 192.168.0/24

which are equivalent to::

x.0.0.0/y   - 192.0.0.0/24
x.0.0.0/y   - 10.0.0.0/8
x.x.0.0/y   - 192.168.0.0/16
x.x.x.0/y   - 192.168.0.0/24
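One way to reject the abbreviated forms is to validate the string shape
before handing it to netaddr at all. A rough sketch (this helper is my own
illustration, not Neutron code) that insists on four explicit octets:

```python
import re

# Require a full dotted quad, optionally followed by /prefixlen, so that
# abbreviated inputs like '192.168/16' or '1/32' are rejected before
# netaddr's classful expansion logic ever sees them.
STRICT_CIDR = re.compile(r'^\d{1,3}(\.\d{1,3}){3}(/\d{1,2})?$')


def is_strict_ipv4_cidr(value):
    """Accept only fully-spelled-out IPv4 addresses/CIDRs."""
    if not STRICT_CIDR.match(value):
        return False
    octets = value.split('/')[0].split('.')
    if any(int(o) > 255 for o in octets):
        return False
    if '/' in value and int(value.split('/')[1]) > 32:
        return False
    return True
```

Anything passing this check can then be given to netaddr.IPNetwork without
relying on its abbreviated-CIDR expansion.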

-- 
Sean M. Collins



Re: [openstack-dev] [Neutron][SR-IOV]How to assign VF to a VM?

2015-08-21 Thread Beliveau, Ludovic
I believe you are getting this problem because you are trying to use SR-IOV 
over a flat network, which is not supported.

From the logs:

2015-08-21 04:29:44.619 9644 DEBUG neutron.plugins.ml2.managers 
[req-314733e3-17ab-4e20-951a-0c75744016b5 ] Attempting to bind port 
620187c5-b4ac-4aca-bdeb-96205503344d on host compute for vnic_type direct with 
profile {pci_slot: :09:11.5, physical_network: external, 
pci_vendor_info: 8086:1520} bind_port 
/usr/lib/python2.7/site-packages/neutron/plugins/ml2/managers.py:611
2015-08-21 04:29:44.619 9644 DEBUG neutron.plugins.ml2.managers 
[req-314733e3-17ab-4e20-951a-0c75744016b5 ] Attempting to bind port 
620187c5-b4ac-4aca-bdeb-96205503344d on host compute at level 0 using segments 
[{'segmentation_id': None, 'physical_network': u'external', 'id': 
u'f3dee69f-ee4a-4c1b-bfa9-05689dc9b07b', 'network_type': u'flat'}] 
_bind_port_level 
/usr/lib/python2.7/site-packages/neutron/plugins/ml2/managers.py:628

You should try to use a segmented/VLAN based network.

Regards,
/ludovic
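For reference, a VLAN-based setup means the ML2 VLAN type driver must know
about the physical network the SR-IOV NICs sit on. An illustrative fragment
only; the physnet name, interface, and VLAN range below are placeholders and
must match the SR-IOV agent's physical_device_mappings:

```ini
# /etc/neutron/plugins/ml2/ml2_conf.ini (illustrative values)
[ml2]
type_drivers = vlan,flat
tenant_network_types = vlan

[ml2_type_vlan]
# "physnet1" must match the name used in the SR-IOV agent's
# physical_device_mappings (e.g. physnet1:eth1).
network_vlan_ranges = physnet1:100:200
```

The network would then be created with an explicit VLAN segment, e.g.
"neutron net-create sriov-net --provider:network_type vlan
--provider:physical_network physnet1 --provider:segmentation_id 100".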

From: Moshe Levi [mailto:mosh...@mellanox.com]
Sent: Friday, August 21, 2015 5:15 AM
To: OpenStack Development Mailing List (not for usage questions); 
openstack-operators
Subject: Re: [openstack-dev] [Neutron][SR-IOV]How to assign VF to a VM?

The problem is that the sriov mechanism driver failed to bind the port.

From the log I see that you are working with agent_required=True, but the device 
mapping is empty {u'devices': 0, u'device_mappings': {}
Please check the agent configuration file see that you have the following
[securitygroup]
firewall_driver = neutron.agent.firewall.NoopFirewallDriver
[sriov_nic]
physical_device_mappings = physnet1:eth1
exclude_devices =

Also, can you send the output of the "ps -ef | grep neutron-sriov-nic-agent"
command?



From: 于洁 [mailto:16189...@qq.com]
Sent: Friday, August 21, 2015 12:01 PM
To: openstack-operators 
openstack-operat...@lists.openstack.org;
openstack-dev openstack-dev@lists.openstack.org
Subject: [openstack-dev] [Neutron][SR-IOV]How to assign VF to a VM?

Hi all,

I am trying to configure SR-IOV on OpenStack Kilo, referring to the information below.
http://www.qlogic.com/solutions/Documents/UsersGuide_OpenStack_SR-IOV.pdf
https://wiki.openstack.org/wiki/SR-IOV-Passthrough-For-Networking

Everything works well up to creating the port. But after creating a VM using
the previously created port, the VM went into the ERROR state. Below is the
port information:
neutron port-show 620187c5-b4ac-4aca-bdeb-96205503344d
+-----------------------+------------------------------------------------------------------------------+
| Field                 | Value                                                                        |
+-----------------------+------------------------------------------------------------------------------+
| admin_state_up        | True                                                                         |
| allowed_address_pairs |                                                                              |
| binding:host_id       | compute                                                                      |
| binding:profile       | {pci_slot: :09:11.5, physical_network: external, pci_vendor_info: 8086:1520} |
| binding:vif_details   | {}                                                                           |
| binding:vif_type      | binding_failed                                                               |
| binding:vnic_type     | direct                                                                       |
| device_id             | baab9ba5-80e8-45f7-b86a-8ac3ce8ba944                                         |
| device_owner          | compute:None                                                                 |
| extra_dhcp_opts       |                                                                              |
| fixed_ips             | {subnet_id: 86849224-a0a7-4059-a6b0-689a2b35c995, ip_address: 10.254.4.64}   |
| id                    | 620187c5-b4ac-4aca-bdeb-96205503344d                                         |
| mac_address           | fa:16:3e:8a:92:9b                                                            |
| name                  |                                                                              |
| network_id            | db078c2d-63f1-40c0-b6c3-b49de487362b                                         |
| security_groups       | 8e12a661-09b5-41ac-ade8-fddf6d997262                                         |
| status                | DOWN                                                                         |
| tenant_id

Re: [openstack-dev] [TripleO] Moving instack upstream

2015-08-21 Thread Ben Nemec
On 08/19/2015 12:22 PM, Dougal Matthews wrote:
 
 
 
 
 - Original Message -
 From: Dmitry Tantsur dtant...@redhat.com
 To: openstack-dev@lists.openstack.org
 Sent: Wednesday, 19 August, 2015 5:57:36 PM
 Subject: Re: [openstack-dev] [TripleO] Moving instack upstream

 On 08/19/2015 06:42 PM, Derek Higgins wrote:
 On 06/08/15 15:01, Dougal Matthews wrote:
 - Original Message -
 From: Dan Prince dpri...@redhat.com
 To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org
 Sent: Thursday, 6 August, 2015 1:12:42 PM
 Subject: Re: [openstack-dev] [TripleO] Moving instack upstream

 snip

 I would really like to see us rename python-rdomanager-oscplugin. I
 think any project having the name RDO in it probably doesn't belong
 under TripleO proper. Looking at the project there are some distro
 specific things... but those are fairly encapsulated (or could be made
 so fairly easily).

 I agree, it made sense as it was the entrypoint to RDO-Manager. However,
 it could easily be called the python-tripleo-oscplugin or similar. The
 changes would be really trivial, I can only think of one area that
 may be distro specific.

 I'm putting the commit together now to pull these repositories into
 upstream tripleo are we happy with the name python-tripleo-oscplugin ?

 Do we really need this oscplugin postfix? It may be clear for some of
 us, but I don't that our users know that OSC means OpenStackClient, and
 that oscplugin designates something that adds features to openstack
 command. Can't we just call it python-tripleo? or maybe even just
 tripleo?
 
 +1 to either.
 
 Having oscplugin in the name just revealed an implementation detail, there
 may be a point where for some reason everyone moves away from OSC.

FWIW, I would prefer tripleo-client.  That's more in line with what the
other projects do, and doesn't carry the potential for confusion that
just naming it tripleo would.

 




 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] [third-party] ProphetStor CI account

2015-08-21 Thread Asselin, Ramy
HI Rick,

Let's keep this on the list so that others can benefit or chip in ideas.
If you cannot subscribe to the list, ask the folks on freenode irc in 
#openstack-infra.

You should set in zuul.conf the [merger] zuul_url to your local zuul's url. [1]
E.g.
[merger]
zuul_url=http://your_ip_or_fqdn/p/

Please use export GIT_BASE=https://git.openstack.org. This will reduce the load 
on OpenStack's gerrit server and point you to a more stable GIT farm that can 
better handle the CI load. This will help your CI's success rate (by avoiding 
timeouts and intermittent errors) and reduce your ci test setup time.

Ramy

[1] http://docs.openstack.org/infra/zuul/zuul.html#merger





From: Rick Chen [mailto:rick.c...@prophetstor.com]
Sent: Thursday, August 20, 2015 8:53 PM
To: Asselin, Ramy ramy.asse...@hp.com
Subject: RE: [openstack-dev] [cinder] [third-party] ProphetStor CI account

Hi Ramy:
Can you provide detailed information on how to solve it?
Yes, I use zuul. My zuul.conf uses the default zuul_url value.
I use the build shell script to pull this patch. I added 
export GIT_BASE=https://review.openstack.org/p in the build shell script.
Is that wrong?

From: Asselin, Ramy [mailto:ramy.asse...@hp.com]
Sent: Thursday, August 20, 2015 11:12 PM
To: Rick Chen rick.c...@prophetstor.com
Subject: RE: [openstack-dev] [cinder] [third-party] ProphetStor CI account

Hi Rick,

Thank you for adding this: Triggered by: https://review.openstack.org/203895 
patchset 3
Where do you pull down this patch in the log files?
Normally it gets pulled down during the setup-workspace script, but here you're 
referencing openstack's servers which is not correct. [1]
Are you using zuul? If so, your zuul url should be there.
If not, there should be some other place in your scripts that pull down the 
patch.

Ramy

[1] 
http://download.prophetstor.com/prophetstor_ci/203895/3/check/prophetstor-dsvm-tempest-cinder-ci/5117/logs/devstack-gate-setup-workspace-new.txt.gz#_2015-08-20_12_03_24_813

From: Rick Chen [mailto:rick.c...@prophetstor.com]
Sent: Thursday, August 20, 2015 6:27 AM
To: Asselin, Ramy
Subject: RE: [openstack-dev] [cinder] [third-party] ProphetStor CI account


HI Ramy:

Sorry, I was not sure the mail was sent, because I did not receive my 
own mail back from the openstack-dev mailing list. So I am sending mail 
directly to your private mail account.

Thank you for your guidance. I followed your suggestion.

Please reference below link:


http://download.prophetstor.com/prophetstor_ci/203895/3/check/prophetstor-dsvm-tempest-cinder-ci/5117/console.html


my gerrit account:  prophetstor-ci

  gerrit account email:
prophetstor...@prophetstor.com



-Original Message-

From: Asselin, Ramy [mailto:ramy.asse...@hp.com]

Sent: Wednesday, August 19, 2015 10:10 PM

To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org; 
'Mike Perez' thin...@gmail.com

Subject: Re: [openstack-dev] [cinder] [third-party] ProphetStor CI account



Hi Rick,



Huge improvement. Log server is looking great! Thanks!



Next question is what (cinder) patch set is that job running?

It seems to be cinder master [1].

Is that intended? That's fine to validate general functionality, but eventually 
it needs to run the actual cinder patch set under test.



It's helpful to include a link to the patch that invoked the job at the top of 
the console.log file, e.g. [2].



Ramy



[1] 
http://download.prophetstor.com/prophetstor_ci/203895/3/check/prophetstor-dsvm-tempest-cinder-ci/5111/logs/devstack-gate-setup-workspace-new.txt.gz#_2015-08-19_18_27_38_953

[2] 
https://github.com/rasselin/os-ext-testing/blob/master/puppet/modules/os_ext_testing/templates/jenkins_job_builder/config/macros.yaml.erb#L93



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [requirements] modifying the 'is it packaged' test

2015-08-21 Thread Robert Collins
On 22 August 2015 at 09:08, Doug Hellmann d...@doughellmann.com wrote:
 Excerpts from Robert Collins's message of 2015-08-20 15:24:03 +1200:
 We currently have a test where we ask if things are packaged in
 distros. 
 http://git.openstack.org/cgit/openstack/requirements/tree/README.rst#n268

 I think we should modify that, in two ways.

 The explanation for the question ignores a fairly large audience of
 deployers who don't wait for distributions - so they too need to
 package things, but unlike distributions packaging stuff is itself
 incidental to their business, rather than being it. So I think we
 should consider their needs too.

 Secondly, all the cases of this I've seen so far we've essentially
  gone 'sure, fine'. I think that's because there's really nothing to
 them.

 So I think the test should actually be something like:
 Apply caution if it is not packaged AND packaging it is hard.
 Things that make packaging a Python package hard:
  - nonstandard build systems
  - C dependencies that aren't already packaged
  - unusual licences

 E.g. things which are easy, either because they can just use existing
 dependencies, or they're pure python, we shouldn't worry about.

 -Rob


 I think this interpretation is fine. It's more or less what I've been
 doing anyway.

 Is it safe to assume that if a package is available on PyPI and can be
 installed with pip, packaging it for a distro isn't technically
 difficult? (It might be difficult due to vendoring, licensing, or some
 other issue that would be harder to test for.)

Licensing we already assess separately; anything we are OK with
(Apache2, MIT, BSD), distros and operators should find easy.

Operators shouldn't be caring about vendoring: being on the CD train
is exactly what vendoring aims at [reliability on the assumption that
you'll upgrade all the things as fast as possible to ensure security].

Distributors probably care about vendoring, but IMO we should ignore
that. Projects that vendor do so with consideration for stability and
user experience - and even so is pretty rare in the Python space.
Distros are making a different choice - which is their right - but its
strictly additional work that has significant costs, and they've
already factored in that cost in aggregate in choosing to devendor
things.

-Rob

-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Rally][Meeting][Agenda]

2015-08-21 Thread Roman Vasilets
Hi, it's a friendly reminder that if you want to discuss some topics at
Rally meetings, please add your topic to our meeting agenda
https://wiki.openstack.org/wiki/Meetings/Rally#Agenda. Don't forget to
specify who will lead the topic, and add some information about it
(links, etc.). Thank you for your attention.

- Best regards, Vasilets Roman.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all][release] ACL for library-release team for release:managed projects

2015-08-21 Thread Davanum Srinivas
Folks,

In the governance repo a number of libraries are marked with
release:managed tag:
http://git.openstack.org/cgit/openstack/governance/tree/reference/projects.yaml

However, some of these libraries do not have appropriate ACL in the
project:config repo:
http://git.openstack.org/cgit/openstack-infra/project-config/tree/gerrit/acls/openstack

For example a quick scan shows that the following repos are marked
release:managed and do not have the right ACL:
python-kiteclient
python-designateclient
python-ironic-inspector-client
python-manilaclient
os-client-config
automaton
python-zaqarclient

So PTL's, either please fix the governance repo to remove release:managed
or add appropriate ACL in the project-config repo as documented in:
http://docs.openstack.org/infra/manual/creators.html#creation-of-tags

Thanks,
Dims

-- 
Davanum Srinivas :: https://twitter.com/dims
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon] Update on Angular Identity work

2015-08-21 Thread Tripp, Travis S

I definitely think the two panel tables in tree now (images and users) should 
be reduced down in the number of html partials.  On initial glance, it seemed 
pretty easy to just look at the files and know where things are.  However, in 
practice, it makes it error prone and harder to see everything when the header 
file is separated from the row file. I will put up a patch that at least 
collapses the HTML fragments down on images, so that can be seen.

I also think that more sections of the html should be reduced to additional 
fine grained directives, such as the table footer directive Cindy has nearly 
ready. And these finer grained sections could be combined into a template used 
by a *basic* full table directive as mentioned below in option 1.

Option 1’s primary strength actually is that it is more rigid and more contract 
like. The data and the html are mostly separated, meaning you change the data 
inputs and can centrally control the template. I think for very simple tables, 
option 1 is probably better. However, that also becomes a firewall of sorts for 
customizability and ease of making changes.

When it comes to composability, reusability, extensibility, customizability, 
and readability I don’t think option 1 handles all those aspects better.  To 
achieve simple results, I think it actually can later lead to a lot of 
complexity in JS that could be solved directly by modifying html.

When I looked at option 1 below, I thought about how the different 
representations of data that go into table cells would need to be handled. 
handled. I also did not see any mention of how the collapse / expand details 
drawer would be handled, and I have a lot of concerns about how that would 
really work or be simple to use or achieve, because the collapse/expand details 
may differ quite a bit in what we want to display.

We’ve already used the table drawer in a number of ways based on the data we 
want to show (horizontal property listings, vertical property listings, quota 
charts, nested tables, metadata display widgets) and it was easy to do for each 
case because we had direct control of the HTML and could directly pass the data 
needed for the various widgets to them. So, with option 1, we’d have to figure 
out how to make that really easy to do.

Below is something I just sketched out in the last few minutes to maybe help 
explain it.  I’ve also put in a reference to a number of the existing details 
drawers following that:

ctrl.columns = [
    {
        name: '1',

        displayName: gettext('column 1'),

        headerClasses: 'extra classes here to add',

        rowClasses: 'extra classes here to add',

        permissions: 'blah, blah',

        data: 'What goes here? How should this be formatted? Is it HTML? Is 
it a list?',

        dataType: 'I guess we need a field to specify the format of the data',

        template: 'Or maybe this column needs its own custom formatting, so we 
have to pass in the template here.',

        responsivePriority: '5',

        responsiveHandling: 'Need some strategy for what to do when columns 
appear / disappear. Do they go in the collapsible table drawer? Where should 
they be placed? What format should they be placed in? How should it interact 
with what else is in the detail drawer?'
    },

    {etc: etc}
]

details-drawer: 'How is the detail drawer handled? This can be everything 
from quota charts, to metadata display widgets, to raw properties, to an inner 
table (the security group detail drawer is actually an inner table of the 
security group rules). How do we have control over what is in here and what the 
data looks like in here? And should all the data be preloaded or fetched upon 
expansion?'

<div ng-controller="table.controller.js as ctrl">
  <horizon-table
     columns="ctrl.columns"
     batch-actions="ctrl.batchActionList"
     row-actions="ctrl.rowActionList"
     details="Ummm, how do I pass through all the various detail formats?">
  </horizon-table>
</div>

A few detail table drawer examples:

Flavors: Quota charts and Metadata Display

https://github.com/openstack/horizon/blob/master/openstack_dashboard/dashboards/project/static/dashboard/project/workflow/launch-instance/flavor/select-flavor-table.html#L101-L120

Security Groups: Nested security groups

https://github.com/openstack/horizon/blob/master/openstack_dashboard/dashboards/project/static/dashboard/project/workflow/launch-instance/security-groups/security-group-details.html

Keypairs: Raw output

https://github.com/openstack/horizon/blob/master/openstack_dashboard/dashboards/project/static/dashboard/project/workflow/launch-instance/keypair/keypair-details.html

NG Images: Responsive columns mixed with additional data

https://github.com/openstack/horizon/blob/master/openstack_dashboard/static/app/core/images/table/images-table-row-details.html



From: Thai Q Tran
Reply-To: OpenStack List
Date: Friday, August 21, 2015 at 1:38 PM
To: OpenStack List
Subject: Re: [openstack-dev] [Horizon] Update on 

Re: [openstack-dev] [Neutron] netaddr and abbreviated CIDR format

2015-08-21 Thread Sean M. Collins
Here's what the implicit_prefix arg for the IPNetwork constructor does.

Python 2.7.6 (default, Sep  9 2014, 15:04:36)
[GCC 4.2.1 Compatible Apple LLVM 6.0 (clang-600.0.39)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import netaddr
>>> a = netaddr.IPNetwork('1', implicit_prefix=True)
>>> a
IPNetwork('1.0.0.0/8')
>>> a = netaddr.IPNetwork('1', implicit_prefix=False)
>>> a
IPNetwork('1.0.0.0/32')
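For readers without netaddr on hand, the classful rule behind implicit_prefix can be shown in pure Python. This is an illustration of the inference rule only, not netaddr's actual implementation:

```python
# Pure-Python sketch of classful prefix inference for abbreviated CIDR
# strings like "1" or "192.168"; illustrative, not netaddr's code.

def implicit_prefix_len(first_octet):
    """Return the classful prefix length implied by an address's first octet."""
    if first_octet < 128:
        return 8    # class A
    elif first_octet < 192:
        return 16   # class B
    elif first_octet < 224:
        return 24   # class C
    return 32       # class D/E: no classful network prefix

def expand_abbreviated(cidr):
    """Expand an abbreviated form like '1' into ('1.0.0.0', 8)."""
    octets = [int(o) for o in cidr.split('.')]
    octets += [0] * (4 - len(octets))          # pad missing octets with zeros
    addr = '.'.join(str(o) for o in octets)
    return addr, implicit_prefix_len(octets[0])

print(expand_abbreviated('1'))        # ('1.0.0.0', 8)
print(expand_abbreviated('192.168'))  # ('192.168.0.0', 24)
```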


-- 
Sean M. Collins

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Resend] [api] Who owns API versioning and deprecation policy?

2015-08-21 Thread Geoff Arnold
Thanks. I’ll figure out which of my colleagues should get involved.

Geoff

 On Aug 21, 2015, at 2:10 PM, Everett Toews everett.to...@rackspace.com 
 wrote:
 
 On Aug 21, 2015, at 3:13 PM, Geoff Arnold ge...@geoffarnold.com wrote:
 
 After reading the following pages, it’s unclear what the current API 
 deprecation policy is and who owns it. (The first spec implies that a change 
 took place in May 2015, but is silent on what and why.) Any hints? An 
 authoritative doc would be useful, something other than an IRC log or 
 mailing list reference.
 
 Geoff
 
 http://specs.openstack.org/openstack/api-wg/guidelines/evaluating_api_changes.html
 
 https://wiki.openstack.org/wiki/API_Working_Group
 
 https://wiki.openstack.org/wiki/Application_Ecosystem_Working_Group
 
 The API Working Group does. 
 
 Guidelines for microversioning [1] and when to bump a microversion [2] are 
 currently in review. Naturally your feedback is welcome.
 
 We have yet to provide guidance on deprecation. If you’d like to create a 
 guideline on deprecation, here’s How to Contribute [3]. If you want to throw 
 some ideas around we’re in #openstack-api or feel free to drop by one of our 
 meetings [4].
 
 Everett
 
 [1] https://review.openstack.org/#/c/187112/
 [2] https://review.openstack.org/#/c/187896/
 [3] https://wiki.openstack.org/wiki/API_Working_Group#How_to_Contribute
 [4] https://wiki.openstack.org/wiki/Meetings/API-WG
 
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [requirements] modifying the 'is it packaged' test

2015-08-21 Thread Matthew Thode
On 08/21/2015 04:08 PM, Doug Hellmann wrote:
 Excerpts from Robert Collins's message of 2015-08-20 15:24:03 +1200:
 We currently have a test where we ask if things are packaged in
 distros. 
 http://git.openstack.org/cgit/openstack/requirements/tree/README.rst#n268

 I think we should modify that, in two ways.

 The explanation for the question ignores a fairly large audience of
 deployers who don't wait for distributions - so they too need to
 package things, but unlike distributions packaging stuff is itself
 incidental to their business, rather than being it. So I think we
 should consider their needs too.

 Secondly, all the cases of this I've seen so far we've essentially
  gone 'sure, fine'. I think that's because there's really nothing to
 them.

 So I think the test should actually be something like:
 Apply caution if it is not packaged AND packaging it is hard.
 Things that make packaging a Python package hard:
  - nonstandard build systems
  - C dependencies that aren't already packaged
  - unusual licences

 E.g. things which are easy, either because they can just use existing
 dependencies, or they're pure python, we shouldn't worry about.

 -Rob

 
 I think this interpretation is fine. It's more or less what I've been
 doing anyway.
 
 Is it safe to assume that if a package is available on PyPI and can be
 installed with pip, packaging it for a distro isn't technically
 difficult? (It might be difficult due to vendoring, licensing, or some
 other issue that would be harder to test for.)
 
 Doug
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
Packaging for us is fairly easy, but it is annoying to have to add 5-6
deps each release (which means we are adding cruft over time).

-- 
Matthew Thode (prometheanfire)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [requirements] modifying the 'is it packaged' test

2015-08-21 Thread Robert Collins
On 22 August 2015 at 10:57, Matthew Thode prometheanf...@gentoo.org wrote:
 Packaging for us is fairly easy, but it is annoying to have to add 5-6
 deps each release, (which means we are adding cruft over time).

We're adding functionality by bringing in existing implementations.
Surely that's better than reinventing *everything*?

-Rob

-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stable] [infra] How to auto-generate stable release notes

2015-08-21 Thread Dave Walker
On 21 August 2015 at 11:38, Thierry Carrez thie...@openstack.org wrote:
SNIP
 Since then, replying to another concern about common downstream
 reference points, we moved to tagging everything, then replying to
 Clark's pollution remark, to tag from time to time. That doesn't
 remove the need to *conveniently* ship the best release notes we can
 with every commit. Including them in every code tarball (and relying on
 well-known python sdist commands to generate them for those consuming
 the git tree directly) sounded like the most accessible way to do it,
 which the related thread on the Ops ML confirmed. But then I'm (and
 maybe they are) still open to alternative suggestions...

This is probably a good entry point for my ACTION item from the
cross-project meeting:

I disagree that "time to time" tagging makes sense in what we are
trying to achieve.  I believe we are in agreement that we want to move
away from co-ordinated releases and treat each commit as an accessible
release.  Therefore, tagging each project at arbitrary times
introduces snowflake releases, rather than the importance being on
each commit being a release.

I agree that this would take away the 'co-ordinated' part of the
release, but it still requires release management of each project (unless
the time-to-time tagging is automated), which we are not sure that each
project will commit to.

If we are treating each commit to be a release, maybe we should just
bite the bullet and enlarge the ref tag length.  I've not done a
comparison of what this would look like, but I believe it to be rare
that people look at the list anyway.  Throwing in a "| grep -v
^$RELEASE*", and it becomes as usable as before.  We could also
expunge the tags after the release is no longer supported by upstream.

In my mind, we are then truly treating each commit as a release AND we
benefit from not needing hacky tooling to fake this.
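The filtering idea above can be sketched as a small function; the tag names below are invented for illustration, not real OpenStack tags:

```python
# Sketch of hiding per-commit tags of a release series, the way
# "git tag | grep -v ^$RELEASE" would; tag names are made up.

def visible_tags(tags, hidden_series):
    """Drop tags whose name starts with any prefix in hidden_series."""
    return [t for t in tags
            if not any(t.startswith(p) for p in hidden_series)]

tags = ['2015.1.0', '2015.1.0-1g4f2c9aa', '2015.1.0-2gdeadbee', '2015.2.0']
print(visible_tags(tags, ['2015.1.0-']))  # ['2015.1.0', '2015.2.0']
```

Expired series could be expunged the same way: delete every tag matching the series prefix once upstream support ends.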

--
Kind Regards,
Dave Walker

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][dvr] DVR L2 agent is removing the br-int OVS flows

2015-08-21 Thread Anna Kamyshnikova
Hi, Artur!

Thanks for bringing this up! I missed that. I'll push a change for that
shortly.

On Fri, Aug 21, 2015 at 1:35 PM, Korzeniewski, Artur 
artur.korzeniew...@intel.com wrote:

 Hi all,

 After merging the “Graceful ovs-agent restart”[1] (great work BTW!), I’m
seeing a place in the DVR L2 agent code where flows on br-int are removed in 
the old style:



 File
 /neutron/plugins/ml2/drivers/openvswitch/agent/ovs_dvr_neutron_agent.py

  200 def setup_dvr_flows_on_integ_br(self):
  201     '''Setup up initial dvr flows into br-int'''
  202     if not self.in_distributed_mode():
  203         return
  204
  205     LOG.info(_LI("L2 Agent operating in DVR Mode with MAC %s"),
  206              self.dvr_mac_address)
  207     # Remove existing flows in integration bridge
  208     self.int_br.delete_flows()



  This is kind of a bummer when seeing the effort to preserve the flows in [1].

 This should not affect VM network access, since the br-tun is configured
 properly and br-int is in learning mode.



 Should this be fixed in Liberty cycle?



 This is something similar to submitted bug:
 https://bugs.launchpad.net/neutron/+bug/1436156



 [1] https://bugs.launchpad.net/neutron/+bug/1383674



 Regards,

 Artur Korzeniewski

 

 Intel Technology Poland sp. z o.o.

 KRS 101882

 ul. Slowackiego 173, 80-298 Gdansk



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Regards,
Ann Kamyshnikova
Mirantis, Inc
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [requirements] modifying the 'is it packaged' test

2015-08-21 Thread Dave Walker
On 22 August 2015 at 00:04, Matthew Thode prometheanf...@gentoo.org wrote:
 On 08/21/2015 05:59 PM, Robert Collins wrote:
 On 22 August 2015 at 10:57, Matthew Thode prometheanf...@gentoo.org wrote:
 Packaging for us is fairly easy, but it is annoying to have to add 5-6
 deps each release, (which means we are adding cruft over time).

 We're adding functionality by bringing in existing implementations.
 Surely thats better than reinventing *everything* ?

 -Rob

 totally, more of a minor annoyance :P

A strong reason that requirements was created was to give distros a
voice and avoid incompatible versions, which was more of a problem for
distros than it was for each different service at that point.

I'm not sure that a requirement has ever been not included because it
*wasn't* packaged, but perhaps because it *couldn't* be packaged.  Is
there an example that has caused you to raise this?

The is-it-packaged test was added at a time when large changes were
happening in OpenStack right up to the (release) wire, causing scary
changes for distros that were tracking the release.  Now that feature
development has become more mature, with the scary stuff being front
loaded, I'm not quite sure this is such a problem.

The release schedule used to document a DepFreeze[0] to avoid nasty
surprises for distros, which used to be at the same point of
FeatureFreeze[1].  This reference seems to have been removed from the
last few cycles, but I would suggest that it could be re-added.

[0] https://wiki.openstack.org/wiki/DepFreeze
[1] https://wiki.openstack.org/wiki/FeatureFreeze

--
Kind Regards,
Dave Walker

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [cinder] Microsoft CI Still Disabled

2015-08-21 Thread Mike Perez
The Microsoft CI has been disabled since July 22nd [1].

Last I heard from the Cinder midcycle sprint, this CI was still not
ready and it hasn't been for 30 days now.

Where are we with things, and why has communication been so poor with
the Cloud Base solutions team?

[1] - 
http://lists.openstack.org/pipermail/third-party-announce/2015-July/000249.html

--
Mike Perez

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-infra][third-party][CI][nodepool]Using Nodepool for creating slaves.

2015-08-21 Thread Asselin, Ramy
Hi Abhi,

That’s correct. The ‘elements’ are used to setup the Jenkins slave.

Regarding how the image is built: I think the instructions are clear, but then 
I wrote them. Let me know which part is not clear.

Step 3 in the readme link your reference is the command to build the image.
nodepool image-build <image-name>
The <image-name> is what you have in the nodepool.yaml file.

You may run into issues because the elements defined by openstack-infra  may 
not work in your environment. In that case, you’ll have to override them.

Step 4 starts nodepool, which will use the image built in step 3 and upload it 
to your providers.
You can manually upload it using:
nodepool image-upload all <image-name>
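The image name referenced by these commands comes from nodepool.yaml; a minimal illustrative fragment, where the provider name, image name, and setup script are invented placeholders for your own environment:

```yaml
# Illustrative only: keys follow the nodepool provider/image layout,
# but every name and value here is a placeholder.
providers:
  - name: my-provider            # your cloud provider entry
    max-servers: 2
    images:
      - name: devstack-trusty    # the <image-name> used in image-build/upload
        base-image: 'Ubuntu 14.04'
        min-ram: 8192
        setup: prepare_node_devstack.sh   # script/elements that prep the slave
```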

Ramy

From: Abhishek Shrivastava [mailto:abhis...@cloudbyte.com]
Sent: Monday, August 17, 2015 10:46 PM
To: openstack-in...@lists.openstack.org; OpenStack Development Mailing List 
(not for usage questions) openstack-dev@lists.openstack.org
Cc: Asselin, Ramy ramy.asse...@hp.com; Abhishek Shrivastava 
abhis...@cloudbyte.com
Subject: Re: [openstack-infra][third-paty][CI][nodepool]Using Nodepool for 
creating slaves.

Also adding the following:

  *   
https://github.com/openstack-infra/project-config/tree/master/nodepool/elements
Does this mean that we don't have to take care of creating slaves (i.e., 
installing slaves using scripts as we have done in master), that it will be 
done automatically, and also that we don't have to configure slaves in Jenkins?

A bit confusing for me so if anyone can explain it to me then it will be very 
helpful.

On Tue, Aug 18, 2015 at 11:11 AM, Abhishek Shrivastava 
abhis...@cloudbyte.commailto:abhis...@cloudbyte.com wrote:
Hi Folks,

I was going through Ramy's guide for setting up CI, there I found out the 
following:

  *   
https://github.com/rasselin/os-ext-testing#setting-up-nodepool-jenkins-slaves
But I don't get the fact on how to create the image, also what major settings 
need to be done to make the config perfect for use. Can anyone help me with 
that?

--
Thanks  Regards,
Abhishek
Cloudbyte Inc. (http://www.cloudbyte.com)



--
Thanks & Regards,
Abhishek
Cloudbyte Inc.http://www.cloudbyte.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][cinder] Extending attached disks

2015-08-21 Thread Taylor . Bertie
For RBDs it IS as simple as making calls to virsh after an attached volume is 
extended. I've done it half a dozen times with no intermediate steps and it 
works. I'd love to test it more robustly, obviously, but unfortunately I've got 
bigger fish to fry with BAU.

iSCSI might involve more work, I acknowledge that, but there is nothing wrong 
with putting the framework in place now and throwing an 'unsupported volume 
type' error message if we haven't worked out the best method for doing this for 
a particular type.

The way I see it, the only ones that are going to cause us problems are ones 
that require the host to suspend the disk prior to operating on it. In other 
words if the notification to the host can't be done atomically, that could 
definitely cause issues.

However, all the examples I have seen implemented in OpenStack volumes thus far 
(iSCSI, RBD) are either atomic or require no notification (in the case of 
RBD). Even multipath is atomic (granted, it's multiple chained atomic 
operations, but still, they won't be left in an irrecoverable failure state).

Yes, the page you linked does warn about the issue when there is no path to the 
device; however, I think that if you're trying to resize a volume the compute 
node can't connect to, you've got bigger problems (that is to say, throwing an 
error here is perfectly reasonable).

Regards,

Taylor Bertie
Enterprise Support Infrastructure Engineer

Mobile +64 27 952 3949
Phone +64 4 462 5030
Email taylor.ber...@solnet.co.nz

Solnet Solutions Limited
Level 12, Solnet House
70 The Terrace, Wellington 6011
PO Box 397, Wellington 6140

www.solnet.co.nz 


-Walter A. Boring IV walter.bor...@hp.com wrote: -
To: openstack-dev@lists.openstack.org
From: Walter A. Boring IV walter.bor...@hp.com
Date: 2015-08-22 7:13
Subject: Re: [openstack-dev] [nova][cinder] Extending attached disks

This isn't as simple as making calls to virsh after an attached volume 
is extended on the cinder backend, especially when multipath is involved.
You need the host system to understand that the volume has changed size 
first, or virsh will really never see it.

For iSCSI/FC volumes you need to issue a rescan on the bus (iSCSI 
session, FC fabric),  and then when multipath is involved, it gets quite 
a bit more complex.

This leads to one of the sticking points with doing this at all: when cinder 
extends the volume, it needs to tell nova that it has happened, and nova (or 
something on the compute node) will have to issue the correct commands in 
sequence for it all to work.

You'll also have to consider multi-attached volumes as well, which adds 
yet another wrinkle.

A good quick source of some of the commands and procedures that are 
needed you can see here:
https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/5/html/Online_Storage_Reconfiguration_Guide/online-logical-units.html
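As a rough illustration of the kind of sequence that document walks through for an iSCSI volume behind multipath (device names and the multipath map name below are made up — this is a sketch, not a recipe):

```shell
# 1. Rescan so the kernel notices the LUN's new size (per-path, for iSCSI):
echo 1 > /sys/block/sdX/device/rescan        # repeat for each path device
# or rescan the whole iSCSI session instead:
iscsiadm -m node -R

# 2. With multipath, tell multipathd to resize the dm map as well:
multipathd -k"resize map mpathX"

# 3. Only after the host sees the new size can libvirt/qemu be told about it,
#    e.g. via 'virsh blockresize <domain> <disk> <size>'.
```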


You can see that volumes with multipath require a lot of hand-holding to be 
done correctly.  It's non-trivial.  I see this as being very error prone, and 
any failure in the multipath process could lead to big problems :(

Walt
 Hi everyone,

 Apologises for the duplicate send, looks like my mail client doesn't create 
 very clean HTML messages. Here is the message in plain-text. I'll make sure 
 to send to the list in plain-text from now on.

 In my current pre-production deployment we were looking for a method to live 
 extend attached volumes to an instance. This was one of the requirements for 
 deployment. I've worked with libvirt hypervisors before so it didn't take 
 long to find a workable solution. However I'm not sure how transferable this 
 will be across deployment models. Our deployment model is using libvirt for 
 nova and ceph for backend storage. This means obviously libvirt is using rbd 
 to connect to volumes.

 Currently the method I use is:

 - Force cinder to run an extend operation.
 - Tell Libvirt that the attached disk has been extended.

 It would be worth discussing if this can be ported to upstream such that the 
 API can handle the leg work, rather than this current manual method.

 Detailed instructions.
 You will need: volume-id of volume you want to resize, hypervisor_hostname 
 and instance_name from instance volume is attached to.

 Example: extending volume f9fa66ab-b29a-40f6-b4f4-e9c64a155738 attached to 
 instance-0012 on node-6 to 100GB

 $ cinder reset-state --state available f9fa66ab-b29a-40f6-b4f4-e9c64a155738
 $ cinder extend f9fa66ab-b29a-40f6-b4f4-e9c64a155738 100
 $ cinder reset-state --state in-use f9fa66ab-b29a-40f6-b4f4-e9c64a155738

 $ssh node-6
 node-6$ virsh qemu-monitor-command instance-0012 --hmp info block | 
 grep f9fa66ab-b29a-40f6-b4f4-e9c64a155738
 drive-virtio-disk1: removable=0 io-status=ok 
 file=rbd:volumes-slow/volume-f9fa66ab-b29a-40f6-b4f4-e9c64a155738:id=cinder:key=keyhere==:auth_supported=cephx\\;none:mon_host=10.1.226.64\\:6789\\;10.1.226.65\\:6789\\;10.1.226.66\\:6789
  ro=0 drv=raw 
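(The quoted message appears truncated here.) The "tell libvirt" step it refers to is presumably done with virsh blockresize — shown below as a hedged sketch; the domain and size come from the example above, while the disk target ('vdb') is an assumption:

```shell
# Tell the running guest that the attached disk is now 100G.
# Syntax: virsh blockresize <domain> <disk-target-or-source-path> <size>
node-6$ virsh blockresize instance-0012 vdb 100G
```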

Re: [openstack-dev] [requirements] modifying the 'is it packaged' test

2015-08-21 Thread Matthew Thode
On 08/21/2015 05:59 PM, Robert Collins wrote:
 On 22 August 2015 at 10:57, Matthew Thode prometheanf...@gentoo.org wrote:
 Packaging for us is fairly easy, but it is annoying to have to add 5-6
 deps each release (which means we are adding cruft over time).
 
 We're adding functionality by bringing in existing implementations.
 Surely thats better than reinventing *everything* ?
 
 -Rob
 
totally, more of a minor annoyance :P

-- 
Matthew Thode (prometheanfire)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Solum] Implementing app and workflow resources

2015-08-21 Thread Devdatta Kulkarni
Hey Solum team,

Recently we accepted a spec to add new API resources to Solum [1].
Thanks to Ed Cranford, the 'app' resource is already implemented.
I am working on implementing the workflow resource.
I have created a spec [2] outlining the approach taken for adding the
workflow resource and connecting it to the Solum engine (worker and deployer).
If interested, please take a look at the spec and the patches listed there.
I have also listed steps to try out the new resources in the Vagrant
environment.

Feedback is welcome.

Regards,
Devdatta

[1] 
https://github.com/stackforge/solum-specs/blob/master/specs/liberty/app-resource.rst
[2] https://review.openstack.org/215832
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] Can`t find Project ID

2015-08-21 Thread Asselin, Ramy
Hi Steve, Xiexs,

Actually this issue hit us on Tuesday afternoon.
The root cause is still not known, but the workaround was an update to nodepool 
to allow the project-name setting as stated below.
I updated the sample nodepool.yaml.erb as well. See this post I sent for the 
full details of the issue [1]

It would still be nice to know what the root cause is.

Thanks,
Ramy
[1] http://lists.openstack.org/pipermail/openstack-dev/2015-August/072556.html

P.S. Xiexs, in the future, it’s helpful to include the [third-party] and/or 
[infra] tags as well to get better attention since this issue impacts nodepool 
users, which is used by both of these teams.

From: Xie, Xianshan [mailto:xi...@cn.fujitsu.com]
Sent: Thursday, August 20, 2015 11:30 PM
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [keystone] Can`t find Project ID

Hi Steve,
   Thanks for your reply.

It`s a stupid mistake.
I have made an invalid configuration for the provider`s project-id in the 
nodepool.yaml as follows:
{ project-id: 'admin' }
But the correct setting should be:
{ project-name: 'admin' }  or  { project-id: '<%= id of project admin %>' }

To be honest, I was misled by the  sample file 
os-ext-testing-data/etc/nodepool/nodepool.yaml.erb,
in which the project-id is also configured with admin.
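For reference, the two working forms of the provider stanza would look roughly like this (credentials are illustrative; the project-id UUID is the one shown by `openstack project list` later in this thread):

```yaml
providers:
  - name: my-cloud
    auth-url: 'http://controller:5000/v2.0'
    username: 'jenkins'
    password: 'secret'
    # Use the project *name* here...
    project-name: 'admin'
    # ...or, equivalently, the project *ID* (the UUID, never the name):
    # project-id: '0a765fdfa79a438aae56202bdd5824e2'
```

Putting a name into project-id is what produces the "Project ID not found: admin" error from shade.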


Xiexs

From: Steve Martinelli [mailto:steve...@ca.ibm.com]
Sent: Friday, August 21, 2015 1:48 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [keystone] Can`t find Project ID


For the rest of the mailing list, how was it resolved? :)

Thanks,

Steve Martinelli
OpenStack Keystone Core


From: Xie, Xianshan xi...@cn.fujitsu.commailto:xi...@cn.fujitsu.com
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.orgmailto:openstack-dev@lists.openstack.org
Date: 2015/08/21 01:43 AM
Subject: Re: [openstack-dev] [keystone] Can`t find Project ID





Hi all,
This issue has already been resolved.
Thanks.


Xiexs

From: Xie, Xianshan [mailto:xi...@cn.fujitsu.com]
Sent: Friday, August 21, 2015 9:51 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [keystone] Can`t find Project ID

Hi, all,

I got following error message while running nodepoold:


2015-08-21 20:18:00,336 ERROR nodepool.NodePool: Exception cleaning up leaked 
nodes
Traceback (most recent call last):
File /home/nodepool/nodepool/nodepool.py, line 2399, in periodicCleanup
self.cleanupLeakedInstances()
File /home/nodepool/nodepool/nodepool.py, line 2410, in cleanupLeakedInstances
servers = manager.listServers()
File /home/nodepool/nodepool/provider_manager.py, line 570, in listServers
self._servers = self.submitTask(ListServersTask())
File /home/nodepool/nodepool/task_manager.py, line 119, in submitTask
return task.wait()
File /home/nodepool/nodepool/task_manager.py, line 57, in run
self.done(self.main(client))
File /home/nodepool/nodepool/provider_manager.py, line 136, in main
servers = client.nova_client.servers.list()
File /usr/local/lib/python2.7/dist-packages/shade/__init__.py, line 318, in 
nova_client
self.get_session_endpoint('compute')
File /usr/local/lib/python2.7/dist-packages/shade/__init__.py, line 811, in 
get_session_endpoint
Error getting %s endpoint: %s % (service_key, str(e)))
OpenStackCloudException: Error getting compute endpoint: Project ID not found: 
admin (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: 
req-fb986bff-3cad-48e1-9da9-915ac9ef5927)
---

And in my case, the output info with cli listed as follows:
$ openstack service list
| ID | Name | Type |
+--+--++
| 213a7ba8f0564523a3d2769f77621fde | nova | compute |

$ openstack project list
+--++
| ID | Name |
+--++
| 0a765fdfa79a438aae56202bdd5824e2 | admin |

$ keystone endpoint-list
+--+---+-+-+-+--+
| id | region | publicurl | internalurl | adminurl | service_id |
+--+---+-+-+-+--+
| d89b009e81f04a17a26fd07ffbf83efb | regionOne | 
http://controller:8774/v2/%(tenant_id)s | 
http://controller:8774/v2/%(tenant_id)s | 

Re: [openstack-dev] [cinder] I have a question about openstack cinder zonemanager driver

2015-08-21 Thread hao wang
I feel supporting multiple zonemanager drivers in one cinder.conf is of
little value.

How can you decide when to use driver-1 or driver-2?

2015-08-14 10:20 GMT+08:00 Chenying (A) ying.c...@huawei.com:
Hi,



 Jay S. Bryant, Daniel Wilson: thank you for your reply. Now I know that in
 this case I need two cinder nodes, one for Brocade fabric and one for Cisco
 fabric.



 Do you consider it necessary for the cinder zonemanager to support
 multiple drivers from different vendors in one cinder.conf?







 Thanks,



 ChenYing





 From: Jay S. Bryant [mailto:jsbry...@electronicjungle.net]
 Sent: August 14, 2015 0:36
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [cinder] I have a question about openstack cinder
 zonemanager driver.



 Danny is correct.  You cannot have two different Zone Manager drivers
 configured for one volume process.

 Jay
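For context, the single-driver shape of that config section looks roughly like this (option names and the Brocade driver path are from memory of the cinder zone manager docs of this era — treat this as a sketch, not authoritative):

```ini
[fc-zone-manager]
# Only one zone driver can be configured per cinder-volume process:
zone_driver = cinder.zonemanager.drivers.brocade.brcd_fc_zone_driver.BrcdFCZoneDriver
fc_fabric_names = fabricA
```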

 On 08/13/2015 11:00 AM, Daniel Wilson wrote:

 I am fairly certain you cannot currently use two different FC switch zone
 drivers in one cinder.conf.  In this case it looks like you would need two
 cinder nodes, one for Brocade fabric and one for Cisco fabric.



 Thanks,

 Danny



 On Thu, Aug 13, 2015 at 2:43 AM, Chenying (A) ying.c...@huawei.com wrote:

 Hi, guys



  I am using a Brocade FC switch in my OpenStack environment. I have a
 question about the OpenStack cinder zonemanager driver.



 I find that [fc-zone-manager] can only configure one zone driver. If I want
 to use two FC switches from different vendors at the same time.



 One is a Brocade FC switch, the other is a Cisco FC switch. Is there a
 method or solution to configure two FC switch zone drivers in one cinder.conf?



 I want them both to support Zone Manager.










 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev






 __

 OpenStack Development Mailing List (not for usage questions)

 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Best Wishes For You!

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [CI] Cannot get compute endpoint when running nodepool.

2015-08-21 Thread Asselin, Ramy
Hi Tang,

Sorry I did not see this post. I started a thread here to explain the issue: [1]

Ramy

[1] http://lists.openstack.org/pipermail/openstack-dev/2015-August/072556.html

From: Tang Chen [mailto:tangc...@cn.fujitsu.com]
Sent: Thursday, August 20, 2015 7:13 PM
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org; Asselin, Ramy ramy.asse...@hp.com; 
Abhishek Shrivastava abhis...@cloudbyte.com; Xie, Xianshan/谢 贤山 
xi...@cn.fujitsu.com
Subject: [CI] Cannot get compute endpoint when running nodepool.

Hi, all,

I got the following error message while running nodepoold with nodepoold -d 
$DAEMON_ARGS


2015-08-21 20:18:00,336 ERROR nodepool.NodePool: Exception cleaning up leaked 
nodes
Traceback (most recent call last):
  File /home/nodepool/nodepool/nodepool.py, line 2399, in periodicCleanup
self.cleanupLeakedInstances()
  File /home/nodepool/nodepool/nodepool.py, line 2410, in 
cleanupLeakedInstances
servers = manager.listServers()
  File /home/nodepool/nodepool/provider_manager.py, line 570, in listServers
self._servers = self.submitTask(ListServersTask())
  File /home/nodepool/nodepool/task_manager.py, line 119, in submitTask
return task.wait()
  File /home/nodepool/nodepool/task_manager.py, line 57, in run
self.done(self.main(client))
  File /home/nodepool/nodepool/provider_manager.py, line 136, in main
servers = client.nova_client.servers.list()
  File /usr/local/lib/python2.7/dist-packages/shade/__init__.py, line 318, in 
nova_client
self.get_session_endpoint('compute')
  File /usr/local/lib/python2.7/dist-packages/shade/__init__.py, line 811, in 
get_session_endpoint
Error getting %s endpoint: %s % (service_key, str(e)))
OpenStackCloudException: Error getting compute endpoint: Project ID not found: 
admin (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: 
req-fb986bff-3cad-48e1-9da9-915ac9ef5927)
---

And in my case, the output info with cli listed as follows:
$ openstack service list
| ID   | Name | Type   |
+--+--++
| 213a7ba8f0564523a3d2769f77621fde | nova | compute|

$ openstack project list
+--++
| ID   | Name   |
+--++
| 0a765fdfa79a438aae56202bdd5824e2 | admin  |

$ keystone endpoint-list
+--+---+-+-+-+--+
|id|   region  |publicurl   
 |   internalurl   | adminurl   
 |service_id|
+--+---+-+-+-+--+
| d89b009e81f04a17a26fd07ffbf83efb | regionOne | 
http://controller:8774/v2/%(tenant_id)s | 
http://controller:8774/v2/%(tenant_id)s | 
http://controller:8774/v2/%(tenant_id)s | 213a7ba8f0564523a3d2769f77621fde |
+--+---+-+-+-+--+


Have you ever seen this error? And could you give me any advice to solve it?
Thanks in advance.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] I have a question about openstack cinder zonemanager driver

2015-08-21 Thread Angela Smith
The decision should be made based on what vendor switch you have in your SAN.

-Original Message-
From: hao wang [mailto:sxmatch1...@gmail.com] 
Sent: Friday, August 21, 2015 4:58 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [cinder] I have a question about openstack cinder 
zonemanager driver

I feel supporting multi drivers of zonemanager in one cinder.conf is not much 
value.

How you can decide when to use driver-1 or driver-2?

2015-08-14 10:20 GMT+08:00 Chenying (A) ying.c...@huawei.com:
Hi,



 Jay S. Bryant, Daniel Wilson .Thank you for your reply. Now I know 
 that in this case I need two cinder nodes, one for Brocade fabric and 
 one for Cisco fabric.



 Do you consider that cinder zonemanager is  necessary to support 
 multi-drivers from different vendors in one cinder.conf ?







 Thanks,



 ChenYing





 From: Jay S. Bryant [mailto:jsbry...@electronicjungle.net]
 Sent: August 14, 2015 0:36
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [cinder] I have a question about openstack 
 cinder zonemanager driver.



 Danny is correct.  You cannot have two different Zone Manager drivers 
 configured for one volume process.

 Jay

 On 08/13/2015 11:00 AM, Daniel Wilson wrote:

 I am fairly certain you cannot currently use two different FC switch 
 zone drivers in one cinder.conf.  In this case it looks like you would 
 need two cinder nodes, one for Brocade fabric and one for Cisco fabric.



 Thanks,

 Danny



 On Thu, Aug 13, 2015 at 2:43 AM, Chenying (A) ying.c...@huawei.com wrote:

 Hi, guys



  I am using Brocade FC switch in my OpenStack environment. I have 
 a question about OpenStack cinder zonemanger driver.



 I find that [fc-zone-manager] can only configure one zone driver. If I 
 want to use two FC switches from different vendors at the same time.



 One is Brocade FC switch, the other one is Cisco FC switch. Is there a 
 method or solution configure two FC switch zone driver in one cinder.conf ?



 I want them both to support Zone Manager.










 __
  OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: 
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev






 __
 

 OpenStack Development Mailing List (not for usage questions)

 Unsubscribe: 
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 __
  OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: 
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




--
Best Wishes For You!

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] Can`t find Project ID

2015-08-21 Thread Xie, Xianshan
Hi Steve,
   Thanks for your reply.

It`s a stupid mistake.
I have made an invalid configuration for the provider`s project-id in the 
nodepool.yaml as follows:
{ project-id: 'admin' }
But the correct setting should be:
{ project-name: 'admin' }  or  { project-id: '<%= id of project admin %>' }

To be honest, I was misled by the  sample file 
os-ext-testing-data/etc/nodepool/nodepool.yaml.erb,
in which the project-id is also configured with admin.


Xiexs

From: Steve Martinelli [mailto:steve...@ca.ibm.com]
Sent: Friday, August 21, 2015 1:48 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [keystone] Can`t find Project ID


For the rest of the mailing list, how was it resolved? :)

Thanks,

Steve Martinelli
OpenStack Keystone Core


From: Xie, Xianshan xi...@cn.fujitsu.commailto:xi...@cn.fujitsu.com
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.orgmailto:openstack-dev@lists.openstack.org
Date: 2015/08/21 01:43 AM
Subject: Re: [openstack-dev] [keystone] Can`t find Project ID





Hi all,
This issue has already been resolved.
Thanks.


Xiexs

From: Xie, Xianshan [mailto:xi...@cn.fujitsu.com]
Sent: Friday, August 21, 2015 9:51 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [keystone] Can`t find Project ID

Hi, all,

I got following error message while running nodepoold:


2015-08-21 20:18:00,336 ERROR nodepool.NodePool: Exception cleaning up leaked 
nodes
Traceback (most recent call last):
File /home/nodepool/nodepool/nodepool.py, line 2399, in periodicCleanup
self.cleanupLeakedInstances()
File /home/nodepool/nodepool/nodepool.py, line 2410, in cleanupLeakedInstances
servers = manager.listServers()
File /home/nodepool/nodepool/provider_manager.py, line 570, in listServers
self._servers = self.submitTask(ListServersTask())
File /home/nodepool/nodepool/task_manager.py, line 119, in submitTask
return task.wait()
File /home/nodepool/nodepool/task_manager.py, line 57, in run
self.done(self.main(client))
File /home/nodepool/nodepool/provider_manager.py, line 136, in main
servers = client.nova_client.servers.list()
File /usr/local/lib/python2.7/dist-packages/shade/__init__.py, line 318, in 
nova_client
self.get_session_endpoint('compute')
File /usr/local/lib/python2.7/dist-packages/shade/__init__.py, line 811, in 
get_session_endpoint
Error getting %s endpoint: %s % (service_key, str(e)))
OpenStackCloudException: Error getting compute endpoint: Project ID not found: 
admin (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: 
req-fb986bff-3cad-48e1-9da9-915ac9ef5927)
---

And in my case, the output info with cli listed as follows:
$ openstack service list
| ID | Name | Type |
+--+--++
| 213a7ba8f0564523a3d2769f77621fde | nova | compute |

$ openstack project list
+--++
| ID | Name |
+--++
| 0a765fdfa79a438aae56202bdd5824e2 | admin |

$ keystone endpoint-list
+--+---+-+-+-+--+
| id | region | publicurl | internalurl | adminurl | service_id |
+--+---+-+-+-+--+
| d89b009e81f04a17a26fd07ffbf83efb | regionOne | 
http://controller:8774/v2/%(tenant_id)s | 
http://controller:8774/v2/%(tenant_id)s | 
http://controller:8774/v2/%(tenant_id)s | 213a7ba8f0564523a3d2769f77621fde |
+--+---+-+-+-+--+


Have you ever seen this error? And could you give me any advice to solve it?
Thanks in advance.


Xiexs__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribemailto:openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)

Re: [openstack-dev] [Neutron] Targeting Logging API for SG and FW rules feature to L-3 milestone

2015-08-21 Thread hoan...@vn.fujitsu.com
Good day,

 The specification and source codes will definitely reviewing/filing in next 
 week.
 #link
 http://eavesdrop.openstack.org/meetings/networking_fwaas/2015/network
 ing_fwaas.2015-08-19-23.59.log.html

 No - I did not say definitely - nowhere in that IRC log was that word used.

I'm sorry. Yes, that should be 'probably'.

--
Best regards,

Cao Xuan Hoang
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [CI] Cannot get compute endpoint when running nodepool.

2015-08-21 Thread Jordan Pittier
Hi,
Please have a look at
http://lists.openstack.org/pipermail/openstack-dev/2015-August/072556.html

Jordan

On Fri, Aug 21, 2015 at 4:12 AM, Tang Chen tangc...@cn.fujitsu.com wrote:

 Hi, all,



 I got following error message while running nodepoold with nodepoold -d  $
 DAEMON_ARGS



 

 2015-08-21 20:18:00,336 ERROR nodepool.NodePool: Exception cleaning up
 leaked nodes

 Traceback (most recent call last):

   File /home/nodepool/nodepool/nodepool.py, line 2399, in periodicCleanup

 self.cleanupLeakedInstances()

   File /home/nodepool/nodepool/nodepool.py, line 2410, in
 cleanupLeakedInstances

 servers = manager.listServers()

   File /home/nodepool/nodepool/provider_manager.py, line 570, in
 listServers

 self._servers = self.submitTask(ListServersTask())

   File /home/nodepool/nodepool/task_manager.py, line 119, in submitTask

 return task.wait()

   File /home/nodepool/nodepool/task_manager.py, line 57, in run

 self.done(self.main(client))

   File /home/nodepool/nodepool/provider_manager.py, line 136, in main

 servers = client.nova_client.servers.list()

   File /usr/local/lib/python2.7/dist-packages/shade/__init__.py, line
 318, in nova_client

 self.get_session_endpoint('compute')

   File /usr/local/lib/python2.7/dist-packages/shade/__init__.py, line
 811, in get_session_endpoint

 Error getting %s endpoint: %s % (service_key, str(e)))

 OpenStackCloudException: Error getting compute endpoint: Project ID not
 found: admin (Disable debug mode to suppress these details.) (HTTP 401)
 (Request-ID: req-fb986bff-3cad-48e1-9da9-915ac9ef5927)

 ---



 And in my case, the output info with cli listed as follows:

 $ openstack service list

 | ID   | Name | Type   |

 +--+--++

 | 213a7ba8f0564523a3d2769f77621fde | nova | compute|



 $ openstack project list

 +--++

 | ID   | Name   |

 +--++

 | 0a765fdfa79a438aae56202bdd5824e2 | admin  |



 $ keystone endpoint-list


 +--+---+-+-+-+--+

 |id|   region  |
   publicurl|   internalurl
 | adminurl|
 service_id|


 +--+---+-+-+-+--+

 | d89b009e81f04a17a26fd07ffbf83efb | regionOne |
 http://controller:8774/v2/%(tenant_id)s |
 http://controller:8774/v2/%(tenant_id)s |
 http://controller:8774/v2/%(tenant_id)s |
 213a7ba8f0564523a3d2769f77621fde |


 +--+---+-+-+-+--+





 Have you ever seen this error? And could you give me any advice to solve
 it?
 Thanks in advance.

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] SSL keys saving

2015-08-21 Thread Adam Heczko
Hi Stanislav,
agree that unification is very useful and it is the right direction.
While designing the implementation changes, please remember that we should
serve these cases:
1). There could be multiple environments served by one Fuel node. IMO we
should prepare a way to have a different private key per environment.
In some cases the private key will be common/shared across all environments,
in other cases not.
2). We should remember that there are various X.509 use cases for various
OpenStack services. Usually, X.509 is used only for TLS traffic encryption.
But in some cases we want to make authentication decision based on X.509
certificate.
This is useful for example with libvirt authentication, soon we'll have
mechanism for X.509 Keystone authentication.
In other words:
- we should remember to have consistent naming policy for X.509
certificates storage, and there should be implemented kind of 'per service'
certificate directory
- X.509 role will vary, depending on service. Sometimes only encryption,
sometimes also authentication.
3). Elliptic Curve Cryptography (ECC) support even adds more complexity, as
I could imagine a situation where we rely on RSA private keys for some
services, and we rely on ECC private keys for others.
In fact, IMO we should always create RSA and ECC private key pair per Fuel
environment, and then, in Fuel UI provide user an option which type of
cryptography (RSA or ECC) use for which service.
Or take simplified approach, and during cluster provisioning 'hard code'
which type of cryptography (which private key) use globally for all
services.

Of course the scope of implementation is to be discussed; the whole X.509 topic
is not an easy one, and doing things 'right' could be really time consuming.

Just my 2 cents :)


On Fri, Aug 21, 2015 at 11:10 AM, Stanislaw Bogatkin sbogat...@mirantis.com
 wrote:

 Hi folks.

 Today I want to discuss the way we save SSL keys for Fuel environments. As
 you may know, we have 2 ways to get a key:
 a. Generate it by Fuel (self-signed certificate will be created in this
 case). In this case we will generate private key, csr and crt in a
 pre-deployment hook on master node and then copy keypair to the nodes which
 needed it.

 b. Get a pre-generated keypair from user. In this case user should create
 keypair by himself and then upload it through Fuel UI settings tab. In this
 case the keypair will be saved in the nailgun database and then serialized
 into astute.yaml on cluster nodes, pulled from it by puppet and saved into
 a file.

 Second way has some flaws:
 1. We already have some keys for nodes and we store them on the master node.
 Storing keys in different places is bad, because:
 1.1. User experience - the user should remember that in some cases keys will
 be stored in the FS and in other cases in the DB.
 1.2. It creates implementation problems in other places - for example, we
 need to get the certificate to properly run OSTF tests, and now we have to
 implement two different ways to deliver that certificate to the OSTF
 container. The same goes for fuel-cli - we should somehow get the certificate
 from the DB and place it in the FS to use it.
 2. astute.yaml is the same for all nodes. Not all nodes need to have the
 private key, but currently we cannot control this.
 3. If keypair data is serialized into astute.yaml, that data will
 automatically be fetched when a diagnostic snapshot is created. In some cases
 this can lead to a security vulnerability, or we will have to write another
 workaround to cut it out of the diagnostic snapshot.


 So I propose to get rid of saving keypair in nailgun database and
 implement a way to always saving it to local FS on master node. We need to
 implement next items:

 - Change UI logic that saving keypair into DB to logic that will save it
 to local FS
 - Implement according fixes in fuel-library

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Adam Heczko
Security Engineer @ Mirantis Inc.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] SSL keys saving

2015-08-21 Thread Alexander Kurenyshev
Hi,
I absolutely agree with all the points mentioned below, especially the one
about security, because for now, as far as I know, we don't have any
mechanism to cut certs and keys out of snapshots.
And as an OSTF developer I'll be very grateful if there is a single way to
get certificates from the container.

On Fri, Aug 21, 2015 at 12:10 PM, Stanislaw Bogatkin sbogat...@mirantis.com
 wrote:

 Hi folks.

 Today I want to discuss the way we save SSL keys for Fuel environments. As
 you may know, we have two ways to get a key:
 a. Generate it with Fuel (a self-signed certificate will be created in this
 case). We generate the private key, CSR and CRT in a pre-deployment hook on
 the master node and then copy the keypair to the nodes that need it.

 b. Get a pre-generated keypair from the user. In this case the user creates
 the keypair themselves and uploads it through the Fuel UI settings tab. The
 keypair is then saved in the nailgun database, serialized into astute.yaml
 on the cluster nodes, pulled from it by puppet and saved into a file.

 The second way has some flaws:
 1. We already have some keys for the nodes and we store them on the master
 node. Storing keys in different places is bad because:
 1.1. User experience - the user has to remember that in some cases keys are
 stored in the FS and in other cases in the DB.
 1.2. It complicates the implementation in other places - for example, we
 need the certificate to properly run OSTF tests, and now we have to
 implement two different ways to deliver that certificate to the OSTF
 container. The same goes for fuel-cli - we have to somehow fetch the
 certificate from the DB and place it in the FS to use it.
 2. astute.yaml is the same for all nodes. Not all nodes need to have the
 private key, but currently we cannot control this.
 3. If the keypair data is serialized into astute.yaml, it will automatically
 be fetched when a diagnostic snapshot is created. In some cases this can
 lead to a security vulnerability, or we will have to write another crutch
 to cut it out of the diagnostic snapshot.


 So I propose to stop saving the keypair in the nailgun database and instead
 always save it to the local FS on the master node. We need to implement the
 following items:

 - Change the UI logic that saves the keypair into the DB to logic that saves
 it to the local FS
 - Implement the corresponding fixes in fuel-library

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Best Regards,

Alex
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] SSL keys saving

2015-08-21 Thread Evgeniy L
Hi Stanislaw,

I agree that users' certificates mustn't be saved in the Nailgun database, in
cluster attributes; in that case they can be seen in all the logs, which is a
terrible security problem. We already have a place where we keep
auto-generated certificates and ssh keys, and those are copied to the
specific nodes by Astute.

So the UI should send the file to a specific URL, and Nginx should be
configured to send an auth request to the backend. After the request is
authorized, Nginx should save the file into a predefined directory - the
same one we use for auto-generated certificates - so that a tool such as
OSTF can take certificates from a single place.
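One possible shape of that Nginx flow, as a sketch only: the upstream name,
URLs and endpoints below are illustrative assumptions, not the actual Fuel
configuration, and it relies on ngx_http_auth_request_module being compiled in.

```nginx
# Sketch under assumed names: "nailgun_backend" and the URLs are illustrative.
location /api/v1/ssl/upload {
    auth_request /auth;                 # authorize before accepting the upload
    proxy_pass http://nailgun_backend;  # backend stores the file in the
                                        # predefined certificates directory
}

location = /auth {
    internal;
    proxy_pass http://nailgun_backend/api/v1/auth/check;
    proxy_pass_request_body off;        # the auth check needs headers only
    proxy_set_header Content-Length "";
    proxy_set_header X-Original-URI $request_uri;
}
```

With this pattern Nginx rejects unauthorized uploads before the body ever
reaches the backend, and the file ends up in the same directory as the
auto-generated certificates.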

Thanks,

On Fri, Aug 21, 2015 at 12:10 PM, Stanislaw Bogatkin sbogat...@mirantis.com
 wrote:

 Hi folks.

 Today I want to discuss the way we save SSL keys for Fuel environments. As
 you may know, we have two ways to get a key:
 a. Generate it with Fuel (a self-signed certificate will be created in this
 case). We generate the private key, CSR and CRT in a pre-deployment hook on
 the master node and then copy the keypair to the nodes that need it.

 b. Get a pre-generated keypair from the user. In this case the user creates
 the keypair themselves and uploads it through the Fuel UI settings tab. The
 keypair is then saved in the nailgun database, serialized into astute.yaml
 on the cluster nodes, pulled from it by puppet and saved into a file.

 The second way has some flaws:
 1. We already have some keys for the nodes and we store them on the master
 node. Storing keys in different places is bad because:
 1.1. User experience - the user has to remember that in some cases keys are
 stored in the FS and in other cases in the DB.
 1.2. It complicates the implementation in other places - for example, we
 need the certificate to properly run OSTF tests, and now we have to
 implement two different ways to deliver that certificate to the OSTF
 container. The same goes for fuel-cli - we have to somehow fetch the
 certificate from the DB and place it in the FS to use it.
 2. astute.yaml is the same for all nodes. Not all nodes need to have the
 private key, but currently we cannot control this.
 3. If the keypair data is serialized into astute.yaml, it will automatically
 be fetched when a diagnostic snapshot is created. In some cases this can
 lead to a security vulnerability, or we will have to write another crutch
 to cut it out of the diagnostic snapshot.


 So I propose to stop saving the keypair in the nailgun database and instead
 always save it to the local FS on the master node. We need to implement the
 following items:

 - Change the UI logic that saves the keypair into the DB to logic that saves
 it to the local FS
 - Implement the corresponding fixes in fuel-library

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [oslo] Virtual Sprint for Documentation

2015-08-21 Thread Davanum Srinivas
Folks,

We'd love to have you all participate in a Virtual Sprint (2 days) to
improve Oslo Documentation. Please let us know which week is good in this
doodle poll:
http://doodle.com/ykskyym3inyvy6mf

We'll need to improve documentation for both existing libraries and the
brand new libraries being worked on for Liberty. We'll be using this
etherpad, so please drop us suggestions for areas that need to be improved
or things that bothered you which can help guide us to write better
documentation to cover those areas.
https://etherpad.openstack.org/p/oslo-liberty-virtual-doc-sprint

Thanks,
Dims

-- 
Davanum Srinivas :: https://twitter.com/dims
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] Midcycle Sprint Summary

2015-08-21 Thread hao wang
Thanks, Mike, for the summary. About "Getting rid of API extensions", I'd
like to help move the extensions over to the core. Should we begin this
work in Mitaka?

2015-08-17 23:53 GMT+08:00 Mike Perez thin...@gmail.com:

 A *summary* of the Cinder midcycle sprint, in attempt to keep your
 attention.
 Full meeting notes available [1].

 Image Caching
 =
 Glance Cinder backend store + Cinder Image Caching are so similar, it would
 just be confusing to operators. We'll document only about the Cinder Image
 Caching since the Glance backend store is limited in comparison.


 Revisit Spec Review
 ===
 The PTL in the future will be the only one to +2/A specs after sufficient
 +1's
 have been given, and notice of approval to follow in the Cinder meeting.


 When Specs and Blueprints are needed
 
 Done https://wiki.openstack.org/wiki/Cinder/how-to-contribute-new-feature


 People can guess UUID's that don't belong to them
 =
 Who cares. In past security discussions this has been a moot point.


 Update Hypervisor about extending attached volumes
 ==
 Add support to os-brick, but the Nova team has to be fine with this only
 supporting Libvirt for now.


 Microversions
 =
 We're doing it.


 Getting rid of API extensions
 =
 Move obvious things over (volume attach, type manager) to core API
 controllers.
 Deprecate existing extensions and have these use core API controller code.
 Get
 rid of the silly os- prefix in endpoints. Use Microversions to know when
 the
 API has the new extensions in core controllers.


 Third Party CI for target drivers and zone manager drivers
 ==
 Yes. This is already happening for Brocade and Cisco in Liberty!


 Cinder - Nova API communication
 =
 Agreed on how the Cinder API should be used. It requires changes and
 a Microversion bump on the Nova side. Design summit session to follow.


 Out of tree drivers
 ===
 No.


 Exposing force-detach of a volumes
 ==
 Yes, this will be in nova-manage in Liberty.


 HA and Cinder
 =
 We need cross project consensus first. There are existing issues that can
 be
 fixed without a DLM. Fix those first. Mike Perez will be leading cross
 project discussion at the summit.


 Replication V2
 ==
 John Griffith did extreme programming with the group and posted a review.
 A limited replication feature with async and manual failover should be in
 Liberty.


 ABC classes for each driver feature
 ===
 Keeping the current solution [2].


 [1] - https://etherpad.openstack.org/p/cinder-meetup-summer-2015
 [2] -
 http://lists.openstack.org/pipermail/openstack-dev/2015-June/067563.html

 --
 Mike Perez

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 

Best Wishes For You!
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [cinder] Infortrend driver and Netapp driver cannot pass gate-cinder-python27 cause jenkins failed

2015-08-21 Thread liuxinguo
The Infortrend and NetApp drivers' failures in gate-cinder-python27
caused a Jenkins -1 on patch https://review.openstack.org/#/c/201578/;
log link:
http://logs.openstack.org/78/201578/22/check/gate-cinder-python27/2212cdc/console.html

Engineers of the Infortrend and NetApp drivers, please check whether
something is wrong there. Thanks!

The error message is:

2015-08-21 04:36:51.449 |
2015-08-21 04:36:51.449 | Captured stderr:
2015-08-21 04:36:51.449 | 
2015-08-21 04:36:51.449 | cinder/zonemanager/fc_zone_manager.py:85: 
DeprecationWarning: object() takes no parameters
2015-08-21 04:36:51.449 |   class_._instance = object.__new__(class_, 
*args, **kwargs)
2015-08-21 04:36:51.449 |
2015-08-21 04:36:56.613 |
2015-08-21 04:36:56.613 | ==
2015-08-21 04:36:56.613 | Failed 2 tests - output below:
2015-08-21 04:36:56.613 | ==
2015-08-21 04:36:56.613 |
2015-08-21 04:36:56.613 | 
cinder.tests.unit.test_infortrend_common.InfortrendFCCommonTestCase.test_initialize_connection
2015-08-21 04:36:56.613 | 
--
2015-08-21 04:36:56.613 |
2015-08-21 04:36:56.613 | Captured traceback:
2015-08-21 04:36:56.613 | ~~~
2015-08-21 04:36:56.613 | Traceback (most recent call last):
2015-08-21 04:36:56.614 |   File 
/home/jenkins/workspace/gate-cinder-python27/.tox/py27/local/lib/python2.7/site-packages/mock/mock.py,
 line 1305, in patched
2015-08-21 04:36:56.614 | return func(*args, **keywargs)
2015-08-21 04:36:56.614 |   File 
cinder/tests/unit/test_infortrend_common.py, line 164, in 
test_initialize_connection
2015-08-21 04:36:56.614 | test_volume, test_connector)
2015-08-21 04:36:56.614 |   File 
/home/jenkins/workspace/gate-cinder-python27/.tox/py27/local/lib/python2.7/site-packages/oslo_concurrency/lockutils.py,
 line 252, in inner
2015-08-21 04:36:56.614 | return f(*args, **kwargs)
2015-08-21 04:36:56.614 |   File 
cinder/volume/drivers/infortrend/eonstor_ds_cli/common_cli.py, line 1264, in 
initialize_connection
2015-08-21 04:36:56.614 | volume, connector)
2015-08-21 04:36:56.614 |   File 
cinder/volume/drivers/infortrend/eonstor_ds_cli/common_cli.py, line 1276, in 
_initialize_connection_fc
2015-08-21 04:36:56.614 | self._do_fc_connection(volume, connector)
2015-08-21 04:36:56.614 |   File 
cinder/volume/drivers/infortrend/eonstor_ds_cli/common_cli.py, line 1308, in 
_do_fc_connection
2015-08-21 04:36:56.615 | channel_id = 
wwpn_channel_info[target_wwpn.upper()]['channel']
2015-08-21 04:36:56.615 | KeyError: '2000643E8C4C5F66'
2015-08-21 04:36:56.615 |
2015-08-21 04:36:56.615 |
2015-08-21 04:36:56.615 | 
cinder.tests.unit.volume.drivers.netapp.eseries.test_library.NetAppEseriesLibraryTestCase.test_initialize_connection_fc_no_target_wwpns
2015-08-21 04:36:56.615 | 
---
2015-08-21 04:36:56.615 |
2015-08-21 04:36:56.615 | Captured traceback:
2015-08-21 04:36:56.615 | ~~~
2015-08-21 04:36:56.615 | Traceback (most recent call last):
2015-08-21 04:36:56.615 |   File 
cinder/tests/unit/volume/drivers/netapp/eseries/test_library.py, line 594, in 
test_initialize_connection_fc_no_target_wwpns
2015-08-21 04:36:56.616 | get_fake_volume(), connector)
2015-08-21 04:36:56.616 |   File 
/home/jenkins/workspace/gate-cinder-python27/.tox/py27/local/lib/python2.7/site-packages/testtools/testcase.py,
 line 422, in assertRaises
2015-08-21 04:36:56.616 | self.assertThat(our_callable, matcher)
2015-08-21 04:36:56.616 |   File 
/home/jenkins/workspace/gate-cinder-python27/.tox/py27/local/lib/python2.7/site-packages/testtools/testcase.py,
 line 435, in assertThat
2015-08-21 04:36:56.616 | raise mismatch_error
2015-08-21 04:36:56.616 | testtools.matchers._impl.MismatchError: bound 
method NetAppESeriesLibrary.initialize_connection_fc of 
cinder.volume.drivers.netapp.eseries.library.NetAppESeriesLibrary object at 
0x7faf81dc43d0 returned {'driver_volume_type': 'fibre_channel', 'data': 
{'target_lun': 0, 'initiator_target_map': {'1090fa0d6754': 
['2000643e8c4c5f66']}, 'access_mode': 'rw', 'target_wwn': ['2000643e8c4c5f66'], 
'target_discovered': True}}
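One plausible reading of the Infortrend KeyError above: the failing line
uppercases the target WWPN (`target_wwpn.upper()`) while the missing key
'2000643E8C4C5F66' is exactly the lowercase WWPN seen in the NetApp output,
uppercased - i.e. the WWPN-to-channel map may have been built with a
different letter case than the lookup uses. A minimal sketch of normalizing
case on both sides (`build_wwpn_map` is a hypothetical helper for
illustration, not the driver's actual code):

```python
# Hypothetical sketch: if the WWPN->channel map is built with lowercase keys
# but looked up with target_wwpn.upper(), the lookup raises KeyError.
# Storing keys upper-cased and uppercasing on lookup avoids the mismatch.
def build_wwpn_map(entries):
    """entries: iterable of (wwpn, channel_id); keys stored upper-cased."""
    return {wwpn.upper(): {'channel': channel} for wwpn, channel in entries}

wwpn_channel_info = build_wwpn_map([('2000643e8c4c5f66', 1)])

# Lookup with an upper-cased WWPN now succeeds regardless of input case.
channel_id = wwpn_channel_info['2000643e8c4c5f66'.upper()]['channel']
```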


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][SR-IOV]How to assign VF to a VM?

2015-08-21 Thread 于洁
Hi all,


I am trying to configure SR-IOV on OpenStack Kilo following the information below.
http://www.qlogic.com/solutions/Documents/UsersGuide_OpenStack_SR-IOV.pdf
https://wiki.openstack.org/wiki/SR-IOV-Passthrough-For-Networking


Everything works well up to creating the port, but after creating a VM with
that port, the VM went into the ERROR state. Below is the port information:
neutron port-show 620187c5-b4ac-4aca-bdeb-96205503344d
+-----------------------+-------------------------------------------------------------------------------------------+
| Field                 | Value                                                                                     |
+-----------------------+-------------------------------------------------------------------------------------------+
| admin_state_up        | True                                                                                      |
| allowed_address_pairs |                                                                                           |
| binding:host_id       | compute                                                                                   |
| binding:profile       | {"pci_slot": ":09:11.5", "physical_network": "external", "pci_vendor_info": "8086:1520"}  |
| binding:vif_details   | {}                                                                                        |
| binding:vif_type      | binding_failed                                                                            |
| binding:vnic_type     | direct                                                                                    |
| device_id             | baab9ba5-80e8-45f7-b86a-8ac3ce8ba944                                                      |
| device_owner          | compute:None                                                                              |
| extra_dhcp_opts       |                                                                                           |
| fixed_ips             | {"subnet_id": "86849224-a0a7-4059-a6b0-689a2b35c995", "ip_address": "10.254.4.64"}        |
| id                    | 620187c5-b4ac-4aca-bdeb-96205503344d                                                      |
| mac_address           | fa:16:3e:8a:92:9b                                                                         |
| name                  |                                                                                           |
| network_id            | db078c2d-63f1-40c0-b6c3-b49de487362b                                                      |
| security_groups       | 8e12a661-09b5-41ac-ade8-fddf6d997262                                                      |
| status                | DOWN                                                                                      |
| tenant_id             | 85aa4ef08044470dab1608395e5cac26                                                          |
+-----------------------+-------------------------------------------------------------------------------------------+



The logs of /var/log/neutron/server.log and /var/log/nova/nova-conductor.log
are attached.



Any suggestions would be appreciated.
Thanks.


Yu

neutron-server-log.txt
Description: Binary data


nova-conductor-log.txt
Description: Binary data
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Cross-project meeting times

2015-08-21 Thread Thierry Carrez
Anne Gentle wrote:
 
 Hi all,
 
 In last week's TC Highlights blog post [1] I asked if there is interest
 in moving the cross-project meeting. Historically it is held after the
 TC meeting, but there isn't a requirement for those timings to line up.
 I've heard from European and Eastern Standard Time contributors that
 it's a tough time to meet half the year. It's also a bit early for APAC,
 my apologies for noting this but still proposing to meet earlier.
 
 I'd like to propose a new cross-project meeting time, 1800 Tuesdays. To
 that end I've created a review with the proposed time:
 
 https://review.openstack.org/214605
 
 Please take a look, see if you think it could work, and let us know
 either on this list or the review itself.

Commented on the review... I think 1800 UTC is not significantly more
convenient for Europeans (dinner hours between 1700 and 1900 UTC)
compared to 2100 UTC. It makes it more convenient for East-of-Moscow
Russians, but we lose Australia in the process.

If we are to lose Australia anyway, I would move even earlier (say 15:00
or 16:00 UTC) and cover China - US West. That could be a good rotation
with the one at 21:00 UTC which covers Australia - West Europe.

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Fuel] SSL keys saving

2015-08-21 Thread Stanislaw Bogatkin
Hi folks.

Today I want to discuss the way we save SSL keys for Fuel environments. As
you may know, we have two ways to get a key:
a. Generate it with Fuel (a self-signed certificate will be created in this
case). We generate the private key, CSR and CRT in a pre-deployment hook on
the master node and then copy the keypair to the nodes that need it.

b. Get a pre-generated keypair from the user. In this case the user creates
the keypair themselves and uploads it through the Fuel UI settings tab. The
keypair is then saved in the nailgun database, serialized into astute.yaml
on the cluster nodes, pulled from it by puppet and saved into a file.

The second way has some flaws:
1. We already have some keys for the nodes and we store them on the master
node. Storing keys in different places is bad because:
1.1. User experience - the user has to remember that in some cases keys are
stored in the FS and in other cases in the DB.
1.2. It complicates the implementation in other places - for example, we
need the certificate to properly run OSTF tests, and now we have to
implement two different ways to deliver that certificate to the OSTF
container. The same goes for fuel-cli - we have to somehow fetch the
certificate from the DB and place it in the FS to use it.
2. astute.yaml is the same for all nodes. Not all nodes need to have the
private key, but currently we cannot control this.
3. If the keypair data is serialized into astute.yaml, it will automatically
be fetched when a diagnostic snapshot is created. In some cases this can
lead to a security vulnerability, or we will have to write another crutch
to cut it out of the diagnostic snapshot.


So I propose to stop saving the keypair in the nailgun database and instead
always save it to the local FS on the master node. We need to implement the
following items:

- Change the UI logic that saves the keypair into the DB to logic that saves
it to the local FS
- Implement the corresponding fixes in fuel-library
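For reference, the self-signed flow in option (a) comes down to something
like the following sketch. File names, key size, validity period and the CN
are illustrative assumptions, not the actual hook's values:

```shell
# Illustrative only: generate private key, CSR and a self-signed CRT,
# then bundle key + cert into the keypair file copied to the nodes.
openssl genrsa -out env.key 2048
openssl req -new -key env.key -subj "/CN=fuel.example.local" -out env.csr
openssl x509 -req -days 365 -in env.csr -signkey env.key -out env.crt
cat env.key env.crt > env.pem
```

The resulting env.pem bundle is what the pre-deployment hook would then copy
to the nodes that need it.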
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][SR-IOV]How to assign VF to a VM?

2015-08-21 Thread Moshe Levi
The problem is that the sriov mechanism driver failed to bind the port.

From the log I see that you are working with agent_required=True, but the
device mapping is empty: {u'devices': 0, u'device_mappings': {}}.
Please check the agent configuration file and verify that you have the following:
[securitygroup]
firewall_driver = neutron.agent.firewall.NoopFirewallDriver
[sriov_nic]
physical_device_mappings = physnet1:eth1
exclude_devices =

Also, can you send the output of the "ps -ef | grep neutron-sriov-nic-agent"
command?



From: 于洁 [mailto:16189...@qq.com]
Sent: Friday, August 21, 2015 12:01 PM
To: openstack-operators openstack-operat...@lists.openstack.org; 
openstack-dev openstack-dev@lists.openstack.org
Subject: [openstack-dev] [Neutron][SR-IOV]How to assign VF to a VM?

Hi all,

I am trying to configure SR-IOV on OpenStack Kilo following the information below.
http://www.qlogic.com/solutions/Documents/UsersGuide_OpenStack_SR-IOV.pdf
https://wiki.openstack.org/wiki/SR-IOV-Passthrough-For-Networking

Everything works well up to creating the port, but after creating a VM with
that port, the VM went into the ERROR state. Below is the port information:
neutron port-show 620187c5-b4ac-4aca-bdeb-96205503344d
+-----------------------+-------------------------------------------------------------------------------------------+
| Field                 | Value                                                                                     |
+-----------------------+-------------------------------------------------------------------------------------------+
| admin_state_up        | True                                                                                      |
| allowed_address_pairs |                                                                                           |
| binding:host_id       | compute                                                                                   |
| binding:profile       | {"pci_slot": ":09:11.5", "physical_network": "external", "pci_vendor_info": "8086:1520"}  |
| binding:vif_details   | {}                                                                                        |
| binding:vif_type      | binding_failed                                                                            |
| binding:vnic_type     | direct                                                                                    |
| device_id             | baab9ba5-80e8-45f7-b86a-8ac3ce8ba944                                                      |
| device_owner          | compute:None                                                                              |
| extra_dhcp_opts       |                                                                                           |
| fixed_ips             | {"subnet_id": "86849224-a0a7-4059-a6b0-689a2b35c995", "ip_address": "10.254.4.64"}        |
| id                    | 620187c5-b4ac-4aca-bdeb-96205503344d                                                      |
| mac_address           | fa:16:3e:8a:92:9b                                                                         |
| name                  |                                                                                           |
| network_id            | db078c2d-63f1-40c0-b6c3-b49de487362b                                                      |
| security_groups       | 8e12a661-09b5-41ac-ade8-fddf6d997262                                                      |
| status                | DOWN                                                                                      |
| tenant_id             | 85aa4ef08044470dab1608395e5cac26                                                          |
+-----------------------+-------------------------------------------------------------------------------------------+

The logs of /var/log/neutron/server.log and /var/log/nova/nova-conductor.log
are attached.

Any suggestions would be appreciated.
Thanks.

Yu

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev