Re: [openstack-dev] [all][oslo.db][nova] TL; DR Things everybody should know about Galera

2015-02-10 Thread Matthew Booth
On 09/02/15 18:15, Jay Pipes wrote:
 On 02/09/2015 01:02 PM, Attila Fazekas wrote:
 I do not see why not to use `FOR UPDATE` even with multi-writer, or
 whether the retry/swap way really solves anything here.
 snip
 Did I miss something?
 
 Yes. Galera does not replicate the (internal to InnoDB) row-level locks
 that are needed to support SELECT FOR UPDATE statements across multiple
 cluster nodes.
 
 https://groups.google.com/forum/#!msg/codership-team/Au1jVFKQv8o/QYV_Z_t5YAEJ

Is that the right link, Jay? I'm taking your word on the write-intent
locks not being replicated, but that link seems to say the opposite.
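For readers following the thread: the "retry/swap" approach referenced above replaces pessimistic row locks with an optimistic compare-and-swap, which works across Galera nodes because the conflict is detected by the UPDATE itself rather than by a replicated lock. Below is a hedged, dependency-free sketch of the pattern, not nova's or oslo.db's actual code; the table and state names are illustrative.

```python
# Sketch of the compare-and-swap ("retry/swap") update pattern.
# Instead of SELECT ... FOR UPDATE, the writer takes no lock: it reads
# the row, then issues an UPDATE whose WHERE clause includes the value
# it read. If zero rows match, another writer won the race, so re-read
# and retry. The SQL equivalent is roughly:
#   UPDATE instances SET vm_state = :new
#    WHERE id = :id AND vm_state = :old
# A dict stands in for the table here so the logic is runnable anywhere.

rows = {"instance-1": "BUILDING"}  # id -> vm_state

def cas_update(row_id, expected, new):
    """Emulate 'UPDATE ... WHERE id = :id AND vm_state = :expected'."""
    if rows.get(row_id) == expected:
        rows[row_id] = new
        return 1  # rowcount: the conditional update matched
    return 0      # rowcount 0: somebody else changed the row first

def set_state(row_id, new, max_retries=5):
    """Read, then swap; retry a lost race instead of holding a lock."""
    for _ in range(max_retries):
        current = rows[row_id]  # plain SELECT, no lock taken
        if cas_update(row_id, current, new):
            return True
    return False
```

The retry loop is bounded so a persistently contended row fails loudly instead of spinning forever.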

Matt
-- 
Matthew Booth
Red Hat Engineering, Virtualisation Team

Phone: +442070094448 (UK)
GPG ID:  D33C3490
GPG FPR: 3733 612D 2D05 5458 8A8A 1600 3441 EA19 D33C 3490

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Hyper-V meeting

2015-02-10 Thread Peter Pouliot
Hi All,

Due to multiple conflicts today, we need to postpone the Hyper-V discussion 
until next week.

Peter J. Pouliot CISSP
Microsoft Cloud+Enterprise Solutions
C:\OpenStack
New England Research & Development Center
1 Memorial Drive
Cambridge, MA 02142
P: 1.(857).4536436
E: ppoul...@microsoft.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Glance][Artifacts] Object Version format: SemVer vs pep440

2015-02-10 Thread Alexander Tivelkov
Hi folks,

One of the key features that we are adding to Glance with the
introduction of Artifacts is the ability to have multiple versions of
the same object in the repository: this gives us the possibility to
query for the latest version of something, keep track on the changes
history, and build various continuous delivery solutions on top of
Artifact Repository.

We need to determine the format and rules we will use to define,
increment and compare versions of artifacts in the repository. There
are two alternatives we have to choose from, and we are seeking advice
on this choice.

First, there is Semantic Versioning specification, available at [1].
It is a very generic spec, widely used and adopted in many areas of
software development. It is quite straightforward: 3 mandatory numeric
components for version number, plus optional string labels for
pre-release versions and build metadata.

And then there is PEP-440 spec, which is a recommended approach to
identifying versions and specifying dependencies when distributing
Python. It is a pythonic way to set versions of python packages,
including PIP version strings.

Conceptually PEP-440 and Semantic Versioning are similar in purpose,
but slightly different in syntax. Notably, the count of version number
components and rules of version precedence resolution differ between
PEP-440 and SemVer. Unfortunately, the two version string formats are
not compatible, so we have to choose one or the other.
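To make the incompatibility concrete: SemVer spells a pre-release as "1.0.0-alpha" and ignores "+build" metadata for precedence, while PEP-440 would spell it "1.0.0a0" and adds segments (post-releases, dev-releases, epochs) that SemVer has no equivalent for. The following is a simplified sketch of the SemVer side of the precedence rules (illustrative only, not a full implementation of the spec):

```python
import re

# Minimal SemVer precedence sketch: a version carrying a pre-release
# label sorts *before* the plain release, and build metadata after '+'
# is ignored entirely for precedence.
SEMVER = re.compile(
    r"^(\d+)\.(\d+)\.(\d+)(?:-([0-9A-Za-z.-]+))?(?:\+[0-9A-Za-z.-]+)?$")

def semver_key(version):
    """Return a tuple that sorts in SemVer precedence order."""
    m = SEMVER.match(version)
    if not m:
        raise ValueError("not a SemVer string: %s" % version)
    major, minor, patch = (int(m.group(i)) for i in (1, 2, 3))
    pre = m.group(4)
    # (0, label) < (1, "") ensures "1.0.0-alpha" < "1.0.0"
    pre_key = (0, pre) if pre else (1, "")
    return (major, minor, patch) + pre_key
```

(The real spec also compares dot-separated pre-release identifiers numerically where possible; that refinement is omitted here for brevity.)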

According to my initial vision, the Artifact Repository should be as
generic as possible in terms of potential adoption. Artifacts were
never supposed to be Python packages only, and the projects which
will create and use these artifacts are not necessarily limited to
being pythonic; the developers of those projects may not be Python
developers! So I would really like to avoid any Python-specific
notation, such as PEP-440, for artifacts.

I've put this vision into a spec [3] which also contains a proposal on
how to convert the semver-compatible version strings into the
comparable values which may be mapped to database types, so a database
table may be queried, ordered and filtered by the object version.
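The actual conversion scheme is the one defined in the spec under review. Purely as an illustration of the general idea, one simple approach is to flatten the numeric components into a fixed-width, zero-padded string, so that a plain string index in the database sorts in version order:

```python
# Illustrative only -- the real mapping is defined in the spec [3].
# Zero-pad each numeric component to a fixed width so that the
# database's ordinary string comparison on the column matches
# numeric version order.
def sortable(version, width=8):
    major, minor, patch = (int(p) for p in version.split("."))
    return ".".join("%0*d" % (width, p) for p in (major, minor, patch))

# Naive string comparison gets "1.10.0" vs "1.9.2" wrong; the padded
# form gets it right while remaining an ordinary VARCHAR column.
```

Pre-release labels would need an extra sort component on top of this, which is where most of the design work in the spec lies.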

So, we need some feedback on this topic. Would you prefer artifacts to
be versioned with SemVer or with PEP-440 notation? Are you interested
in having some generic utility which will map versions (in either
format) to database columns? If so, which version format would you
prefer?

We are on a tight schedule here, as we want to begin landing
artifact-related code soon. So, I would appreciate your feedback
during this week: here in the ML or in the comments to [3] review.

Thanks!



[1] www.semver.org
[2] www.python.org/dev/peps/pep-0440/
[3] https://review.openstack.org/#/c/139595/

--
Regards,
Alexander Tivelkov

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stable] juno is fubar in the gate

2015-02-10 Thread Doug Hellmann


On Tue, Feb 10, 2015, at 05:19 AM, Thierry Carrez wrote:
 Joe, Matt & Matthew:
 
 I hear your frustration with broken stable branches. With my
 vulnerability management team member hat, responsible for landing
 patches there with a strict deadline, I can certainly relate with the
 frustration of having to dive in to unbork the branch in the first
 place, rather than concentrate on the work you initially planned on
 doing.
 
 That said, wearing my stable team member hat, I think it's a bit unfair
 to say that things are worse than they were and call for dramatic
 action. The stable branch team put a structure in place to try to
 continuously fix the stable branches rather than reactively fix it when
 we need it to work. Those champions have been quite active[1] unbreaking
 it in the past months. I'd argue that the branch is broken much less
 often than it used to. That doesn't mean it's never broken, though, or
 that those people are magicians.
 
 One issue in the current situation is that the two groups (you and the
 stable maintainers) seem to work in parallel rather than collaborate.
 It's quite telling that the two groups maintained separate etherpads to
 keep track of the fixes that needed landing.

I agree we should figure out a way to communicate better between the
various teams involved in fixing the issues. Consolidating some of the
communication channels we use should help. For example, if we settle on
a single channel to use on IRC then we can log that channel so anyone
can review the history of a given issue, either to catch up on what
happened overnight or to understand how an issue was resolved long after
the fact. At one point we created #openstack-gate for this, but I don't
see many people there now and it's not being logged. Maybe we should
just use #openstack-dev, since that isn't as active for other purposes
now?

 
 [1] https://etherpad.openstack.org/p/stable-tracker
 
 Matthew Treinish wrote:
  So I think it's time we called the icehouse branch and marked it EOL. We
  originally conditioned the longer support window on extra people stepping
  forward to keep things working. I believe this latest issue is just the 
  latest
  indication that this hasn't happened. Issue 1 listed above is being caused 
  by
  the icehouse branch during upgrades. The fact that a stable release was 
  pushed
  at the same time things were wedged on the juno branch is just the latest
  evidence to me that things aren't being maintained as they should be. 
  Looking at
  the #openstack-qa irc log from today or the etherpad about trying to sort 
  this
  issue should be an indication that no one has stepped up to help with the
  maintenance and it shows given the poor state of the branch.
 
 I disagree with the assessment. People have stepped up. I think the
 stable branches are less often broken than they were, and stable branch
 champions (as their tracking etherpad shows) have made a difference.

They seem to be pretty consistently broken, at least lately, but the
causes have also been consistent. This cycle we changed the way we
manage libraries (dropping alpha release versions) and test tools
(branchless tempest) without fully understanding the effects those
changes would have throughout the complex testing systems we have in
place. As the changes have rippled through the system, we've found more
and more unintended consequences, some of which have required making
other big changes to the way test jobs are defined and to how we manage
requirements. I'm not convinced we would have identified all of those
issues even if we had spent more time reasoning through the changes
before making them, given the complexity. So, although it has been a
frustrating period, in retrospect I think there were probably a few
things we could have done differently early on but it was largely
necessary to do it this way to uncover the issues we couldn't predict.

I think we're approaching a good state, too. The series of patches Joe,
Matt, and Sean came up with yesterday should unwedge icehouse. Then we
can resume the work to cap the requirements lists, and that will avoid
breaking backwards compatibility test jobs with new releases of
libraries. We've also stopped testing master branches of libraries
against the stable branches where we won't be running the code, so
development on the master branches of those libraries should already be
unblocked (although we're not releasing anything until this is sorted
out, which is another sort of block).

 There just have been more issues than usual recently, and they probably
 couldn't keep track. It's not a fun job to babysit stable branches;
 belittling the stable branch champions' results is not the best way to
 encourage them to continue in this position. I agree that they could
 work more with the QA team when they get overwhelmed, and raise more red
 flags when they just can't keep up.
 
 I also disagree with the proposed solution. We announced a support
 timeframe for Icehouse, our 

[openstack-dev] EOL and Stable Contributions (was Juno is flubber at the gate)

2015-02-10 Thread Kevin Bringard (kevinbri)
Since this is sort of a topic change, I opted to start a new thread. I was 
reading over the Juno is Fubar at the Gate thread, and this bit stood out to 
me:

  So I think it's time we called the icehouse branch and marked it EOL. We
  originally conditioned the longer support window on extra people stepping
  forward to keep things working. I believe this latest issue is just the 
  latest
  indication that this hasn't happened. Issue 1 listed above is being caused 
  by
  the icehouse branch during upgrades. The fact that a stable release was 
  pushed
  at the same time things were wedged on the juno branch is just the latest
  evidence to me that things aren't being maintained as they should be. 
  Looking at
  the #openstack-qa irc log from today or the etherpad about trying to sort 
  this
  issue should be an indication that no one has stepped up to help with the
  maintenance and it shows given the poor state of the branch.

Most specifically: 

We originally conditioned the longer support window on extra people stepping 
forward to keep things working ... should be an indication that no one has 
stepped up to help with the maintenance and it shows given the poor state of 
the branch.

I've been talking with a few people about this very thing lately, and I think 
much of it is caused by what appears to be our actively discouraging people 
from working on it. Most notably, ATC is only being given to folks committing 
to the current branch 
(https://ask.openstack.org/en/question/45531/atc-pass-for-the-openstack-summit/).
 Secondly, it's difficult to get stack-analytics credit for back ports, as the 
preferred method is to cherry pick the code, and that keeps the original 
author's name. I've personally gotten a few commits into stable, but have 
nothing to show for it in stack-analytics (if I'm doing it wrong, I'm happy to 
be corrected).

My point here isn't to complain that I, or others, are not getting credit, but 
to point out that I don't know what we expected to happen to stable branches 
when we actively dis-incentivize people from working on them. Working on 
hardening old code is generally far less interesting than working on the cool 
shiny new features, and many of the productionalization issues we run into 
aren't uncovered until the code is run at scale, which in turn usually means a 
big company that likely isn't chasing trunk.

My fear is that we're going in a direction where trunk is the sole focus and 
we're subsequently going to lose the support of the majority of the operators 
and enterprises, at which point we'll be a fun research project, but little more.

-- Kevin
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance][Artifacts] Object Version format: SemVer vs pep440

2015-02-10 Thread Monty Taylor
On 02/10/2015 10:28 AM, Alexander Tivelkov wrote:
 Hi folks,
 
 One of the key features that we are adding to Glance with the
 introduction of Artifacts is the ability to have multiple versions of
 the same object in the repository: this gives us the possibility to
 query for the latest version of something, keep track on the changes
 history, and build various continuous delivery solutions on top of
 Artifact Repository.
 
 We need to determine the format and rules we will use to define,
 increment and compare versions of artifacts in the repository. There
 are two alternatives we have to choose from, and we are seeking advice
 on this choice.
 
 First, there is Semantic Versioning specification, available at [1].
 It is a very generic spec, widely used and adopted in many areas of
 software development. It is quite straightforward: 3 mandatory numeric
 components for version number, plus optional string labels for
 pre-release versions and build metadata.
 
 And then there is PEP-440 spec, which is a recommended approach to
 identifying versions and specifying dependencies when distributing
 Python. It is a pythonic way to set versions of python packages,
 including PIP version strings.
 
 Conceptually PEP-440 and Semantic Versioning are similar in purpose,
 but slightly different in syntax. Notably, the count of version number
 components and rules of version precedence resolution differ between
 PEP-440 and SemVer. Unfortunately, the two version string formats are
 not compatible, so we have to choose one or the other.
 
 According to my initial vision, the Artifact Repository should be as
 generic as possible in terms of potential adoption. Artifacts were
 never supposed to be Python packages only, and the projects which
 will create and use these artifacts are not necessarily limited to
 being pythonic; the developers of those projects may not be Python
 developers! So I would really like to avoid any Python-specific
 notation, such as PEP-440, for artifacts.
 
 I've put this vision into a spec [3] which also contains a proposal on
 how to convert the semver-compatible version strings into the
 comparable values which may be mapped to database types, so a database
 table may be queried, ordered and filtered by the object version.
 
 So, we need some feedback on this topic. Would you prefer artifacts to
 be versioned with SemVer or with PEP-440 notation? Are you interested
 in having some generic utility which will map versions (in either
 format) to database columns? If so, which version format would you
 prefer?
 
 We are on a tight schedule here, as we want to begin landing
 artifact-related code soon. So, I would appreciate your feedback
 during this week: here in the ML or in the comments to [3] review.

Because you should have more things to look at:

http://docs.openstack.org/developer/pbr/semver.html

We've already done some work to try to reconcile the world of semver
with the world of PEP440 over in PBR land.
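For a rough feel of what such a reconciliation involves, here is a toy SemVer-to-PEP-440 translation. The mapping below was written for this note and only covers the common pre-release labels; it is not pbr's code, and PEP-440 has no build-metadata component, so the "+..." part is simply dropped. See the linked doc for pbr's real rules.

```python
import re

# Common SemVer pre-release labels and their PEP 440 spellings.
LABELS = {"alpha": "a", "beta": "b", "rc": "rc"}

def semver_to_pep440(version):
    """Translate e.g. '1.2.3-alpha.1' -> '1.2.3a1' (illustrative only)."""
    m = re.match(r"^(\d+\.\d+\.\d+)(?:-([A-Za-z]+)\.?(\d+)?)?(?:\+.*)?$",
                 version)
    release, label, num = m.group(1), m.group(2), m.group(3)
    if not label:
        return release          # plain releases translate unchanged
    # Unknown labels raise KeyError here; a real converter needs a policy.
    return "%s%s%s" % (release, LABELS[label], num or "0")
```

The lossiness (dropped build metadata, ambiguous labels) is exactly why the two formats cannot simply be used interchangeably.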

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] A question about strange behavior of oslo.config in eclipse

2015-02-10 Thread Doug Hellmann


On Tue, Feb 10, 2015, at 04:29 AM, Joshua Zhang wrote:
 Hi Stackers,
 A question about oslo.config, maybe a very silly question, but please
 tell me if you know. Thanks in advance.
 
 I know oslo has removed the 'oslo' namespace: oslo.config has been
 changed to oslo_config, and it also retains backwards compat.
 
 I found I can run OpenStack successfully, but whenever I run something
 in eclipse/pydev it always fails with 'NoSuchOptError: no such option:
 policy_file'. I can change 'oslo.config' to 'oslo_config' in
 neutron/openstack/common/policy.py temporarily to bypass this problem
 when I want to debug something in eclipse. But I want to know why.
 Who can help me explain? Thanks.

It sounds like you have code in one module using an option defined
somewhere else and relying on import ordering to cause that option to be
defined. The import_opt() method of the ConfigOpts class is meant to
help make these cross-module option dependencies explicit [1]. If you
provide a more detailed traceback I may be able to give more specific
advice about where changes are needed.
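To illustrate Doug's point without requiring oslo.config to be installed, here is a dependency-free sketch of the failure mode and of what an import_opt()-style explicit dependency buys. The registry and function names are invented for this example; the real API is cfg.ConfigOpts.import_opt() as described in [1].

```python
# Stdlib sketch of the cross-module option dependency problem.
# A tiny registry dict stands in for oslo.config's global CONF object.
registry = {}

def register_opt(name, default):
    registry[name] = default

def get_opt(name):
    if name not in registry:
        # Analogous to oslo.config's NoSuchOptError.
        raise KeyError("no such option: %s" % name)
    return registry[name]

def import_opt(name, register_fn):
    """Like CONF.import_opt(): force the defining module to register
    its option before anyone reads it, instead of relying on whatever
    import order the runner (tests, eclipse/pydev, ...) happens to use."""
    if name not in registry:
        register_fn()

# The 'policy module' registers its option only when its code runs:
def policy_module_setup():
    register_opt("policy_file", "policy.json")

# Without import_opt, get_opt("policy_file") works only if the policy
# module was already imported; with it, the dependency is explicit:
import_opt("policy_file", policy_module_setup)
```

This is why the error shows up only under some runners: a different entry point imports modules in a different order, and an implicit registration never happens.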

Doug

[1]
http://docs.openstack.org/developer/oslo.config/configopts.html?highlight=import_opt#oslo_config.cfg.ConfigOpts.import_opt

 
 
 -- 
 Best Regards
 Zhang Hua(张华)
 Software Engineer | Canonical
 IRC:  zhhuabj
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Cache for packages on master node

2015-02-10 Thread Simon Pasquier
Hello Tomasz,
In a previous life, I used squid to speed up packages downloads and it
worked just fine...
Simon

On Tue, Feb 10, 2015 at 3:24 PM, Tomasz Napierala tnapier...@mirantis.com
wrote:

 Hi,

 We are currently redesigning our approach to upstream distributions and
 obviously we will need some cache system for packages on master node. It
 should work for deb and rpm packages, and be able to serve up to 200 nodes.
 I know we had bad experience in the past, can you guys share your thought
 on that?
 I just collected what was mentioned in other discussions:
 - approx
 - squid
 - apt-cacher-ng
 - ?

 Regards,
 --
 Tomasz 'Zen' Napierala
 Sr. OpenStack Engineer
 tnapier...@mirantis.com







 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] release request for python-novaclient

2015-02-10 Thread Doug Hellmann


On Tue, Feb 10, 2015, at 07:25 AM, Sean Dague wrote:
 On 02/09/2015 10:55 PM, Michael Still wrote:
  The previous policy is that we do a release when requested or when a
  critical bug fix merges. I don't see any critical fixes awaiting
  release, but I am not opposed to a release.
  
  The reason I didn't do this yesterday is that Joe wanted some time to
  pin the stable requirements, which I believe he is still working on.
  Let's give him some time unless this is urgent.
 
 Going forward I'd suggest that we set a goal to do a monthly nova-client
 release to get fixes out into the wild in a more regular cadence. Would
 be nice to not have this just land as a big bang release at the end of a
 cycle.

We review the changes in Oslo libraries weekly. Is there any reason not
to do the same with client libs? Given the automation in place for
creating releases, I think the whole process (including release notes)
is down to just a few minutes now. The tagging script is in the
openstack-infra/release-tools repository and I'd be happy to put the
release notes script there, too, if others want to use it.

Doug

 
   -Sean
 
  
  Michael
  
  On Tue, Feb 10, 2015 at 2:45 PM, melanie witt melwi...@gmail.com wrote:
  On Feb 6, 2015, at 8:17, Matt Riedemann mrie...@linux.vnet.ibm.com wrote:
 
  We haven't done a release of python-novaclient in awhile (2.20.0 was 
  released on 2014-9-20 before the Juno release).
 
  It looks like there are some important feature adds and bug fixes on 
  master so we should do a release, specifically to pick up the change for 
  keystone v3 support [1].
 
  So can this be done now or should this wait until closer to the Kilo 
  release (library releases are cheap so I don't see why we'd wait).
 
  Thanks for bringing this up -- there are indeed a lot of important 
  features and fixes on master.
 
  I agree we should do a release as soon as possible, and I don't think 
  there's any reason to wait until closer to Kilo.
 
  melanie (melwitt)
 
 
 
 
 
  __
  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
  
  
  
 
 
 -- 
 Sean Dague
 http://dague.net
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stable] juno is fubar in the gate

2015-02-10 Thread Matthew Treinish
On Tue, Feb 10, 2015 at 11:19:20AM +0100, Thierry Carrez wrote:
 Joe, Matt & Matthew:
 
 I hear your frustration with broken stable branches. With my
 vulnerability management team member hat, responsible for landing
 patches there with a strict deadline, I can certainly relate with the
 frustration of having to dive in to unbork the branch in the first
 place, rather than concentrate on the work you initially planned on doing.
 
 That said, wearing my stable team member hat, I think it's a bit unfair
 to say that things are worse than they were and call for dramatic
 action. The stable branch team put a structure in place to try to
 continuously fix the stable branches rather than reactively fix it when
 we need it to work. Those champions have been quite active[1] unbreaking
 it in the past months. I'd argue that the branch is broken much less
 often than it used to. That doesn't mean it's never broken, though, or
 that those people are magicians.

I don't agree at all, for 2 reasons. The first being in every discussion we had at 2
summits I raised the increased maint. burden of a longer support window and
was told that people were going to stand up so it wouldn't be an issue. I have
yet to see that happen. I have not seen anything to date that would convince
me that we are at all ready to be maintaining 3 stable branches at once.

The second is that while I've seen that etherpad, I still think there is a
huge disconnect here about what actually maintaining the branches requires. The
issue which I'm raising is about issues related to the gating infrastructure and
how to ensure that things stay working. There is a non-linear overhead involved
with making sure any gating job stays working. (on stable or master) People need
to take ownership of jobs to make sure they keep working.

 
 One issue in the current situation is that the two groups (you and the
 stable maintainers) seem to work in parallel rather than collaborate.
 It's quite telling that the two groups maintained separate etherpads to
 keep track of the fixes that needed landing.

I don't actually view it that way. Just looking at the etherpad, it covers a very
small subset of the actual types of issues we're raising here.

For example, there was a week in late Nov. when 2 consecutive oslo project
releases broke the stable gates. After we unwound all of this and landed the
fixes in the branches, the next step was to make changes to ensure we didn't
allow breakages in the same way again:

http://lists.openstack.org/pipermail/openstack-dev/2014-November/051206.html

This also happened at the same time as a new testtools stack release which
broke every branch (including master). Another example is all of the setuptools
stack churn from the famed Christmas releases. That was another critical
infrastructure piece that fell apart and was mostly handled by the infra team.
All of these things are getting fixed because they have to be, to make sure
development on master can continue not because those with a vested interest in
the stable branches working for 15 months are working on them.

The other aspect here is the development effort to make things more stable in this
space. Things like the effort to pin the requirements on stable branches, which
Joe is spearheading. These are critical to the long term success of the stable
branches yet no one has stepped up to help with it.

I view this as a disconnect between what people think maintaining a stable
branch means and what it actually entails. Sure, the backporting of fixes for
intermittent failures is part of it. But most of the effort is spent on making
sure the gating machinery stays well oiled and doesn't break down.

 
 [1] https://etherpad.openstack.org/p/stable-tracker
 
 Matthew Treinish wrote:
  So I think it's time we called the icehouse branch and marked it EOL. We
  originally conditioned the longer support window on extra people stepping
  forward to keep things working. I believe this latest issue is just the 
  latest
  indication that this hasn't happened. Issue 1 listed above is being caused 
  by
  the icehouse branch during upgrades. The fact that a stable release was 
  pushed
  at the same time things were wedged on the juno branch is just the latest
  evidence to me that things aren't being maintained as they should be. 
  Looking at
  the #openstack-qa irc log from today or the etherpad about trying to sort 
  this
  issue should be an indication that no one has stepped up to help with the
  maintenance and it shows given the poor state of the branch.
 
 I disagree with the assessment. People have stepped up. I think the
 stable branches are less often broken than they were, and stable branch
 champions (as their tracking etherpad shows) have made a difference.
 There just have been more issues than usual recently, and they probably
 couldn't keep track. It's not a fun job to babysit stable branches;
 belittling the stable branch champions' results is not the best way to
 encourage them to continue in this 

Re: [openstack-dev] [NOVA] security group fails to attach to an instance if port-id is specified during boot.

2015-02-10 Thread Feodor Tersin
I definitely don't expect any change of the existing port in the case with
two nics. However, in the case of a single nic, a question like 'what is the
impact of the security-groups parameter' arises.
Also a similar question arises out of '--nic port-id=xxx,v4-fixed-ip=yyy'
combination.
Moreover, if we assume that, for example, security-groups parameter affects
the specified port, the next question is 'what is the result group set'.
Does it replace groups of the port, or just update them?

Thus I agree with you that this part of the nova API is not clear now. But the
case with two nics makes sense, works now, and can be used by someone. Do
you really want to break it?
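For concreteness, the strict behaviour debated further down in the thread (return 400 rather than guess what the user meant) could be sketched as a request validator. The function and field names below are illustrative, not nova's actual request-handling code:

```python
# Illustrative sketch of the "reject port-id plus security-groups"
# option discussed in this thread; names do not match nova internals.
def validate_boot_request(nics, security_groups):
    """Return (http_status, message) for the networking part of a boot."""
    has_preexisting_port = any("port-id" in nic for nic in nics)
    if has_preexisting_port and security_groups:
        # Ambiguous: should the existing port's groups be replaced,
        # merged, or left alone? Refuse instead of guessing.
        return 400, ("security groups cannot be combined with a "
                     "pre-created port; set them on the port in Neutron")
    return 202, "accepted"
```

Note that this strict variant also rejects the mixed two-NIC case ('--nic port-id=xxx --nic net-id=yyy --security-groups sg-1'), which is exactly the compatibility concern raised above: a request that used to succeed would start failing.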


On Tue, Feb 10, 2015 at 10:29 AM, Oleg Bondarev obonda...@mirantis.com
wrote:



 On Mon, Feb 9, 2015 at 8:50 PM, Feodor Tersin fter...@cloudscaling.com
 wrote:

 nova boot ... --nic port-id=xxx --nic net-id=yyy
 this case is valid, right?
 I.e. i want to boot instance with two ports. The first port is specified,
 but the second one is created at network mapping stage.
 If i specify a security group as well, it will be used for the second
 port (if not - default group will):
 nova boot ... --nic port-id=xxx --nic net-id=yyy --security-groups sg-1
 Thus a port and a security group can be specified together.


 The question here is what do you expect for the existing port - it's
 security groups updated or not?
 Will it be ok to silently (or with warning in logs) ignore security groups
 for it?
 If it's ok then is it ok to do the same for:
 nova boot ... --nic port-id=xxx --security-groups sg-1
 where the intention is clear enough?



 On Mon, Feb 9, 2015 at 7:14 PM, Matt Riedemann 
 mrie...@linux.vnet.ibm.com wrote:



 On 9/26/2014 3:19 AM, Christopher Yeoh wrote:

 On Fri, 26 Sep 2014 11:25:49 +0400
 Oleg Bondarev obonda...@mirantis.com wrote:

  On Fri, Sep 26, 2014 at 3:30 AM, Day, Phil philip@hp.com wrote:

   I think the expectation is that if a user is already interacting
 with Neutron to create ports then they should do the security group
 assignment in Neutron as well.


 Agree. However what do you think a user expects when he/she boots a
 vm (no matter providing port_id or just net_id)
 and specifies security_groups? I think the expectation should be that
 instance will become a member of the specified groups.
 Ignoring security_groups parameter in case port is provided (as it is
 now) seems completely unfair to me.


 One option would be to return a 400 if both port id and security_groups
 is supplied.

 Chris

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 Coming back to this, we now have a change from Oleg [1] after an initial
 attempt that was reverted because it would break server creates if you
 specified a port (because the original change would blow up when the
 compute API added the 'default' security group to the request).

 The new change doesn't add the 'default' security group to the request
 so if you specify a security group and port on the request, you'll now get
 a 400 error response.

 Does this break API compatibility?  It seems this falls under the first
 bullet here [2], A change such that a request which was successful before
 now results in an error response (unless the success reported previously
 was hiding an existing error condition).  Does that caveat in parenthesis
 make this OK?

 It seems like we've had a lot of talk about warts in the compute v2 API
 for cases where an operation is successful but didn't yield the expected
 result, but we can't change them because of API backwards compatibility
 concerns so I'm hesitant on this.

 We also definitely need a Tempest test here, which I'm looking into.  I
 think I can work this into the test_network_basic_ops scenario test.

 [1] https://review.openstack.org/#/c/154068/
 [2] https://wiki.openstack.org/wiki/APIChangeGuidelines#
 Generally_Not_Acceptable

 --

 Thanks,

 Matt Riedemann


 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:
 unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing 

[openstack-dev] [Fuel] Cache for packages on master node

2015-02-10 Thread Tomasz Napierala
Hi,

We are currently redesigning our approach to upstream distributions and 
obviously we will need some cache system for packages on the master node. It should 
work for deb and rpm packages, and be able to serve up to 200 nodes.
I know we had a bad experience in the past; can you guys share your thoughts on 
that?
I just collected what was mentioned in other discussions:
- approx
- squid
- apt-cacher-ng
- ?

Regards,
-- 
Tomasz 'Zen' Napierala
Sr. OpenStack Engineer
tnapier...@mirantis.com









Re: [openstack-dev] [Fuel] Cache for packages on master node

2015-02-10 Thread Skamruk, Piotr
On Tue, 2015-02-10 at 15:24 +0100, Tomasz Napierala wrote:


Hi,

We are currently redesigning our approach to upstream distributions and 
obviously we will need some cache system for packages on the master node. It should 
work for deb and rpm packages, and be able to serve up to 200 nodes.
I know we had a bad experience in the past; can you guys share your thoughts on 
that?
I just collected what was mentioned in other discussions:
- approx
- squid
- apt-cacher-ng
- ?


As this should work for both .rpm/.deb packages, I think that squid (probably 
configured as a transparent proxy, but not necessarily; we can explicitly set FMN 
as the http/https proxy on deployed nodes) could be the easiest to set up.

http://codepoets.co.uk/2014/squid-3-4-x-with-ssl-for-debian-wheezy/ - an example 
of how to set up squid as a transparent proxy, also for https.

--
  regards
  jell


Re: [openstack-dev] [Manila] Manila virtual midcycle meetup

2015-02-10 Thread Ben Swartzlander

On 01/25/2015 06:21 PM, Ben Swartzlander wrote:
Given the international nature of this team and the difficulty with 
travel, I'm thinking that 2 morning and 1 afternoon session will 
be the best chance for everyone to get together. The meetup will be 
primarily virtual, but since several of us (including me) are in RTP 
North Carolina we will be happy to host anyone that wants to travel 
and join us locally at the NetApp RTP campus.


After talking to other core members, I'd like to propose that we hold 
the meetup at the following times:


Wednesday 11 Feb: 1300-1700 UTC
Wednesday 11 Feb: 1800-2200 UTC
Thursday 12 Feb: 1300-1700 UTC

Do these times work for everyone who wants to join? Is there enough 
time to arrange travel for those of you who would like to participate 
locally? Please indicate your availability on the etherpad we started 
last Thursday:


https://etherpad.openstack.org/p/manila-kilo-midcycle-meetup


This is a reminder that the meetup is tomorrow! It will be entirely 
virtual, so please join the Google Hangout or the phone bridge. The 
details are in the etherpad.


Since this is the first time we've tried something like this, I'm not 
sure how the format will work out. I didn't want to schedule static 
blocks of time for topics because I want to let discussions run their 
course. I know this creates a big challenge for people who aren't able 
to attend both the early and late sessions, so we will do our best to 
take notes and duplicate any presentations or discussions for those who 
missed something.


The topics on the agenda are ordered by most popular to least popular 
and I plan to go through them mostly in order.


-Ben Swartzlander




[openstack-dev] [blazar] current status

2015-02-10 Thread Jin, Yuntong
Hello,
May I ask about the current status of the Blazar project? It has been very quiet there 
for the past half year; part of the reason could be related to the Gantt project.
The way I see it, this project is very useful for the NFV use case, and I would really 
like to know its status, and maybe also that of the Gantt project.
Thanks




Re: [openstack-dev] [manila] two level share scheduling

2015-02-10 Thread Ben Swartzlander


On 02/10/2015 06:14 AM, Valeriy Ponomaryov wrote:

Hello Jason,

On Tue, Feb 10, 2015 at 10:07 AM, Jason Bishop jason.bis...@gmail.com wrote:


When a share is created (from scratch), the manila scheduler
identifies a share server from its list of backends and makes an
api call to create_share method in the appropriate driver.  The
driver executes the required steps and returns the export_location
which is then written to the database.

That is not a correct description of the current approach. The scheduler handles 
only capabilities and extra specs; there is no logic for filtering 
based on share servers at the moment.

The correct description would be the following:
the scheduler (manila-scheduler) chooses a host, then sends a create 
share request to the chosen manila-share service, which handles everything related 
to share servers based on the share driver logic.


This is something I'd like to change. The scheduler should know about 
where the existing (usable) share servers are, and should be able to 
prefer a backend with an existing share server over a backend with no 
existing share server for share types that would require share servers. 
The administrator should control how strongly to weigh this information 
within the scheduler.
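A minimal sketch of that idea (the function names, data shapes, and the multiplier are all hypothetical, not actual Manila scheduler code):

```python
# Hypothetical weigher: prefer hosts that already have a usable share
# server; the multiplier stands in for an operator-tunable config option.
EXISTING_SHARE_SERVER_WEIGHT = 100.0  # imagined knob the admin would set

def weigh_host(host, hosts_with_share_servers):
    """Higher score wins; a host with an existing share server gets a boost."""
    return EXISTING_SHARE_SERVER_WEIGHT if host in hosts_with_share_servers else 0.0

def pick_host(hosts, hosts_with_share_servers):
    """Pick the highest-scoring host from the candidate list."""
    return max(hosts, key=lambda h: weigh_host(h, hosts_with_share_servers))

print(pick_host(["backend-a", "backend-b"], {"backend-b"}))  # backend-b
```

Setting the multiplier to zero would make the scheduler indifferent to existing share servers, which is roughly the kind of administrator control described above.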


On Tue, Feb 10, 2015 at 10:07 AM, Jason Bishop jason.bis...@gmail.com wrote:


Proposed scheme:

The proposal is simple in concept.  Instead of the driver
(GenericShareDriver for example) returning both share server ip
address and path in share export_location, only the path is
returned and saved in the database.  The binding of the share
server ip address is only determined at share mount time.  In
practical terms this means share server is determined by an api
call to the driver when _get_shares is called.  The driver would
then have the option of determining which IP address from its
basket of addresses to return.  In this way, each share mount
event presents an opportunity for the NFS traffic to be balanced
over all available network endpoints.

This is specific only to the GenericShareDriver: the mentioned public IP 
address is used once, to combine the export_location from the path and this 
IP. Other share drivers do not store it and are not forced to do so at 
all. For example, in the case of the share driver for NetApp Clustered Data 
ONTAP only one specific piece of information is stored: the name of the vserver. The IP 
address is taken each time via the backend's API.

It is true that right now we can store only one export 
location. I agree that it would be suitable to have more than one 
export_location. So, the idea of having multiple export_locations is good.


We absolutely need multiple export locations. But I want that feature 
for other reasons than what Jason mentions. Typically, load balancing 
can be achieved by in-band techniques such as pNFS which only needs one 
export location to get started. The main value of the multiple export 
locations for me is to cover cases when a client wants to perform a 
mount during a failover event when one or more export locations are 
temporarily unreachable.
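A client-side sketch of why multiple export locations help during failover (the mount callback is injected so the logic stays self-contained; a real client would shell out to mount(8), and all names here are illustrative):

```python
import random

def mount_share(export_locations, mountpoint, mount_fn):
    """Try export locations until one mounts; raise if all are unreachable.

    mount_fn(location, mountpoint) -> bool stands in for an actual NFS
    mount attempt.
    """
    locations = list(export_locations)
    random.shuffle(locations)  # crude spreading of clients across endpoints
    for location in locations:
        if mount_fn(location, mountpoint):
            return location
    raise OSError("no export location reachable: %s" % (locations,))

# One endpoint is down mid-failover; the mount still succeeds.
reachable = {"10.0.0.2:/share"}
chosen = mount_share(
    ["10.0.0.1:/share", "10.0.0.2:/share"], "/mnt/share",
    mount_fn=lambda loc, mp: loc in reachable)
print(chosen)  # 10.0.0.2:/share
```

With a single stored export location, the same client would simply fail whenever that one endpoint happened to be unreachable.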





On Tue, Feb 10, 2015 at 10:07 AM, Jason Bishop jason.bis...@gmail.com wrote:


 I see the following cons:

   o slow manila list performance

   o very slow manila list performance if all share drivers are
busy doing long operations such as create/delete share

First of all, the manila-api service knows nothing about share drivers 
or backends; those are the concern of a different service/process, 
manila-share. manila-api uses the DB for getting information.
So, you just cannot ask share drivers with a list API call. The API 
either reads the DB and returns something, or sends some RPC and returns 
some data without expecting a result from the RPC.
If you want to change IP addresses, then you need to update the DB with 
them. Hence, this looks like a requirement to have a periodic task that 
will do it continuously.


Yes. Changing IP addresses would be a big problem because right now 
manila doesn't provide a way for the driver to alter the export location 
after the share is created.


I prefer to have more than one export location and to allow users to 
choose any of them. I also assume the possibility that IP addresses just 
change; in this case we should allow export locations to be updated.


And second, if we implement multiple export locations for shares, it is 
better not to return them within the list API response, and to do it only 
within get requests.


Agreed.


Valeriy





Re: [openstack-dev] Filtering by metadata values

2015-02-10 Thread Miguel Grinberg
Hi Angus,

Yeah, I missed the new Nova tags, they do seem very similar to Heat’s tags. 

I should have been more clear when we discussed this on IRC regarding the 
API-WG position on filtering. I was aware of the lack of guidelines on 
filtering, and being an active member of the API-WG team I consider this a good 
opportunity to add more content on this topic.

The closest thing that we have in API-WG is a sorting guideline, which gathered 
very positive reviews. Some aspects of it, such as the structure of the URL 
arguments, can be adapted to filtering, so maybe I should take all this 
information and try to come up with generic filtering guidelines that Heat can 
use and API-WG can publish.

Thanks!

Miguel



From: Angus Salkeld
Subject: Re: Filtering by metadata values
Newsgroups: gmane.comp.cloud.openstack.devel
Date: 2015-02-11 03:03:26 GMT

On Wed, Feb 11, 2015 at 8:20 AM, Miguel Grinberg miguel.grinberg at gmail.com wrote:
Hi,

We had a discussion yesterday on the Heat channel regarding patterns for 
searching or filtering entities by its metadata values. This is in relation to 
a feature that is currently being implemented in Heat called “Stack Tags”.

The idea is that Heat stack lists can be filtered by these tags, so for 
example, any stacks that you don’t want to see you can tag as “hidden”, then 
when you request a stack list you can specify that you only want stacks that do 
not have the “hidden” tag.


Some background: the author initially just asked for a hidden field. But it 
seemed like there were many more use cases that could be fulfilled by having 
a generic tags field on the stack REST resource. This is a really nice feature from 
a UI perspective.
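As a toy model of the filtering semantics being discussed (the `tags`/`not_tags` parameter names are illustrative, not a settled API):

```python
# Keep stacks that carry every requested tag and none of the excluded ones.
stacks = [
    {"name": "prod-web", "tags": {"web"}},
    {"name": "scratch",  "tags": {"hidden"}},
    {"name": "prod-db",  "tags": {"db", "web"}},
]

def list_stacks(stacks, tags=(), not_tags=()):
    """Filter a stack list by required and excluded tags."""
    return [s for s in stacks
            if set(tags) <= s["tags"] and not (set(not_tags) & s["tags"])]

print([s["name"] for s in list_stacks(stacks, not_tags=["hidden"])])
# ['prod-web', 'prod-db']
```

The "hidden" use case above is then just the default listing excluding one well-known tag.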
 
We were trying to find other similar usages of tags and/or metadata within 
OpenStack projects, where these are not only stored as data, but are also used 
in database queries for filtering. A quick search revealed nothing, which is 
surprising.

I have spotted nova's instance tags that look like the kinda beast we are after:
  -  https://blueprints.launchpad.net/nova/+spec/tag-instances
  -  https://review.openstack.org/#/q/topic:bp/tag-instances,n,z

 

Is there anything I may have missed? I would like to know if there is anything 
even remotely similar, so that we don’t build a new wheel if one already exists 
for this.


So we wanted to bring this up, as there is an API WG and the concept of tags and 
filtering should be consistent,
and we don't want to run off and do something that the WG really doesn't like.

But it looks like this needs a bit more fleshing out:
 
http://specs.openstack.org/openstack/api-wg/guidelines/pagination_filter_sort.html#filtering

Should we just follow nova's instance tags, given the lack of definition in 
api-wg?

Regards
Angus

Thanks,

Miguel




Re: [openstack-dev] [Keystone] Proposing Marek Denis for the Keystone Core Team

2015-02-10 Thread Lin Hua Cheng
+1 Well deserved!

Lin

On Tue, Feb 10, 2015 at 11:47 AM, Priti Desai priti_de...@symantec.com
wrote:

 +1

 Cheers
 Priti

 From: Brad Topol bto...@us.ibm.com
 Reply-To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Date: Tuesday, February 10, 2015 at 11:04 AM
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [Keystone] Proposing Marek Denis for the
 Keystone Core Team

 +1!  Marek has been an outstanding Keystone contributor and reviewer!

 --Brad


 Brad Topol, Ph.D.
 IBM Distinguished Engineer
 OpenStack
 (919) 543-0646
 Internet:  bto...@us.ibm.com
 Assistant: Kendra Witherspoon (919) 254-0680



 From: David Stanek dsta...@dstanek.com
 To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org
 Date: 02/10/2015 12:58 PM
 Subject: Re: [openstack-dev] [Keystone] Proposing Marek Denis for
 the Keystone Core Team
 --



 +1

 On Tue, Feb 10, 2015 at 12:51 PM, Morgan Fainberg
 morgan.fainb...@gmail.com wrote:
 Hi everyone!

 I wanted to propose Marek Denis (marekd on IRC) as a new member of the
 Keystone Core team. Marek has been instrumental in the implementation of
 Federated Identity. His work on Keystone and first hand knowledge of the
 issues with extremely large OpenStack deployments has been a significant
 asset to the development team. Not only is Marek a strong developer working
 on key features being introduced to Keystone but has continued to set a
 high bar for any code being introduced / proposed against Keystone. I know
 that the entire team really values Marek’s opinion on what is going in to
 Keystone.

 Please respond with a +1 or -1 for adding Marek to the Keystone core team.
 This poll will remain open until Feb 13.

 --
 Morgan Fainberg





 --
 David
 blog: http://www.traceback.org
 twitter: http://twitter.com/dstanek
 www: http://dstanek.com






[openstack-dev] [neutron][fwaas] No IRC meeting on Feb 11th eom

2015-02-10 Thread Sumit Naiksatam




[openstack-dev] [Keystone] Proposing Marek Denis for the Keystone Core Team

2015-02-10 Thread Morgan Fainberg
Hi everyone!

I wanted to propose Marek Denis (marekd on IRC) as a new member of the Keystone 
Core team. Marek has been instrumental in the implementation of Federated 
Identity. His work on Keystone and first hand knowledge of the issues with 
extremely large OpenStack deployments has been a significant asset to the 
development team. Not only is Marek a strong developer working on key features 
being introduced to Keystone but has continued to set a high bar for any code 
being introduced / proposed against Keystone. I know that the entire team 
really values Marek’s opinion on what is going into Keystone.

Please respond with a +1 or -1 for adding Marek to the Keystone core team. This 
poll will remain open until Feb 13.

-- 
Morgan Fainberg


Re: [openstack-dev] [Glance][Artifacts] Object Version format: SemVer vs pep440

2015-02-10 Thread Alexander Tivelkov
Hi Ian,

Automatic version generation is not the only and not the primary
reason for the version concept. In fact, the implementation which is
planned to land in this cycle does not contain this feature at all:
currently we also leave the version assignment up to the uploader (version
is a regular immutable generic artifact property). Auto-increments as
part of clone-and-modify scenarios are postponed for the next cycle.

However, even now we do need to have some sorting order - so, we need
rules to determine precedence. That's the reason for having some
notation defined: if we leave the notation up to the end-user we won't
be able to compare artifacts having versions in different notations.
And we can't even leave it up to the Artifact Type developer, since
this is a generic property, thus common for all the artifact types.
And of course, the chosen solution should be mappable to database, so
we may do sorting and filtering on the DB-side.
So, having it as a simple string and letting the user decide what
it means is not an option.
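As a rough illustration of why a fixed notation makes precedence computable, here is a simplified SemVer 2.0.0 sort key. This is a sketch, not the proposed Glance implementation; a DB mapping would flatten the same idea into sortable columns.

```python
def semver_key(version):
    """Sort key implementing (simplified) SemVer 2.0.0 precedence.

    Build metadata is ignored; a pre-release sorts before its release;
    numeric pre-release identifiers sort below alphanumeric ones.
    """
    version = version.split("+", 1)[0]            # "+build" never affects order
    core, _, prerelease = version.partition("-")
    major, minor, patch = (int(part) for part in core.split("."))
    if not prerelease:
        rank = (1,)                               # plain release sorts last
    else:
        idents = tuple(
            (0, int(i), "") if i.isdigit() else (1, 0, i)
            for i in prerelease.split("."))
        rank = (0, idents)
    return (major, minor, patch, rank)

ordered = ["1.0.0-alpha", "1.0.0-alpha.1", "1.0.0-rc.1", "1.0.0", "1.0.1"]
assert sorted(ordered, key=semver_key) == ordered
```

The same precedence rules could not be applied if each uploader picked their own notation, which is the point being made above.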

Speaking about Artifactory - that's entirely different thing. It is
indeed a continuous delivery solution, composed around build machines,
deployment solutions and CI systems. That's definitely not what Glance
Artifact Repository is. Even the concepts of Artifact are entirely
different.  So, while Artifact Repository may be used to build some CD
solutions on top of it (or to be integrated with the existing ones) it
is not a storage solution for build outputs and thus I can barely see
how we may compare them.

--
Regards,
Alexander Tivelkov


On Tue, Feb 10, 2015 at 8:15 PM, Ian Cordasco
ian.corda...@rackspace.com wrote:


 On 2/10/15, 10:35, Alexander Tivelkov ativel...@mirantis.com wrote:

Thanks Monty!

Yup, probably I've missed that. I was looking at pbr and its version
implementation, but didn't realize that this is actually a fusion of
semver and pep440.

So, we have this as an extra alternative to choose from.

It would be an obvious choice if we were just looking for some common
solution to version objects within openstack. However, I am a bit
concerned about applying it to Artifact Repository. As I wrote before,
we are trying to make the Repository to be language- and
platform-agnostic tool for other developers, including the ones
originating from non-python and non-openstack worlds. Having a
versioning notation which is non-standard for everybody but openstack
developers does not look like a good idea to me.
--
Regards,
Alexander Tivelkov


On Tue, Feb 10, 2015 at 6:55 PM, Monty Taylor mord...@inaugust.com
wrote:
 On 02/10/2015 10:28 AM, Alexander Tivelkov wrote:
 Hi folks,

 One of the key features that we are adding to Glance with the
 introduction of Artifacts is the ability to have multiple versions of
 the same object in the repository: this gives us the possibility to
 query for the latest version of something, keep track on the changes
 history, and build various continuous delivery solutions on top of
 Artifact Repository.

 We need to determine the format and rules we will use to define,
 increment and compare versions of artifacts in the repository. There
 are two alternatives we have to choose from, and we are seeking advice
 on this choice.

 First, there is Semantic Versioning specification, available at [1].
 It is a very generic spec, widely used and adopted in many areas of
 software development. It is quite straightforward: 3 mandatory numeric
 components for version number, plus optional string labels for
 pre-release versions and build metadata.

 And then there is PEP-440 spec, which is a recommended approach to
 identifying versions and specifying dependencies when distributing
 Python. It is a pythonic way to set versions of python packages,
 including PIP version strings.

 Conceptually PEP-440 and Semantic Versioning are similar in purpose,
 but slightly different in syntax. Notably, the count of version number
 components and rules of version precedence resolution differ between
 PEP-440 and SemVer. Unfortunately, the two version string formats are
 not compatible, so we have to choose one or the other.

 According to my initial vision, the Artifact Repository should be as
 generic as possible in terms of potential adoption. The artifacts were
 never supposed to be python packages only, and even the projects which
 will create and use these artifacts are not mandatory limited to be
 pythonic, the developers of that projects may not be python
 developers! So, I'd really wanted to avoid any python-specific
 notations, such as PEP-440 for artifacts.

 I've put this vision into a spec [3] which also contains a proposal on
 how to convert the semver-compatible version strings into the
 comparable values which may be mapped to database types, so a database
 table may be queried, ordered and filtered by the object version.

 So, we need some feedback on this topic. Would you prefer artifacts to
 be versioned with SemVer or with PEP-440 notation? Are 

Re: [openstack-dev] [stable] juno is fubar in the gate

2015-02-10 Thread James E. Blair
Thierry Carrez thie...@openstack.org writes:

 I also disagree with the proposed solution. We announced a support
 timeframe for Icehouse, our downstream users made plans around it, so we
 should stick to it as much as we can.

To be fair, if we did that, we did not communicate accurately the
sentiment of the room at the Kilo summit.  From:

  https://etherpad.openstack.org/p/kilo-summit-ops-stable-branch

  Stable branches starting with stable/icehouse are intended to be
  supported for 15 months (previously it was 9 months), but it depends
  on the community dedicating resources to maintain stable/*

There was definitely skepticism in that room.  I would characterize it
as something like some people wanted 15 months and other people said
that is unlikely to happen based on our track record.  I think the
consensus was akin to okay, we'll try it and see what happens but no
promises.

-Jim



Re: [openstack-dev] [stable] juno is fubar in the gate

2015-02-10 Thread David Kranz

On 02/10/2015 10:35 AM, Matthew Treinish wrote:

On Tue, Feb 10, 2015 at 11:19:20AM +0100, Thierry Carrez wrote:

Joe, Matt  Matthew:

I hear your frustration with broken stable branches. With my
vulnerability management team member hat, responsible for landing
patches there with a strict deadline, I can certainly relate with the
frustration of having to dive in to unbork the branch in the first
place, rather than concentrate on the work you initially planned on doing.

That said, wearing my stable team member hat, I think it's a bit unfair
to say that things are worse than they were and call for dramatic
action. The stable branch team put a structure in place to try to
continuously fix the stable branches rather than reactively fix it when
we need it to work. Those champions have been quite active[1] unbreaking
it in the past months. I'd argue that the branch is broken much less
often than it used to. That doesn't mean it's never broken, though, or
that those people are magicians.

I don't agree at all, for two reasons. The first is that in every discussion we had at 2
summits I raised the increased maintenance burden for a longer support window and
was told that people were going to stand up so it wouldn't be an issue. I have
yet to see that happen. I have not seen anything to date that would convince
me that we are at all ready to be maintaining 3 stable branches at once.

The second is that, while I've seen that etherpad, I still see a
huge disconnect here about what actually maintaining the branches requires. The
issue which I'm raising is about issues related to the gating infrastructure and
how to ensure that things stay working. There is a non-linear overhead involved
with making sure any gating job stays working. (on stable or master) People need
to take ownership of jobs to make sure they keep working.


One issue in the current situation is that the two groups (you and the
stable maintainers) seem to work in parallel rather than collaborate.
It's quite telling that the two groups maintained separate etherpads to
keep track of the fixes that needed landing.

I don't actually view it as that. Just looking at the etherpad it has a very
small subset of the actual types of issues we're raising here.

For example, there was a week in late Nov. when 2 consecutive oslo project
releases broke the stable gates. After we unwound all of this and landed the
fixes in the branches, the next step was changes to make sure we didn't allow
breakages in the same way:

http://lists.openstack.org/pipermail/openstack-dev/2014-November/051206.html

This also happened at the same time as a new testtools stack release which
broke every branch (including master). Another example is all of the setuptools
stack churn from the famed Christmas releases. That was another critical
infrastructure piece that fell apart and was mostly handled by the infra team.
All of these things are getting fixed because they have to be, to make sure
development on master can continue not because those with a vested interest in
the stable branches working for 15 months are working on them.

The other aspect here are development efforts to make things more stable in this
space. Things like the effort to pin the requirements on stable branches which
Joe is spearheading. These are critical to the long term success of the stable
branches yet no one has stepped up to help with it.

I view this as a disconnect between what people think maintaining a stable
branch means and what it actually entails. Sure, the backporting of fixes to
intermittent failures is part of it. But, the most effort is spent on making
sure the gating machinery stays well oiled and doesn't breakdown.


[1] https://etherpad.openstack.org/p/stable-tracker

Matthew Treinish wrote:

So I think it's time we called the icehouse branch and marked it EOL. We
originally conditioned the longer support window on extra people stepping
forward to keep things working. I believe this latest issue is just the latest
indication that this hasn't happened. Issue 1 listed above is being caused by
the icehouse branch during upgrades. The fact that a stable release was pushed
at the same time things were wedged on the juno branch is just the latest
evidence to me that things aren't being maintained as they should be. Looking at
the #openstack-qa irc log from today or the etherpad about trying to sort this
issue should be an indication that no one has stepped up to help with the
maintenance and it shows given the poor state of the branch.

I disagree with the assessment. People have stepped up. I think the
stable branches are less often broken than they were, and stable branch
champions (as their tracking etherpad shows) have made a difference.
There just has been more issues as usual recently and they probably
couldn't keep track. It's not a fun job to babysit stable branches;
belittling the stable branch champions' results is not the best way to
encourage them to continue in this position. I 

Re: [openstack-dev] [stable] juno is fubar in the gate

2015-02-10 Thread Matthew Treinish
On Tue, Feb 10, 2015 at 11:50:28AM -0500, David Kranz wrote:
 On 02/10/2015 10:35 AM, Matthew Treinish wrote:
 On Tue, Feb 10, 2015 at 11:19:20AM +0100, Thierry Carrez wrote:
 Joe, Matt  Matthew:
 
 I hear your frustration with broken stable branches. With my
 vulnerability management team member hat, responsible for landing
 patches there with a strict deadline, I can certainly relate with the
 frustration of having to dive in to unbork the branch in the first
 place, rather than concentrate on the work you initially planned on doing.
 
 That said, wearing my stable team member hat, I think it's a bit unfair
 to say that things are worse than they were and call for dramatic
 action. The stable branch team put a structure in place to try to
 continuously fix the stable branches rather than reactively fix it when
 we need it to work. Those champions have been quite active[1] unbreaking
 it in the past months. I'd argue that the branch is broken much less
 often than it used to. That doesn't mean it's never broken, though, or
 that those people are magicians.
 I don't agree at all, for two reasons. The first is that in every discussion we had at 2
 summits I raised the increased maintenance burden for a longer support window and
 was told that people were going to stand up so it wouldn't be an issue. I 
 have
 yet to see that happen. I have not seen anything to date that would convince
 me that we are at all ready to be maintaining 3 stable branches at once.
 
 The second is that, while I've seen that etherpad, I still see a
 huge disconnect here about what actually maintaining the branches requires. 
 The
 issue which I'm raising is about issues related to the gating infrastructure 
 and
 how to ensure that things stay working. There is a non-linear overhead 
 involved
 with making sure any gating job stays working. (on stable or master) People 
 need
 to take ownership of jobs to make sure they keep working.
 
 One issue in the current situation is that the two groups (you and the
 stable maintainers) seem to work in parallel rather than collaborate.
 It's quite telling that the two groups maintained separate etherpads to
 keep track of the fixes that needed landing.
I don't actually view it that way. Just looking at the etherpad, it covers
only a small subset of the types of issues we're raising here.
 
For example, there was a week in late November when two consecutive oslo
project releases broke the stable gates. After we unwound all of this and
landed the fixes in the branches, the next step was making changes to ensure
we didn't allow breakages in the same way:

http://lists.openstack.org/pipermail/openstack-dev/2014-November/051206.html

This also happened at the same time as a new testtools stack release which
broke every branch (including master). Another example is all of the
setuptools stack churn from the famed Christmas releases. That was another
critical infrastructure piece that fell apart and was mostly handled by the
infra team. All of these things are getting fixed because they have to be,
to make sure development on master can continue, not because those with a
vested interest in the stable branches working for 15 months are working on
them.
 
The other aspect here is the development effort to make things more stable
in this space, such as the work to pin the requirements on stable branches
which Joe is spearheading. Efforts like this are critical to the long-term
success of the stable branches, yet no one has stepped up to help with them.

I view this as a disconnect between what people think maintaining a stable
branch means and what it actually entails. Sure, backporting fixes for
intermittent failures is part of it. But most of the effort is spent on
making sure the gating machinery stays well oiled and doesn't break down.
 
 [1] https://etherpad.openstack.org/p/stable-tracker
 
 Matthew Treinish wrote:
So I think it's time we called the icehouse branch and marked it EOL. We
originally conditioned the longer support window on extra people stepping
forward to keep things working. I believe this latest issue is just the
latest indication that this hasn't happened. Issue 1 listed above is being
caused by the icehouse branch during upgrades. The fact that a stable
release was pushed at the same time things were wedged on the juno branch is
just the latest evidence to me that things aren't being maintained as they
should be. Looking at the #openstack-qa IRC log from today, or the etherpad
about trying to sort this issue, should be an indication that no one has
stepped up to help with the maintenance, and it shows given the poor state
of the branch.
I disagree with the assessment. People have stepped up. I think the stable
branches are broken less often than they were, and the stable branch
champions (as their tracking etherpad shows) have made a difference. There
just have been more issues than usual recently and they probably couldn't
keep up.

Re: [openstack-dev] EOL and Stable Contributions (was Juno is flubber at the gate)

2015-02-10 Thread Dean Troyer
On Tue, Feb 10, 2015 at 9:20 AM, Kevin Bringard (kevinbri) 
kevin...@cisco.com wrote:

 ATC is only being given to folks committing to the current branch (
 https://ask.openstack.org/en/question/45531/atc-pass-for-the-openstack-summit/
 ).



 Secondly, it's difficult to get stack-analytics credit for back ports, as
 the preferred method is to cherry pick the code, and that keeps the
 original author's name.



 My fear is that we're going in a direction where trunk is the sole focus
 and we're subsequently going to lose the support of the majority of the
 operators and enterprises at which point we'll be a fun research project,
 but little more.


[I've cherry-picked above what I think are the main points here... not
directed at you Kevin.]


This is not Somebody Else's Problem.

Stable maintenance is Not Much Fun, no question.  Those who have demanded
the loudest that we (the development community) maintain these stable
branches need to be the ones supporting it the most. (I have no idea how
that matches up today, so I'm not pointing out anyone in particular.)

* ATC credit should be given, stable branch maintenance is a contribution
to the project, no question.

* I have a bit of a problem with stack-analytics being an issue partially
because that is not what should be driving corporate contributions and
resource allocation.  But it does.  Relying on a system with known
anomalies like the cherry-pick problem gets imperfect results.

* The vast majority of the OpenStack contributors are paid to do their work
by a (most likely) Foundation member company.  These companies choose how
to allocate their resources, some do quite well at scratching their
particular itches, some just make a lot of noise.  If fun is what drives
them to select where they apply resources, then they will reap what they
sow.

The voice of operators/users/deployers in this conversation should be
reflected through the entity that they are paying to provide operational
cloud services.  It's those directly consuming the code from openstack.org
that are responsible here because they are the ones directly making money
by either providing public/private cloud services, or reselling a
productized OpenStack or providing consulting services and the like.

This should not stop users/operators from contributing information,
requirements or code in any way.  But if they have to go around their
vendor then that vendor has failed them.

dt

-- 

Dean Troyer
dtro...@gmail.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance][Artifacts] Object Version format: SemVer vs pep440

2015-02-10 Thread Alexander Tivelkov
Thanks Monty!

Yup, probably I've missed that. I was looking at pbr and its version
implementation, but didn't realize that this is actually a fusion of
semver and pep440.

So, we have this as an extra alternative to choose from.

It would be an obvious choice if we were just looking for some common
solution to version objects within OpenStack. However, I am a bit
concerned about applying it to the Artifact Repository. As I wrote before,
we are trying to make the Repository a language- and
platform-agnostic tool for other developers, including ones
coming from non-Python and non-OpenStack worlds. Having a
versioning notation which is non-standard for everybody but OpenStack
developers does not look like a good idea to me.
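For what it's worth, the precedence difference is easy to demonstrate with a
small stdlib-only sketch (an illustration only, not the mapping proposed in
the spec): SemVer ranks a pre-release below the corresponding release and
ignores build metadata entirely, while PEP-440 treats a local version suffix
("+...") as sorting *above* the plain release.

```python
import re

# Minimal SemVer matcher: major.minor.patch, optional -prerelease, optional +build
SEMVER = re.compile(
    r"^(\d+)\.(\d+)\.(\d+)(?:-([0-9A-Za-z.-]+))?(?:\+[0-9A-Za-z.-]+)?$"
)

def semver_key(version):
    """Map a SemVer string to a plain sortable tuple.

    A key like this is also the kind of comparable value a database
    column could store so rows can be ordered by version.
    """
    m = SEMVER.match(version)
    if not m:
        raise ValueError("not a SemVer string: %r" % version)
    major, minor, patch = (int(m.group(i)) for i in (1, 2, 3))
    pre = m.group(4)
    if pre is None:
        pre_key = (1,)  # a release sorts after any of its pre-releases
    else:
        # Numeric identifiers compare numerically and rank below
        # alphanumeric ones, per the SemVer precedence rules.
        pre_key = (0,) + tuple(
            (0, int(part), "") if part.isdigit() else (1, 0, part)
            for part in pre.split(".")
        )
    return (major, minor, patch, pre_key)

# Precedence examples taken from the SemVer spec:
assert semver_key("1.0.0-alpha") < semver_key("1.0.0-alpha.1")
assert semver_key("1.0.0-alpha.1") < semver_key("1.0.0-beta")
assert semver_key("1.0.0-rc.1") < semver_key("1.0.0")
# Build metadata is ignored for precedence:
assert semver_key("1.0.0+build.5") == semver_key("1.0.0")
```

Under PEP-440 that last pair would compare unequal (1.0.0+build.5 is a local
version that sorts after 1.0.0), which is exactly the kind of
incompatibility that forces a choice between the two notations.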
--
Regards,
Alexander Tivelkov


On Tue, Feb 10, 2015 at 6:55 PM, Monty Taylor mord...@inaugust.com wrote:
 On 02/10/2015 10:28 AM, Alexander Tivelkov wrote:
 Hi folks,

 One of the key features that we are adding to Glance with the
 introduction of Artifacts is the ability to have multiple versions of
 the same object in the repository: this gives us the possibility to
 query for the latest version of something, keep track on the changes
 history, and build various continuous delivery solutions on top of
 Artifact Repository.

 We need to determine the format and rules we will use to define,
 increment and compare versions of artifacts in the repository. There
 are two alternatives we have to choose from, and we are seeking advice
 on this choice.

 First, there is Semantic Versioning specification, available at [1].
 It is a very generic spec, widely used and adopted in many areas of
 software development. It is quite straightforward: 3 mandatory numeric
 components for version number, plus optional string labels for
 pre-release versions and build metadata.

 And then there is PEP-440 spec, which is a recommended approach to
 identifying versions and specifying dependencies when distributing
 Python. It is a pythonic way to set versions of python packages,
 including PIP version strings.

 Conceptually PEP-440 and Semantic Versioning are similar in purpose,
 but slightly different in syntax. Notably, the count of version number
 components and rules of version precedence resolution differ between
 PEP-440 and SemVer. Unfortunately, the two version string formats are
 not compatible, so we have to choose one or the other.

 According to my initial vision, the Artifact Repository should be as
 generic as possible in terms of potential adoption. The artifacts were
 never supposed to be python packages only, and even the projects which
 will create and use these artifacts are not necessarily limited to being
 pythonic; the developers of those projects may not be Python
 developers! So I really wanted to avoid any python-specific
 notations, such as PEP-440 for artifacts.

 I've put this vision into a spec [3] which also contains a proposal on
 how to convert the semver-compatible version strings into the
 comparable values which may be mapped to database types, so a database
 table may be queried, ordered and filtered by the object version.

 So, we need some feedback on this topic. Would you prefer artifacts to
 be versioned with SemVer or with PEP-440 notation? Are you interested
 in having some generic utility which will map versions (in either
 format) to database columns? If so, which version format would you
 prefer?

 We are on a tight schedule here, as we want to begin landing
 artifact-related code soon. So, I would appreciate your feedback
 during this week: here in the ML or in the comments to [3] review.

 Because you should have more things to look at:

 http://docs.openstack.org/developer/pbr/semver.html

 We've already done some work to try to reconcile the world of semver
 with the world of PEP440 over in PBR land.




Re: [openstack-dev] EOL and Stable Contributions (was Juno is flubber at the gate)

2015-02-10 Thread Jeremy Stanley
On 2015-02-10 15:20:46 + (+), Kevin Bringard (kevinbri) wrote:
[...]
 I've been talking with a few people about this very thing lately,
 and I think much of it is caused by what appears to be our
 actively discouraging people from working on it. Most notably, ATC
 is only being given to folks committing to the current branch
 (https://ask.openstack.org/en/question/45531/atc-pass-for-the-openstack-summit/).

The comments on that answer are somewhat misleading, so I'll follow
up there as well to set the record straight. The script[1] which
identifies ATCs for the purpose of technical elections and summit
passes is based entirely on Gerrit owners (uploaders) of changes
merged to official projects within a particular time period. It
doesn't treat master differently from any other branches. People who
do the work to upload backports to stable branches absolutely do get
counted for this purpose. People who only review changes uploaded by
others do not (unless they are manually added to the extra-atcs
file in the openstack/governance repo), but that is the case for all
branches including master so not something stable-branch specific.
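To make that counting rule concrete, here is a toy sketch of the idea (the
change records below are invented; the real script[1] pulls merged changes
from Gerrit):

```python
# Toy version of the ATC counting logic described above: every owner
# (uploader) of a merged change counts, with stable branches treated
# exactly like master. The sample data is made up.

merged_changes = [
    {"owner": "alice", "project": "openstack/nova", "branch": "master"},
    {"owner": "bob", "project": "openstack/nova", "branch": "stable/juno"},
    {"owner": "alice", "project": "openstack/glance", "branch": "stable/icehouse"},
]

def atcs(changes):
    # No filtering on branch: a stable-only contributor still shows up.
    return sorted({change["owner"] for change in changes})

print(atcs(merged_changes))  # ['alice', 'bob']
```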

Though I *personally* hope that is not the driving force to convince
people to work on stable support. If it is, then we've already lost
on this front.

 Secondly, it's difficult to get stack-analytics credit for back
 ports, as the preferred method is to cherry pick the code, and
 that keeps the original author's name. I've personally gotten a
 few commits into stable, but have nothing to show for it in
 stack-analytics (if I'm doing it wrong, I'm happy to be
 corrected).
[...]

Stackalytics isn't an official OpenStack project, but you should
file a bug[2] against it if there's a feature you want its authors
to consider adding.

[1] 
https://git.openstack.org/cgit/openstack-infra/system-config/tree/tools/atc/email_stats.py
[2] https://bugs.launchpad.net/stackalytics/+filebug
-- 
Jeremy Stanley



Re: [openstack-dev] EOL and Stable Contributions (was Juno is flubber at the gate) [metrics]

2015-02-10 Thread Mark Voelker

The voice of operators/users/deployers in this conversation should be reflected 
through the entity that they are paying to provide operational cloud services.  

Let’s be careful here: I hope you didn’t mean to say that 
operators/users/deployers voices should only be heard when they pay a vendor to 
get OpenStack (and I don’t think you did, but it read that way a bit).  

It's those directly consuming the code from openstack.org that are responsible 
here because they are the ones directly making money by either providing 
public/private cloud services, or reselling a productized OpenStack or 
providing consulting services and the like.

Sure, I agree those folks certainly have an interest...but I don’t believe it’s 
solely their responsibility and that the development community has none.  If 
the development community has no responsibility for maintaining stable code, 
why have stable branches at all?  If we aren’t incentivizing contributions to 
stable code, we’re encouraging forking, IMHO.  There’s a balance to be struck 
here.  I think what’s being voiced in this thread is that we haven’t gotten to 
that place yet where there are good incentives to contribute to stable branch 
(not just back porting fixes, but dealing with gate problems, etc as well) and 
we’d like to figure out how to improve that situation.

At Your Service,

Mark T. Voelker
OpenStack Architect

On Feb 10, 2015, at 11:21 AM, Dean Troyer dtro...@gmail.com wrote:

On Tue, Feb 10, 2015 at 9:20 AM, Kevin Bringard (kevinbri) kevin...@cisco.com 
wrote:
ATC is only being given to folks committing to the current branch 
(https://ask.openstack.org/en/question/45531/atc-pass-for-the-openstack-summit/).
 
Secondly, it's difficult to get stack-analytics credit for back ports, as the 
preferred method is to cherry pick the code, and that keeps the original 
author's name.
 
My fear is that we're going in a direction where trunk is the sole focus and 
we're subsequently going to lose the support of the majority of the operators 
and enterprises at which point we'll be a fun research project, but little more.

[I've cherry-picked above what I think are the main points here... not directed 
at you Kevin.]


This is not Somebody Else's Problem.

Stable maintenance is Not Much Fun, no question.  Those who have demanded the 
loudest that we (the development community) maintain these stable branches need 
to be the ones supporting it the most. (I have no idea how that matches up 
today, so I'm not pointing out anyone in particular.) 

* ATC credit should be given, stable branch maintenance is a contribution to 
the project, no question.

* I have a bit of a problem with stack-analytics being an issue partially 
because that is not what should be driving corporate contributions and resource 
allocation.  But it does.  Relying on a system with known anomalies like the 
cherry-pick problem gets imperfect results.

* The vast majority of the OpenStack contributors are paid to do their work by 
a (most likely) Foundation member company.  These companies choose how to 
allocate their resources, some do quite well at scratching their particular 
itches, some just make a lot of noise.  If fun is what drives them to select 
where they apply resources, then they will reap what they sow.

The voice of operators/users/deployers in this conversation should be reflected 
through the entity that they are paying to provide operational cloud services.  
It's those directly consuming the code from openstack.org that are responsible 
here because they are the ones directly making money by either providing 
public/private cloud services, or reselling a productized OpenStack or 
providing consulting services and the like.

This should not stop users/operators from contributing information, 
requirements or code in any way.  But if they have to go around their vendor 
then that vendor has failed them.

dt

-- 

Dean Troyer
dtro...@gmail.com


Re: [openstack-dev] [Glance][Artifacts] Object Version format: SemVer vs pep440

2015-02-10 Thread Ian Cordasco


On 2/10/15, 10:35, Alexander Tivelkov ativel...@mirantis.com wrote:

Thanks Monty!

Yup, probably I've missed that. I was looking at pbr and its version
implementation, but didn't realize that this is actually a fusion of
semver and pep440.

So, we have this as an extra alternative to choose from.

It would be an obvious choice if we were just looking for some common
solution to version objects within OpenStack. However, I am a bit
concerned about applying it to the Artifact Repository. As I wrote before,
we are trying to make the Repository a language- and
platform-agnostic tool for other developers, including ones
coming from non-Python and non-OpenStack worlds. Having a
versioning notation which is non-standard for everybody but OpenStack
developers does not look like a good idea to me.
--
Regards,
Alexander Tivelkov


On Tue, Feb 10, 2015 at 6:55 PM, Monty Taylor mord...@inaugust.com
wrote:
 On 02/10/2015 10:28 AM, Alexander Tivelkov wrote:
 Hi folks,

 One of the key features that we are adding to Glance with the
 introduction of Artifacts is the ability to have multiple versions of
 the same object in the repository: this gives us the possibility to
 query for the latest version of something, keep track on the changes
 history, and build various continuous delivery solutions on top of
 Artifact Repository.

 We need to determine the format and rules we will use to define,
 increment and compare versions of artifacts in the repository. There
 are two alternatives we have to choose from, and we are seeking advice
 on this choice.

 First, there is Semantic Versioning specification, available at [1].
 It is a very generic spec, widely used and adopted in many areas of
 software development. It is quite straightforward: 3 mandatory numeric
 components for version number, plus optional string labels for
 pre-release versions and build metadata.

 And then there is PEP-440 spec, which is a recommended approach to
 identifying versions and specifying dependencies when distributing
 Python. It is a pythonic way to set versions of python packages,
 including PIP version strings.

 Conceptually PEP-440 and Semantic Versioning are similar in purpose,
 but slightly different in syntax. Notably, the count of version number
 components and rules of version precedence resolution differ between
 PEP-440 and SemVer. Unfortunately, the two version string formats are
 not compatible, so we have to choose one or the other.

 According to my initial vision, the Artifact Repository should be as
 generic as possible in terms of potential adoption. The artifacts were
 never supposed to be python packages only, and even the projects which
 will create and use these artifacts are not necessarily limited to being
 pythonic; the developers of those projects may not be Python
 developers! So I really wanted to avoid any python-specific
 notations, such as PEP-440 for artifacts.

 I've put this vision into a spec [3] which also contains a proposal on
 how to convert the semver-compatible version strings into the
 comparable values which may be mapped to database types, so a database
 table may be queried, ordered and filtered by the object version.

 So, we need some feedback on this topic. Would you prefer artifacts to
 be versioned with SemVer or with PEP-440 notation? Are you interested
 in having some generic utility which will map versions (in either
 format) to database columns? If so, which version format would you
 prefer?

 We are on a tight schedule here, as we want to begin landing
 artifact-related code soon. So, I would appreciate your feedback
 during this week: here in the ML or in the comments to [3] review.

 Because you should have more things to look at:

 http://docs.openstack.org/developer/pbr/semver.html

 We've already done some work to try to reconcile the world of semver
 with the world of PEP440 over in PBR land.

 

So Semantic Versioning, as I’ve already mentioned in the past, isn’t
really a de facto standard in any language community, but it is a
language-agnostic proposal. That said, just because it’s language-agnostic
does not mean it won’t conflict with other languages’ versioning semantics.
Since we’re effectively reinventing an existing open source solution here, I
think we should look to how Artifactory [1] handles this.

I haven’t used Artifactory very much but a cursory look makes it apparent
that it is strongly decoupling the logic of 

Re: [openstack-dev] [OpenStack Foundation] Finding people to work on the EC2 API in Nova

2015-02-10 Thread M Ranga Swami Reddy
Hi Alex,
That's great. I think we should start the work without delay.
I have just created the ec2 api sub team wiki page in nova active
sub-teams list.

https://wiki.openstack.org/wiki/Nova#Active_Sub-teams:
https://wiki.openstack.org/wiki/Meetings/EC2API

For the ec2 api weekly meeting, I have already sent the voting sheet
to openstack-dev mailing list. Based on the majority,
we will fix the meeting time.
(http://lists.openstack.org/pipermail/openstack-dev/2015-February/056446.html)

PS: Will update the https://wiki.openstack.org/wiki/Meetings page,
once the meeting time finalized.

Thanks
Swami

On Tue, Feb 10, 2015 at 2:15 PM, Alexandre Levine
alev...@cloudscaling.com wrote:
 Ok, cool. Looking forward to it.

 Best regards,
   Alex Levine


 On 2/10/15 5:25 AM, M Ranga Swami Reddy wrote:

 Hi Alex Levine (you can address me  'Swami'),
 Thank you. I have been working on this EC2 APIs quite some time. We
 will work closely together on this project for reviews, code cleanup,
 bug fixing and other critical items. Currently I am looking for our sub
 team meeting slot. Once I get the meeting slot will update the wiki
 with meeting details along with the first meeting agenda. Please feel
 free to add more to the meeting agenda.

 Thanks
 Swami

 On Mon, Feb 9, 2015 at 9:50 PM, Alexandre Levine
 alev...@cloudscaling.com wrote:

 Hey M Ranga Swami Reddy (sorry, I'm not sure how to address you shorter
 :)
 ),

 After conversation in this mailing list with Michael Still I understood
 that
 I'll do the sub group and meetings stuff, since I lead the ec2-api in
 stackforge anyways. Of course I'm not that familiar with these processes
 in
 nova yet, so if you're sure that you want to take the lead for nova's
 part
 of EC2, I won't be objecting much. Please let me know what you think.

 Best regards,
Alex Levine


 On 2/9/15 4:41 PM, M Ranga Swami Reddy wrote:

 Hi All,
 I will be creating the a sub group in Nova for EC2 APIs and start the
 weekly meetings, reviews, code cleanup, etc tasks.
 Will update the same on wiki page also soon..

 Thanks
 Swami

 On Fri, Feb 6, 2015 at 9:27 PM, David Kranz dkr...@redhat.com wrote:

 On 02/06/2015 07:49 AM, Sean Dague wrote:

 On 02/06/2015 07:39 AM, Alexandre Levine wrote:

 Rushi,

 We're adding new tempest tests into our stackforge-api/ec2-api. The
 review will appear in a couple of days. These tests will be good for
 running against both nova/ec2-api and stackforge/ec2-api. As soon as
 they are there, you'll be more than welcome to add even more.

 Best regards,
  Alex Levine

 Honestly, I'm more more pro having the ec2 tests in a tree that isn't
 Tempest. Most Tempest reviewers aren't familiar with the ec2 API,
 their
 focus has been OpenStack APIs.

 Having a place where there is a review team that is dedicated only to
 the EC2 API seems much better.

   -Sean

 +1

And once similar coverage to the current tempest ec2 tests is
 achieved,
 either by copying from tempest or creating anew, we should remove the
 ec2
 tests from tempest.

-David














Re: [openstack-dev] [stable] juno is fubar in the gate

2015-02-10 Thread Jeremy Stanley
On 2015-02-10 11:50:28 -0500 (-0500), David Kranz wrote:
[...]
 I would rather give up branchless tempest than the ability for
 real distributors/deployers/operators to collaborate on stable
 branches.
[...]

Keep in mind that branchless tempest came about in part due to
downstream use cases as well, not merely as a means to simplify our
testing implementation. Specifically, the interoperability (defcore,
refstack) push was for a testing framework and testset which could
work against multiple deployed environments regardless of what
release(s) they're running and without having to decide among
multiple versions of a tool to do so (especially since they might be
mixing components from multiple OpenStack integrated releases at any
given point in time).
-- 
Jeremy Stanley



Re: [openstack-dev] [stable] juno is fubar in the gate

2015-02-10 Thread David Kranz

On 02/10/2015 12:20 PM, Jeremy Stanley wrote:

On 2015-02-10 11:50:28 -0500 (-0500), David Kranz wrote:
[...]

I would rather give up branchless tempest than the ability for
real distributors/deployers/operators to collaborate on stable
branches.

[...]

Keep in mind that branchless tempest came about in part due to
downstream use cases as well, not merely as a means to simplify our
testing implementation. Specifically, the interoperability (defcore,
refstack) push was for a testing framework and testset which could
work against multiple deployed environments regardless of what
release(s) they're running and without having to decide among
multiple versions of a tool to do so (especially since they might be
mixing components from multiple OpenStack integrated releases at any
given point in time).
Yes, but that goes out the window in the real world because tempest is 
not really branchless when we periodically
throw out older releases, as we must.  And the earlier we toss out 
things like icehouse, the less branchless it is from the 
interoperability perspective.
Also, tempest is really based on api versions of services, not 
integrated releases, so I'm not sure where mixing components comes into 
play.


In any event, this is a tradeoff, and since refstack or whoever has to 
deal with releases that are no longer supported upstream anyway,
they could just do whatever the solution is from the get-go. That said, 
I feel like the current situation is caused by a perfect storm of 
branchless tempest, unpinned versions, and running multiple releases on 
the same machine so there could be other ways to untangle things. I just 
think it is a bad idea to throw the concept of stable branches overboard 
just because the folks who care about it can't deal with the current 
complexity. Once we simplify it, some way or another, I am sure more 
folks will step up or those who have already can get more done.


 -David




Re: [openstack-dev] EOL and Stable Contributions (was Juno is flubber at the gate)

2015-02-10 Thread Jeremy Stanley
On 2015-02-10 10:21:58 -0600 (-0600), Dean Troyer wrote:
[...]
 ATC credit should be given, stable branch maintenance is a
 contribution to the project, no question.
[...]

Just to keep this particular misconception from spinning out of
control, as I mentioned elsewhere in this thread they absolutely
already do. Stable branch changes are treated no differently from
any other Gerrit changes in this regard.
-- 
Jeremy Stanley



Re: [openstack-dev] EOL and Stable Contributions (was Juno is flubber at the gate) [metrics]

2015-02-10 Thread Stefano Maffulli
On Tue, 2015-02-10 at 15:20 +, Kevin Bringard (kevinbri) wrote:
 I've been talking with a few people about this very thing lately, and
 I think much of it is caused by what appears to be our actively
 discouraging people from working on it. Most notably, ATC is only
 being given to folks committing to the current branch
 (https://ask.openstack.org/en/question/45531/atc-pass-for-the-openstack-summit/).

As Jeremy clarified, this is wrong. I edited the answer to be even more
explicit.

  Secondly, it's difficult to get stack-analytics credit for back
 ports, as the preferred method is to cherry pick the code, and that
 keeps the original author's name. I've personally gotten a few commits
 into stable, but have nothing to show for it in stack-analytics (if
 I'm doing it wrong, I'm happy to be corrected)

First I want to clarify that git history on
http://activity.openstack.org visualizes commits (merged changes, more
properly) to all branches, not just trunk.

That said, we probably still miss some attribution there because we
count committers only by looking at the author of a change. If
backports are cherry-picked and therefore retain the author then the new
owner is not *counted* as a new contributor.
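For reference, that author-preserving behaviour is simply how `git
cherry-pick` works: the original author is kept, and the backporter is only
recorded as the committer. A quick demonstration (identities and branch
names below are made up):

```shell
set -e
workdir=$(mktemp -d)
cd "$workdir"
git init -q .
git config user.name  "Original Author"
git config user.email "author@example.com"
git commit -q --allow-empty -m "base commit"
git branch stable/juno                       # stable branches off here
git commit -q --allow-empty -m "Fix gate breakage"
fix=$(git rev-parse HEAD)

git checkout -q stable/juno
git config user.name  "Backporter"
git config user.email "backporter@example.com"
# -x records "(cherry picked from commit ...)" in the message
git cherry-pick -x --allow-empty "$fix" >/dev/null
git log -1 --format='author=%an committer=%cn'
# prints: author=Original Author committer=Backporter
```

So any tool that only reads the author field never sees the backporter, even
though git does record them as the committer.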

I highlight that the scope of Activity Board is not to create vanity
charts but only to highlight trends that are useful to understand the
health of the community. It never had any intention to be precise
because 100% precision is hard.

That said, I'm adding the metrics tag because if there is a way to add
owners of back-ports to the count of contributors to OpenStack that'd be
good.

And if we want to improve the number of contributors to stable releases,
we may even create a new panel to show that trend.  Do you agree we need
to look in detail, separately, at contributors to stable and to trunk
instead of at one blob?

/stef


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] EOL and Stable Contributions (was Juno is flubber at the gate)

2015-02-10 Thread Ilya Shakhat

  Secondly, it's difficult to get stack-analytics credit for back
  ports, as the preferred method is to cherry pick the code, and
  that keeps the original author's name. I've personally gotten a
  few commits into stable, but have nothing to show for it in
  stack-analytics (if I'm doing it wrong, I'm happy to be
  corrected).
 [...]
 Stackalytics isn't an official OpenStack project, but you should
 file a bug[2] against it if there's a feature you want its authors
 to consider adding.


Stackalytics tracks commits into stable branches, e.g. for Neutron
stable/juno they are visible at
http://stackalytics.com/?metric=commits&module=neutron&release=juno.
Commits are also shown in the activity log for a specific project or person,
so if someone is interested in pulling them into a weekly report, they will
be there.

Thanks,
Ilya

2015-02-10 19:45 GMT+03:00 Jeremy Stanley fu...@yuggoth.org:

 On 2015-02-10 15:20:46 + (+), Kevin Bringard (kevinbri) wrote:
 [...]
  I've been talking with a few people about this very thing lately,
  and I think much of it is caused by what appears to be our
  actively discouraging people from working on it. Most notably, ATC
  is only being given to folks committing to the current branch
  (
 https://ask.openstack.org/en/question/45531/atc-pass-for-the-openstack-summit/
 ).

 The comments on that answer are somewhat misleading, so I'll follow
 up there as well to set the record straight. The script[1] which
 identifies ATCs for the purpose of technical elections and summit
 passes is based entirely on Gerrit owners (uploaders) of changes
 merged to official projects within a particular time period. It
 doesn't treat master differently from any other branches. People who
 do the work to upload backports to stable branches absolutely do get
 counted for this purpose. People who only review changes uploaded by
 others do not (unless they are manually added to the extra-atcs
 file in the openstack/governance repo), but that is the case for all
 branches including master so not something stable-branch specific.

 Though I *personally* hope that is not the driving force to convince
 people to work on stable support. If it is, then we've already lost
 on this front.

  Secondly, it's difficult to get stack-analytics credit for back
  ports, as the preferred method is to cherry pick the code, and
  that keeps the original author's name. I've personally gotten a
  few commits into stable, but have nothing to show for it in
  stack-analytics (if I'm doing it wrong, I'm happy to be
  corrected).
 [...]

 Stackalytics isn't an official OpenStack project, but you should
 file a bug[2] against it if there's a feature you want its authors
 to consider adding.

 [1]
 https://git.openstack.org/cgit/openstack-infra/system-config/tree/tools/atc/email_stats.py
 [2] https://bugs.launchpad.net/stackalytics/+filebug
 --
 Jeremy Stanley

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stable] juno is fubar in the gate

2015-02-10 Thread Thierry Carrez
James E. Blair wrote:
 Thierry Carrez thie...@openstack.org writes:
 
 I also disagree with the proposed solution. We announced a support
 timeframe for Icehouse, our downstream users made plans around it, so we
 should stick to it as much as we can.
 
 To be fair, if we did that, we did not communicate accurately the
 sentiment of the room at the Kilo summit.  From:
 
   https://etherpad.openstack.org/p/kilo-summit-ops-stable-branch
 
   Stable branches starting with stable/icehouse are intended to be
   supported for 15 months (previously it was 9 months), but it depends
   on the community dedicating resources to maintain stable/*
 
 There was definitely skepticism in that room.  I would characterize it
 as something like some people wanted 15 months and other people said
 that is unlikely to happen based on our track record.  I think the
 consensus was akin to okay, we'll try it and see what happens but no
 promises.

I think you are confusing two etherpads there. That was the Ops summit
session etherpad (I was in that room, and tried to encourage people to
step up there by adding that note to the etherpad).

I suspect the room you want to express the sentiment or skepticism
from was the Design Summit one, which had *this* etherpad:

https://etherpad.openstack.org/p/kilo-relmgt-stable-branches

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [devstack] configuring https for glance client

2015-02-10 Thread Andrew Lazarev
This doesn't look flexible to me. Glance and keystone could use different
settings for SSL. I like the current way of using a session and a config
section for each separate client (like [1]).

[1] https://review.openstack.org/#/c/131098/

Thanks,
Andrew.
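To make the per-client-section point concrete, here is a minimal sketch
using the stdlib configparser as a stand-in for nova's oslo.config (the
file layout and option names are illustrative, not nova's real options)
of a per-client [glance] section overriding a global [ssl] fallback:

```python
import configparser

# Illustrative nova.conf fragment: a global [ssl] section plus
# per-client sections that may override it.
SAMPLE = """
[ssl]
ca_file = /etc/nova/ca.pem

[glance]
ca_file = /etc/glance/ca.pem

[cinder]
ca_file = /etc/cinder/ca.pem
"""

conf = configparser.ConfigParser()
conf.read_string(SAMPLE)


def client_ca_file(conf, client):
    # A per-client section wins; the global [ssl] value is only a fallback
    # (configparser's fallback= also covers a missing section entirely).
    return conf.get(client, "ca_file", fallback=conf.get("ssl", "ca_file"))


print(client_ca_file(conf, "glance"))    # /etc/glance/ca.pem
print(client_ca_file(conf, "keystone"))  # no [keystone] section -> /etc/nova/ca.pem
```

The point is that with one global [ssl] group, glance and keystone cannot
diverge; with per-client groups they can, while still sharing a default.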

On Mon, Feb 9, 2015 at 6:19 PM, Matt Riedemann mrie...@linux.vnet.ibm.com
wrote:



 On 2/9/2015 5:40 PM, Andrew Lazarev wrote:

 Hi Nova experts,

 Some time ago I figured out that devstack fails to stack with the
 USE_SSL=True option because it doesn't configure nova to work with
 secured glance [1]. Support for secured glance was added to nova in the
 Juno cycle [2], but it looks strange to me.

 The glance client takes settings from the '[ssl]' section. The same
 section is used to set up nova server SSL settings. Other clients have
 separate sections in the config file (and are switching to session use
 now), e.g. related code for cinder - [3].

 I've created quick fix for the devstack - [4], but it would be nice to
 shed a light on nova plans around glance config before merging a
 workaround for devstack.

 So, the questions are:
 1. Is it normal that glance client reads from '[ssl]' config section?
 2. Is there a plan to move glance client to sessions use and move
 corresponding config section to '[glance]'?
 3. Are any plans to run CI for USE_SSL=True use case?

 [1] - https://bugs.launchpad.net/devstack/+bug/1405484
 [2] - https://review.openstack.org/#/c/72974
 [3] -
 https://github.com/openstack/nova/blob/2015.1.0b2/nova/
 volume/cinder.py#L73
 [4] - https://review.openstack.org/#/c/153737

 Thanks,
 Andrew.


 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:
 unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 This came up in another -dev thread at one point, which prompted a series
 from Matthew Gilliard [1] to use [ssl] globally or project-specific options,
 since both glance and keystone are currently getting their ssl options from
 the global [ssl] group in nova right now.

 I've been a bad citizen and haven't gotten back to the series review yet.

 [1] https://review.openstack.org/#/q/status:open+project:
 openstack/nova+branch:master+topic:ssl-config-options,n,z

 --

 Thanks,

 Matt Riedemann


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ceilometer] nova api.fault notification isn't collected by ceilometer

2015-02-10 Thread yuntong


On 2015-02-10 11:34, yuntong wrote:


On 2015-02-10 05:12, gordon chung wrote:

 In nova api, a nova api.fault notification will be sent out when there
 is an error.
http://git.openstack.org/cgit/openstack/nova/tree/nova/api/openstack/__init__.py#n119
 but I couldn't find where they are processed in ceilometer.
 An error notification is very desirable to collect; do we have a plan to
 add this, and do I need a bp to do that?
there's a patch for review to store error info: 
https://review.openstack.org/#/c/153362/



This looks good,
but what do we do with other priority messages like WARN?

-yuntong

cheers,
/gord/

Yep, that's what I'm looking for, thanks.
Another notification from nova that is missed in ceilometer is the info
notification from the nova api:

https://github.com/openstack/nova/blob/master/nova/notifications.py#L64
This notify_decorator will decorate every nova/ec2 rest api and send
out a notification for each api action:

https://github.com/openstack/nova/blob/master/nova/utils.py#L526
which will send out notifications named like %s.%s.%s % (module, key,
method),

and no notification plugin in ceilometer deals with them.
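For reference, such a decorator can be sketched roughly like this (the
NOTIFICATIONS list and the dotted event name are illustrative stand-ins,
not nova's actual oslo.messaging plumbing):

```python
import functools

NOTIFICATIONS = []  # stand-in for an oslo.messaging notifier


def notify_decorator(name, fn):
    """Sketch of a notify-style decorator: wrap an API method and emit a
    notification whose event type is the dotted name passed in, e.g.
    "module.key.method", along with the call arguments."""
    @functools.wraps(fn)
    def wrapped(*args, **kwargs):
        body = {"args": args, "kwarg": kwargs}
        NOTIFICATIONS.append((name, body))  # nova calls notifier.info() here
        return fn(*args, **kwargs)
    return wrapped


# Hypothetical API method being decorated:
def describe_instances(instance_id):
    return ["i-%s" % instance_id]

describe_instances = notify_decorator(
    "nova.api.ec2.cloud.describe_instances", describe_instances)
```

A ceilometer plugin wanting to consume these would have to match event
types of this module.key.method shape, which none currently do.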
Let me know if i should file a bug for this.
Thanks,

-yuntong




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-infra] CI failed with 401 Unauthorized when build devstack

2015-02-10 Thread liuxinguo
I updated the lib/swift file and the devstack build failed with the same error
when uploading the image.

I commented out the script in stack.sh to disable the image-uploading
procedure, but the build still failed when running cinder type-create
default; the error is still Unauthorized:

2015-02-10 07:10:03.254 | + is_service_enabled c-api
2015-02-10 07:10:03.254 | + local service=c-vol
2015-02-10 07:10:03.254 | + local 'command=/usr/local/bin/cinder-volume 
--config-file /etc/cinder/cinder.conf'
2015-02-10 07:10:03.254 | + local group=
2015-02-10 07:10:03.254 | + exec
2015-02-10 07:10:03.254 | + exec
2015-02-10 07:10:03.259 | + return 0
2015-02-10 07:10:03.259 | + is_service_enabled tls-proxy
2015-02-10 07:10:03.365 | + return 1
2015-02-10 07:10:03.365 | + create_volume_types
2015-02-10 07:10:03.365 | + is_service_enabled c-api
2015-02-10 07:10:03.393 | + return 0
2015-02-10 07:10:03.393 | + [[ -n lvm:default ]]
2015-02-10 07:10:03.393 | + local be be_name be_type
2015-02-10 07:10:03.393 | + for be in '${CINDER_ENABLED_BACKENDS//,/ }'
2015-02-10 07:10:03.393 | + be_type=lvm
2015-02-10 07:10:03.393 | + be_name=default
2015-02-10 07:10:03.393 | + cinder type-create default
2015-02-10 07:10:11.416 | ERROR: Unauthorized (HTTP 401) (Request-ID: 
req-237ec280-2cda-4864-ac9a-d708d4e4de70)
2015-02-10 07:10:11.459 | ++ err_trap
2015-02-10 07:10:11.460 | ++ local r=1
2015-02-10 07:10:11.460 | stack.sh failed: full log in 
/opt/stack/new/devstacklog.txt.2015-02-09-230158

In c-api.log, I found this:
2015-02-09 23:10:07.751 3894 ERROR keystonemiddleware.auth_token [-] HTTP 
connection exception: SSL exception connecting to https://127.0.0.1:35357/
2015-02-09 23:10:07.751 3894 WARNING keystonemiddleware.auth_token [-] 
Authorization failed for token

I think there is something wrong with keystone, but I don't know how to deal
with it; all the source files have been updated to the latest.


Date: Fri, 6 Feb 2015 13:11:13 +
From: Bob Ball bob.b...@citrix.com
To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org,
openstack-in...@lists.openstack.org
openstack-in...@lists.openstack.org
Cc: Zhangli (ISSP) zhangl...@huawei.com, Fanyaohong
fanyaoh...@huawei.com, Chenzongliang chenzongli...@huawei.com
Subject: Re: [openstack-dev] [OpenStack-Infra] Devstack error when
running g-reg: 401 Unauthorized
Message-ID:
bb824ea959b82f43820ffee5e6b00aa625ad2...@amspex01cl01.citrite.net
Content-Type: text/plain; charset=cp1256

This is likely https://launchpad.net/bugs/1415795 which is fixed by 
https://review.openstack.org/#/c/151506/

Make sure you have the above change in your devstack and it should work again.

Bob

From: liuxinguo [mailto:liuxin...@huawei.com]
Sent: 06 February 2015 03:08
To: OpenStack Development Mailing List (not for usage questions); 
openstack-in...@lists.openstack.org
Cc: Zhangli (ISSP); Fanyaohong; Chenzongliang
Subject: [openstack-dev] [OpenStack-Infra] Devstack error when running g-reg: 
401 Unauthorized

Our CI got the following error when building devstack, beginning from service
'g-reg' when uploading the image:

is_service_enabled g-reg
2015-02-05 03:14:54.966 | + return 0
2015-02-05 03:14:54.968 | ++ keystone token-get
2015-02-05 03:14:54.968 | ++ grep ' id '
2015-02-05 03:14:54.969 | ++ get_field 2
2015-02-05 03:14:54.970 | ++ local data field
2015-02-05 03:14:54.970 | ++ read data
2015-02-05 03:14:55.797 | ++ '[' 2 -lt 0 ']'
2015-02-05 03:14:55.798 | ++ field='$3'
2015-02-05 03:14:55.799 | ++ echo '| id| 
9660a765e04d4d0a8bc3f0f44b305161 |'
2015-02-05 03:14:55.800 | ++ awk '-F[ \t]*\\|[ \t]*' '{print $3}'
2015-02-05 03:14:55.802 | ++ read data
2015-02-05 03:14:55.804 | + TOKEN=9660a765e04d4d0a8bc3f0f44b305161
2015-02-05 03:14:55.804 | + die_if_not_set 1137 TOKEN 'Keystone fail to get 
token'
2015-02-05 03:14:55.804 | + local exitcode=0
2015-02-05 03:14:55.810 | + echo_summary 'Uploading images'
2015-02-05 03:14:55.810 | + [[ -t 3 ]]
2015-02-05 03:14:55.810 | + [[ True != \T\r\u\e ]]
2015-02-05 03:14:55.810 | + echo -e Uploading images
2015-02-05 03:14:55.810 | + [[ -n '' ]]
2015-02-05 03:14:55.810 | + for image_url in '${IMAGE_URLS//,/ }'
2015-02-05 03:14:55.811 | + upload_image 
http://download.cirros-cloud.net/0.3.2/cirros-0.3.2-x86_64-uec.tar.gz 
9660a765e04d4d0a8bc3f0f44b305161
2015-02-05 03:14:55.811 | + local 
image_url=http://download.cirros-cloud.net/0.3.2/cirros-0.3.2-x86_64-uec.tar.gz
2015-02-05 03:14:55.811 | + local token=9660a765e04d4d0a8bc3f0f44b305161
2015-02-05 03:14:55.811 | + local image image_fname image_name
2015-02-05 03:14:55.811 | + mkdir -p /opt/stack/new/devstack/files/images
2015-02-05 03:14:55.813 | ++ basename 
http://download.cirros-cloud.net/0.3.2/cirros-0.3.2-x86_64-uec.tar.gz
2015-02-05 03:14:55.815 | + image_fname=cirros-0.3.2-x86_64-uec.tar.gz
2015-02-05 03:14:55.815 | + [[ 
http://download.cirros-cloud.net/0.3.2/cirros-0.3.2-x86_64-uec.tar.gz != file* 
]]
2015-02-05 

[openstack-dev] [manila] two level share scheduling

2015-02-10 Thread Jason Bishop
Hi manila, I would like to broach the subject of share load balancing.
Currently the share server for a newly created (in this case NFS) share
is determined at share creation time.  In this proposal, the share
server is instead determined, late-binding style, at mount time.

For the sake of discussion, let's call the proposed idea two-level share
scheduling.

TL;DR remove share server from export_location in database and query the
driver for this at mount-time


First, a quick description of current behavior:


When a share is created (from scratch), the manila scheduler identifies a
share server from its list of backends and makes an api call to
create_share method in the appropriate driver.  The driver executes the
required steps and returns the export_location which is then written to the
database.


For example, this create command:

$ manila create --name myshare --share-network
fb7ea7de-19fb-4650-b6ac-16f918e66d1d NFS 1


would result in this

$ manila list

(table transposed for readability)

| ID              | 6d6f57f2-3ac5-46c1-ade4-2e9d48776e21                          |
| Name            | myshare                                                       |
| Size            | 1                                                             |
| Share Proto     | NFS                                                           |
| Status          | available                                                     |
| Volume Type     | None                                                          |
| Export location | 10.254.0.3:/shares/share-6d6f57f2-3ac5-46c1-ade4-2e9d48776e21 |
| Host            | jasondevstack@generic1#GENERIC1                               |


with this associated database record:


mysql> select * from shares\G
*************************** 1. row ***************************
         created_at: 2015-02-10 07:06:21
         updated_at: 2015-02-10 07:07:25
         deleted_at: NULL
            deleted: False
                 id: 6d6f57f2-3ac5-46c1-ade4-2e9d48776e21
            user_id: 848b808e91e5462f985b6131f8a905e8
         project_id: ed01cbf358f74ff08263f9672b2cdd01
               host: jasondevstack@generic1#GENERIC1
               size: 1
  availability_zone: nova
             status: available
       scheduled_at: 2015-02-10 07:06:21
        launched_at: 2015-02-10 07:07:25
      terminated_at: NULL
       display_name: myshare
display_description: NULL
        snapshot_id: NULL
   share_network_id: fb7ea7de-19fb-4650-b6ac-16f918e66d1d
    share_server_id: c2602adb-0602-4128-9d1c-4024024a069a
        share_proto: NFS
    export_location: 10.254.0.3:/shares/share-6d6f57f2-3ac5-46c1-ade4-2e9d48776e21
     volume_type_id: NULL
1 row in set (0.00 sec)




Proposed scheme:

The proposal is simple in concept.  Instead of the driver
(GenericShareDriver for example) returning both share server ip address and
path in share export_location, only the path is returned and saved in the
database.  The binding of the share server ip address is only determined at
share mount time.  In practical terms this means the share server is
determined by an api call to the driver when _get_shares is called.  The
driver would
then have the option of determining which IP address from its basket of
addresses to return.  In this way, each share mount event presents an
opportunity for the NFS traffic to be balanced over all available network
endpoints.

A possible signature for this new call might look like this (with the
GenericShareDriver having the simple implementation of returning
server['public_address']):


def get_share_server_address(self, ctx, share, share_server):
    """Return the IP address of a share server for the given share."""
    # implementation-dependent logic to determine the IP address
    address = self._myownloadfilter()
    return address
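As an illustration of the kind of policy a driver could plug in here (the
class and attribute names below are hypothetical, not part of any existing
manila driver), a backend with several NFS endpoints could answer each
mount-time lookup round-robin:

```python
import itertools


class RoundRobinAddressDriver:
    """Hypothetical driver mixin: rotate mount-time lookups across a
    fixed set of share-server addresses so NFS traffic is spread over
    all available network endpoints."""

    def __init__(self, addresses):
        # e.g. ["10.254.0.3", "10.254.0.4", "10.254.0.5"]
        self._addresses = itertools.cycle(addresses)

    def get_share_server_address(self, ctx, share, share_server):
        # Each mount event binds the share to the next endpoint in turn.
        return next(self._addresses)
```

With this in place, two consecutive mounts of the same share would be
pointed at two different endpoints, which is exactly the load-spreading
opportunity described above.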



Off the top of my head I see potential uses including:

o balance load over several glusterfs servers

o balance load over several NFS/CIFS share servers which have multiple
NICs

o balance load over several generic share servers which are exporting
read-only volumes (such as software repositories)

o I think isilon should also benefit, but I will defer to somebody more
knowledgeable on the subject


I see following cons:

   o slow manila list performance

   o very slow manila list performance if all share drivers are busy doing
long operations such as create/delete share


Interested in your thoughts

Jason
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

Re: [openstack-dev] [OpenStack Foundation] Finding people to work on the EC2 API in Nova

2015-02-10 Thread Alexandre Levine

Ok, cool. Looking forward to it.

Best regards,
  Alex Levine

On 2/10/15 5:25 AM, M Ranga Swami Reddy wrote:

Hi Alex Levine (you can address me as 'Swami'),
Thank you. I have been working on the EC2 APIs for quite some time. We
will work closely together on this project on reviews, code cleanup,
bug fixing and other critical items. Currently I am looking for our
sub-team meeting slot. Once I get the meeting slot I will update the wiki
with the meeting details along with the first meeting agenda. Please feel
free to add more to the meeting agenda.

Thanks
Swami

On Mon, Feb 9, 2015 at 9:50 PM, Alexandre Levine
alev...@cloudscaling.com wrote:

Hey M Ranga Swami Reddy (sorry, I'm not sure how to address you shorter :)),

After conversation in this mailing list with Michael Still I understood that
I'll do the sub group and meetings stuff, since I lead the ec2-api in
stackforge anyways. Of course I'm not that familiar with these processes in
nova yet, so if you're sure that you want to take the lead for nova's part
of EC2, I won't be objecting much. Please let me know what you think.

Best regards,
   Alex Levine


On 2/9/15 4:41 PM, M Ranga Swami Reddy wrote:

Hi All,
I will be creating a sub group in Nova for EC2 APIs and start the
weekly meetings, reviews, code cleanup, and other tasks.
I will update the same on the wiki page soon.

Thanks
Swami

On Fri, Feb 6, 2015 at 9:27 PM, David Kranz dkr...@redhat.com wrote:

On 02/06/2015 07:49 AM, Sean Dague wrote:

On 02/06/2015 07:39 AM, Alexandre Levine wrote:

Rushi,

We're adding new tempest tests into our stackforge-api/ec2-api. The
review will appear in a couple of days. These tests will be good for
running against both nova/ec2-api and stackforge/ec2-api. As soon as
they are there, you'll be more than welcome to add even more.

Best regards,
 Alex Levine


Honestly, I'm much more pro having the ec2 tests in a tree that isn't
Tempest. Most Tempest reviewers aren't familiar with the ec2 API; their
focus has been OpenStack APIs.

Having a place where there is a review team that is dedicated only to
the EC2 API seems much better.

  -Sean


+1

   And once similar coverage to the current tempest ec2 tests is achieved,
either by copying from tempest or creating anew, we should remove the ec2
tests from tempest.

   -David




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Feature Freeze Exception Request

2015-02-10 Thread Daniel P. Berrange
On Fri, Feb 06, 2015 at 10:47:29AM -0500, Solly Ross wrote:
 Hi,
 
 I would like to request a feature freeze exception for the
 Websockify security proxy framework blueprint [1].
 
 The blueprint introduces a framework for defining security drivers for the
 connections between the websocket proxy and the hypervisor, and provides
 a TLS driver for VNC connections using the VeNCrypt RFB auth method.
 
 The two patches [2] have sat in place with one +2 (Dan Berrange) and
 multiple +1s for a while now (the first does not currently show any votes
 because of a merge conflict that I had to deal with recently).

I'll sponsor this one obviously since I have already +2'd it. I'd really
like to see this feature merged for Kilo, since we already delayed its
merge in Juno due to lack of reviewer attention, and it would give a
pretty poor impression to delay it another 6 months for the same reason.

The feature itself does touch codepaths which impact existing code, but
I think the risk is low because the feature is opt-in, so out of the box
users will still be using the existing proven stable codepaths.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stable] juno is fubar in the gate

2015-02-10 Thread Thierry Carrez
Joe, Matt  Matthew:

I hear your frustration with broken stable branches. With my
vulnerability management team member hat, responsible for landing
patches there with a strict deadline, I can certainly relate with the
frustration of having to dive in to unbork the branch in the first
place, rather than concentrate on the work you initially planned on doing.

That said, wearing my stable team member hat, I think it's a bit unfair
to say that things are worse than they were and to call for dramatic
action. The stable branch team put a structure in place to try to
continuously fix the stable branches rather than reactively fix them when
we need them to work. Those champions have been quite active[1] unbreaking
them in the past months. I'd argue that the branches are broken much less
often than they used to be. That doesn't mean they're never broken, though,
or that those people are magicians.

One issue in the current situation is that the two groups (you and the
stable maintainers) seem to work in parallel rather than collaborate.
It's quite telling that the two groups maintained separate etherpads to
keep track of the fixes that needed landing.

[1] https://etherpad.openstack.org/p/stable-tracker

Matthew Treinish wrote:
 So I think it's time we called the icehouse branch and marked it EOL. We
 originally conditioned the longer support window on extra people stepping
 forward to keep things working. I believe this latest issue is just the latest
 indication that this hasn't happened. Issue 1 listed above is being caused by
 the icehouse branch during upgrades. The fact that a stable release was pushed
 at the same time things were wedged on the juno branch is just the latest
 evidence to me that things aren't being maintained as they should be. Looking 
 at
 the #openstack-qa irc log from today or the etherpad about trying to sort this
 issue should be an indication that no one has stepped up to help with the
 maintenance and it shows given the poor state of the branch.

I disagree with the assessment. People have stepped up. I think the
stable branches are less often broken than they were, and the stable
branch champions (as their tracking etherpad shows) have made a
difference. There have just been more issues than usual recently and they
probably couldn't keep up. It's not a fun job to babysit stable branches,
and belittling the stable branch champions' results is not the best way
to encourage them to continue in this position. I agree that they could
work more with the QA team when they get overwhelmed, and raise more red
flags when they just can't keep up.

I also disagree with the proposed solution. We announced a support
timeframe for Icehouse, our downstream users made plans around it, so we
should stick to it as much as we can. If we dropped stable branch
support every time a patch can't be landed there, there would just not
be any stable branch.

Joe Gordon wrote:
 Where is it documented that Adam is the Juno branch champion and Ihar is
 Icehouse's? I didn't see it anywhere in the wiki.

It was announced here:
http://lists.openstack.org/pipermail/openstack-dev/2014-November/050390.html

I agree it should be documented on the wiki.

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Feature Freeze Exception request for Add the Nova libvirt StorPool attachment driver.

2015-02-10 Thread Daniel P. Berrange
On Tue, Feb 10, 2015 at 12:52:38PM +0200, Peter Penchev wrote:
 On Tue, Feb 10, 2015 at 12:34 PM, Daniel P. Berrange
 berra...@redhat.com wrote:
  On Tue, Feb 10, 2015 at 12:28:27PM +0200, Peter Penchev wrote:
  Hi,
 
  I'd like to request a feature freeze exception for the change:
https://review.openstack.org/140733/
 
  It's a small, self-contained driver for attaching StorPool-backed
  Cinder volumes to existing Nova virtual machines.  It does not touch
  anything outside its own bailiwick (adds a new class to
  nova/virt/libvirt/volume.py) and is just a couple of lines of code,
  passing requests through to the JSON-over-HTTP StorPool API.
 
  The Cinder support for volumes created on a StorPool cluster was
  merged before Kilo-1, and IMVHO it would make sense to have a Nova
  attachment driver to actually use the Cinder support.
 
  The change was reviewed by Daniel Berrange, who posted a very
  reasonable suggestion for restructuring the code, which I implemented
  within a few days.  I would be very grateful if somebody could take a
  look at the proposed addition and consider it for a feature freeze
  exception.
 
  Thanks in advance for your time, and keep up the great work!
 
  The Feature Freeze Exception process is for approving the merge of
  changes whose specs/blueprints are already approved. Unfortunately,
  IIUC, your blueprint is not approved, so it is not possible to
  request a FFE for merging the changes in question.
 
  The deadline for requesting Spec/Blueprint Freeze Exceptions has
  passed quite a while ago now.
 
 Hi,
 
 Actually, the blueprint was approved quite some time ago, and it also
 has a favorable comment by John Garbutt from January 26th (two days
 after I submitted the updated change addressing your concerns).  Then
 on February 5th Thierry Carrez changed its status to Pending
 approval with a comment stating that it missed the non-priority
 feature freeze deadline (I think that the johnthetubaguy attribution
 at the end of the comment is more of a copy/paste error, since the
 change indeed seems to have been done by Thierry Carrez).

Ok, will need John G to clarify then. I was judging from the fact that
the status is Pending approval and the most recent comment says
Sorry, we have now hit the non-priority feature freeze for kilo. Please
 resubmit your spec for the L release

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] A question about strange behavior of oslo.config in eclipse

2015-02-10 Thread Joshua Zhang
Hi Stackers,
   A question about oslo.config, maybe a very silly one, but please tell
me if you know; thanks in advance.

   I know oslo has removed the 'oslo' namespace: oslo.config has been
changed to oslo_config, and it also retains backwards compat.

   I found I can run openstack successfully, but whenever I run something
in eclipse/pydev it fails with something like 'NoSuchOptError: no such
option: policy_file'. I can change 'oslo.config' to 'oslo_config' in
neutron/openstack/common/policy.py temporarily to bypass the problem when
I want to debug something in eclipse. But I want to know why. Can anyone
help explain? Thanks.
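For what it's worth, that backwards compat works by aliasing module
objects at import time. The sketch below is a hypothetical stand-in (not
the actual oslo shim code) showing the mechanism, and why an environment
that ends up with two independent copies of the package, which an
IDE-launched interpreter can, gets two separate option registries and
errors such as NoSuchOptError:

```python
import sys
import types

# Build a fake "new-style" module and register the old dotted name as an
# alias of it, the way a deprecated-namespace shim does.
oslo_config = types.ModuleType("oslo_config")
oslo_config.cfg = types.SimpleNamespace(registered=set())  # stand-in for cfg
sys.modules["oslo_config"] = oslo_config

oslo_pkg = types.ModuleType("oslo")
oslo_pkg.config = oslo_config
sys.modules["oslo"] = oslo_pkg
sys.modules["oslo.config"] = oslo_config  # old spelling -> same module object

from oslo.config import cfg  # resolves through the alias above

cfg.registered.add("policy_file")

# Both spellings now see one shared registry:
from oslo_config import cfg as cfg_new
assert "policy_file" in cfg_new.registered

# But a second, independently loaded copy of the package would have its
# own empty registry, so looking up "policy_file" there fails, much like
# the NoSuchOptError seen under eclipse/pydev.
stale_cfg = types.SimpleNamespace(registered=set())
assert "policy_file" not in stale_cfg.registered
```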


-- 
Best Regards
Zhang Hua(张华)
Software Engineer | Canonical
IRC:  zhhuabj
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Fwd: [Neutron][DVR]Neutron distributed SNAT

2015-02-10 Thread Wilence Yao
Hi all,
  After OpenStack Juno, floating IPs are handled by DVR, but SNAT is still
handled by the L3 agent on the network node. Distributed SNAT is in the future
plans for DVR. In my opinion, SNAT can move to DVR just as floating IPs did.
I have searched the blueprints; there is little about distributed SNAT. Is
there any difference between distributed floating IP and distributed SNAT?

Thanks for any suggestions.

Wilence Yao


Re: [openstack-dev] [nova] Feature Freeze Exception request

2015-02-10 Thread Daniel P. Berrange
On Tue, Feb 10, 2015 at 11:55:54AM +0200, Eduard Matei wrote:
 Hi,
 
 I would like to request a FFE for
 https://review.openstack.org/#/c/134134/
 
 It is a change that allows a driver to use type "file" instead of "block"
 for local volumes.
 It was blocked because at that time there was no use for it.
 Now, we have a driver merged which currently returns "local" for
 driver_volume_type; we would like to return driver_volume_type: "file"
 (which is the correct type) which will then result (via this change) in a
 disk type = "file" in the xml being generated.
 
 Also, this changeset doesn't remove any code, doesn't overwrite any code,
 only adds a new type of LibvirtVolumeDriver.

I'll sponsor this on the basis that the code is self-contained and so no
risk to existing libvirt features, and more importantly the cinder side
has been merged for Kilo, so it would be a poor message to users if we
did not merge the corresponding Nova half of the feature.

Regards,
Daniel



Re: [openstack-dev] [nova] Feature Freeze Exception Request for Quiesce boot from volume instances

2015-02-10 Thread Daniel P. Berrange
On Fri, Feb 06, 2015 at 10:20:04PM +, Tomoki Sekiyama wrote:
 Hello,
 
 I'd like to request a feature freeze exception for the change
   https://review.openstack.org/#/c/138795/ .
 
 This patch makes live volume-boot instance snapshots consistent by
 quiescing instances before snapshotting. Quiescing for image-boot
 instances are already merged in the libvirt driver, and this is a
 complementary part for volume-boot instances.
 
 
 Nikola Dipanov and Daniel Berrange actively reviewed the patch and I hope
 it is ready now (+1 from Nikola with a comment that he is waiting for the
 FFE process at this point so no +2s).
 Please consider approving this FFE.

I'm happy to sponsor this one having given it multiple reviews

You could probably even argue this feature is in fact a bug fix since
it fixes the problem of inconsistent snapshots which can result in guest
application data corruption in the worst case.

Regards,
Daniel



Re: [openstack-dev] [nova] Feature Freeze Exception Request (Use libvirt storage pools)

2015-02-10 Thread Daniel P. Berrange
On Fri, Feb 06, 2015 at 05:29:33PM -0500, Solly Ross wrote:
 Hi,
 
 I would like to request a non-priority feature freeze exception for the 
 Use libvirt storage pools blueprint [1].
 
 The blueprint introduces a new image backend type that uses libvirt storage 
 pools,
 and is designed to supercede several of the existing image backends for Nova.
 Using libvirt storage pools simplifies both the maintenance of existing code
 and the introduction of future storage pool types (since we can support
 any libvirt storage pool format that supports the createXMLFrom API call).
 It also paves the way for potentially using the storage pool API to assist
 with SSH-less migration of disks (not part of this blueprint).
 The blueprint also provides a way to migrate disks using legacy backends
 to the new backend on cold migrations/resizes, reboots (soft and hard),
 and live block migrations.
 
 The code [2] is up and working, and is split into (hopefully) manageable 
 chunks.
 
 Best Regards,
 Solly Ross
 
 [1] 
 http://specs.openstack.org/openstack/nova-specs/specs/kilo/approved/use-libvirt-storage-pools.html
 [2] https://review.openstack.org/#/c/152348/ and onward
 
 P.S. I would really like to get this in, since this would be the second time 
 that
 this has been deferred, and took a good bit of manual rebasing to create the 
 Kilo
 version from the Juno version.

Much as I'd like to see this feature in Nova, my recommendation is to
reject this FFE. The image cache management code in particular has been
a long term source of bugs and unexpected behaviour. While I think this
series improves the code in question, the potential for causing regressions
is none the less very high.

An overhaul of this area of code is really something that needs to get
merged very early in a dev cycle (ie Kilo-1, or the start of Kilo-2)
in order to allow enough opportunity to stabilise it.

The first posting for Kilo was only done on Jan 21, before that the
previous update was Sept 7. I'm afraid this work was just far too
late in Kilo to stand a reasonable chance of merging in Kilo given
how complex the code in this area is.

Regards,
Daniel



Re: [openstack-dev] [nova] Feature Freeze Exception Request for Quiesce boot from volume instances

2015-02-10 Thread Nikola Đipanov
On 02/10/2015 11:12 AM, Daniel P. Berrange wrote:
 On Fri, Feb 06, 2015 at 10:20:04PM +, Tomoki Sekiyama wrote:
 Hello,

 I'd like to request a feature freeze exception for the change
   https://review.openstack.org/#/c/138795/ .

 This patch makes live volume-boot instance snapshots consistent by
 quiescing instances before snapshotting. Quiescing for image-boot
 instances are already merged in the libvirt driver, and this is a
 complementary part for volume-boot instances.


 Nikola Dipanov and Daniel Berrange actively reviewed the patch and I hope
 it is ready now (+1 from Nikola with a comment that he is waiting for the
 FFE process at this point so no +2s).
 Please consider approving this FFE.
 
 I'm happy to sponsor this one having given it multiple reviews
 
 You could probably even argue this feature is in fact a bug fix since
 it fixes the problem of inconsistent snapshots which can result in guest
 application data corruption in the worst case.
 

I will sponsor it too - it basically one patch that is ready to merge
and has had a number of reviews.

N.




[openstack-dev] [cinder] Question about the plan of L

2015-02-10 Thread liuxinguo
Hi,

In Kilo, cinder drivers were requested to be merged before K-1. I want to ask:
in L, will drivers be requested to be merged before L-1?

Thanks and regards,
Liu


[openstack-dev] Cross-Project meeting, Tue February 10th, 21:00 UTC

2015-02-10 Thread Thierry Carrez
Dear PTLs, cross-project liaisons and anyone else interested,

We'll have a cross-project meeting today at 21:00 UTC, with the
following agenda:

* Status update on novanet2neutron
* API_Working_Group[1] update (etoews)
* EOL stable/icehouse [2]
* openstack-specs discussion
  * CLI Sorting Argument Guidelines [3] -- ready to move to TC
rubberstamping ?
  * Add TRACE definition to log guidelines [4] -- final draft ?
  * Cross-Project spec to eliminate SQL Schema Downgrades [5]
  * Testing guidelines [6]
* Open discussion & announcements

[1] https://wiki.openstack.org/wiki/API_Working_Group
[2]
http://lists.openstack.org/pipermail/openstack-dev/2015-February/056366.html
[3] https://review.openstack.org/#/c/145544/
[4] https://review.openstack.org/#/c/145245/
[5] https://review.openstack.org/#/c/152337/
[6] https://review.openstack.org/#/c/150653/

See you there !

For more details on this meeting, please see:
https://wiki.openstack.org/wiki/Meetings/CrossProjectMeeting

-- 
Thierry Carrez (ttx)



[openstack-dev] [nova] Feature Freeze Exception request

2015-02-10 Thread Eduard Matei
Hi,

I would like to request a FFE for
https://review.openstack.org/#/c/134134/

It is a change that allows a driver to use type "file" instead of "block"
for local volumes.
It was blocked because at that time there was no use for it.
Now, we have a driver merged which currently returns "local" for
driver_volume_type; we would like to return driver_volume_type: "file"
(which is the correct type) which will then result (via this change) in a
disk type = "file" in the xml being generated.

Also, this changeset doesn't remove any code, doesn't overwrite any code,
only adds a new type of LibvirtVolumeDriver.

Thanks,

Eduard
-- 

*Eduard Biceri Matei, Senior Software Developer*
www.cloudfounders.com
 | eduard.ma...@cloudfounders.com



*CloudFounders, The Private Cloud Software Company*



[openstack-dev] [nova] Feature Freeze Exception request for Add the Nova libvirt StorPool attachment driver.

2015-02-10 Thread Peter Penchev
Hi,

I'd like to request a feature freeze exception for the change:
  https://review.openstack.org/140733/

It's a small, self-contained driver for attaching StorPool-backed
Cinder volumes to existing Nova virtual machines.  It does not touch
anything outside its own bailiwick (adds a new class to
nova/virt/libvirt/volume.py) and is just a couple of lines of code,
passing requests through to the JSON-over-HTTP StorPool API.

The Cinder support for volumes created on a StorPool cluster was
merged before Kilo-1, and IMVHO it would make sense to have a Nova
attachment driver to actually use the Cinder support.

The change was reviewed by Daniel Berrange, who posted a very
reasonable suggestion for restructuring the code, which I implemented
within a few days.  I would be very grateful if somebody could take a
look at the proposed addition and consider it for a feature freeze
exception.

Thanks in advance for your time, and keep up the great work!

G'luck,
Peter



Re: [openstack-dev] [nova] Feature Freeze Exception Request - bp/libvirt-kvm-systemz

2015-02-10 Thread Daniel P. Berrange
On Mon, Feb 09, 2015 at 05:15:26PM +0100, Andreas Maier wrote:
 
 Hello,
 I would like to ask for the following feature freeze exceptions in Nova.
 
 The patch sets below are all part of this blueprint:
 https://review.openstack.org/#/q/status:open+project:openstack/nova
 +branch:master+topic:bp/libvirt-kvm-systemz,n,z
 and affect only the kvm/libvirt driver of Nova.
 
 The decision for merging these patch sets by exception can be made one by
 one; they are independent of each other.
 
 1. https://review.openstack.org/149242 - FCP support
 
Title: libvirt: Adjust Nova to support FCP on System z systems
 
What it does: This patch set enables FCP support for KVM on System z.
 
Impact if we don't get this: FCP attached storage does not work for KVM
on System z.
 
Why we need it: We really depend on this particular patch set, because
FCP is our most important storage attachment.
 
Additional notes: The code in the libvirt driver that is updated by this
patch set is consistent with corresponding code in the Cinder driver,
and it has seen review by the Cinder team.
 
 2. https://review.openstack.org/150505 - Console support
 
Title: libvirt: Enable serial_console feature for system z
 
What it does: This patch set enables the backing support in Nova for the
interactive console in Horizon.
 
Impact if we don't get this: Console in Horizon does not work. The
mitigation for a user would be to use the "Log" in Horizon (i.e. with
serial_console disabled), or the "virsh console" command in an ssh
session to the host Linux.
 
Why we need it: We'd like to have console support. Also, because the
Nova support for the "Log" in Horizon has been merged in an earlier patch
set as part of this blueprint, this remaining patch set makes the
console/log support consistent for KVM on System z Linux.
 
 3. https://review.openstack.org/150497 - ISO/CDROM support
 
Title: libvirt: Set SCSI as the default cdrom bus on System z
 
What it does: This patch set enables that cdrom drives can be attached
to an instance on KVM on System z. This is needed for example for
cloud-init config files, but also for simply attaching ISO images to
instances. The technical reason for this change is that the IDE
attachment is not available on System z, and we need SCSI (just like
Power Linux).
 
Impact if we don't get this:
   - Cloud-init config files cannot be on a cdrom drive. A mitigation
  for a user would be to have such config files on a cloud-init
  server.
   - ISO images cannot be attached to instances. There is no mitigation.
 
Why we need it: We would like to avoid having to restrict cloud-init
configuration to just using cloud-init servers. We would like to be able
to support ISO images.
 
Additional notes: This patch is a one line change (it simply extends
what is already done in a platform specific case for the Power platform,
to be also used for System z).

I will happily sponsor an exception on patches 2 & 3, since they are pretty
trivial & easily understood.


I will tentatively sponsor patch 1, if other reviewers feel able to do a
strong review of the SCSI stuff, since SCSI host setup is not
something I'm particularly familiar with.

Regards,
Daniel



Re: [openstack-dev] [nova] Feature Freeze Exception request for Add the Nova libvirt StorPool attachment driver.

2015-02-10 Thread Daniel P. Berrange
On Tue, Feb 10, 2015 at 12:28:27PM +0200, Peter Penchev wrote:
 Hi,
 
 I'd like to request a feature freeze exception for the change:
   https://review.openstack.org/140733/
 
 It's a small, self-contained driver for attaching StorPool-backed
 Cinder volumes to existing Nova virtual machines.  It does not touch
 anything outside its own bailiwick (adds a new class to
 nova/virt/libvirt/volume.py) and is just a couple of lines of code,
 passing requests through to the JSON-over-HTTP StorPool API.
 
 The Cinder support for volumes created on a StorPool cluster was
 merged before Kilo-1, and IMVHO it would make sense to have a Nova
 attachment driver to actually use the Cinder support.
 
 The change was reviewed by Daniel Berrange, who posted a very
 reasonable suggestion for restructuring the code, which I implemented
 within a few days.  I would be very grateful if somebody could take a
 look at the proposed addition and consider it for a feature freeze
 exception.
 
 Thanks in advance for your time, and keep up the great work!

The Feature Freeze Exception process is for approving the merge of
changes whose specs/blueprints are already approved. Unfortunately,
IIUC, your blueprint is not approved, so it is not possible to
request a FFE for merging the changes in question.

The deadline for requesting Spec/Blueprint Freeze Exceptions has
passed quite a while ago now.

Regards,
Daniel



Re: [openstack-dev] [all][oslo.db][nova] TL; DR Things everybody should know about Galera

2015-02-10 Thread Attila Fazekas




- Original Message -
 From: Jay Pipes jaypi...@gmail.com
 To: openstack-dev@lists.openstack.org
 Sent: Monday, February 9, 2015 9:36:45 PM
 Subject: Re: [openstack-dev] [all][oslo.db][nova] TL; DR Things everybody 
 should know about Galera
 
 On 02/09/2015 03:10 PM, Clint Byrum wrote:
  Excerpts from Jay Pipes's message of 2015-02-09 10:15:10 -0800:
  On 02/09/2015 01:02 PM, Attila Fazekas wrote:
  I do not see why not to use `FOR UPDATE` even with multi-writer or
  Is the retry/swap way really solves anything here.
  snip
  Am I missed something ?
 
   Yes. Galera does not replicate the (internal to InnoDB) row-level locks
  that are needed to support SELECT FOR UPDATE statements across multiple
  cluster nodes.
 
  https://groups.google.com/forum/#!msg/codership-team/Au1jVFKQv8o/QYV_Z_t5YAEJ
 
  Attila acknowledged that. What Attila was saying was that by using it
  with Galera, the box that is doing the FOR UPDATE locks will simply fail
  upon commit because a conflicting commit has already happened and arrived
  from the node that accepted the write. Further what Attila is saying is
  that this means there is not such an obvious advantage to the CAS method,
  since the rollback and the # updated rows == 0 are effectively equivalent
  at this point, seeing as the prior commit has already arrived and thus
  will not need to wait to fail certification and be rolled back.
 
 No, that is not correct. In the case of the CAS technique, the frequency
 of rollbacks due to certification failure is demonstrably less than when
 using SELECT FOR UPDATE and relying on the certification timeout error
 to signal a deadlock.
 
  I am not entirely certain that is true though, as I think what will
  happen in sequential order is:
 
  writer1: UPDATE books SET genre = 'Scifi' WHERE genre = 'sciencefiction';
  writer1: -- send in-progress update to cluster
  writer2: SELECT FOR UPDATE books WHERE id=3;
  writer1: COMMIT
  writer1: -- try to certify commit in cluster
  ** Here is where I stop knowing for sure what happens **
  writer2: certifies writer1's transaction or blocks?
 
 It will certify writer1's transaction. It will only block another thread
 hitting writer2 requesting write locks or write-intent read locks on the
 same records.
 
  writer2: UPDATE books SET genre = 'sciencefiction' WHERE id=3;
  writer2: COMMIT -- One of them is rolled back.
 

The other transaction can be rolled back before you do an actual commit:
writer1: BEGIN
writer2: BEGIN
writer1: update test set val=42 where id=1;
writer2: update test set val=42 where id=1;
writer1: COMMIT
writer2: show variables;
ERROR 1213 (40001): Deadlock found when trying to get lock; try restarting 
transaction

As you can see, the second transaction failed without issuing a COMMIT after
the first one committed.
You could send anything to MySQL on writer2 at this point;
 even invalid statements return `Deadlock`.

  So, at that point where I'm not sure (please some Galera expert tell
  me):
 
  If what happens is as I suggest, writer1's transaction is certified,
  then that just means the lock sticks around blocking stuff on writer2,
  but that the data is updated and it is certain that writer2's commit will
  be rolled back. However, if it blocks waiting on the lock to resolve,
  then I'm at a loss to determine which transaction would be rolled back,
  but I am thinking that it makes sense that the transaction from writer2
  would be rolled back, because the commit is later.
 
 That is correct. writer2's transaction would be rolled back. The
 difference is that the CAS method would NOT trigger a ROLLBACK. It would
 instead return 0 rows affected, because the UPDATE statement would
 instead look like this:
 
 UPDATE books SET genre = 'sciencefiction' WHERE id = 3 AND genre = 'SciFi';
 
 And the return of 0 rows affected would trigger a simple retry of the
 read and then update attempt on writer2 instead of dealing with ROLLBACK
 semantics on the transaction.
 
 Note that in the CAS method, the SELECT statement and the UPDATE are in
 completely different transactions. This is a very important thing to
 keep in mind.
 
  All this to say that usually the reason for SELECT FOR UPDATE is not
  to only do an update (the transactional semantics handle that), but
  also to prevent the old row from being seen again, which, as Jay says,
  it cannot do.  So I believe you are both correct:
 
  * Attila, yes I think you're right that CAS is not any more efficient
  at replacing SELECT FOR UPDATE from a blocking standpoint.
 
 It is more efficient because there are far fewer ROLLBACKs of
 transactions occurring in the system.
 
 If you look at a slow query log (with a 0 slow query time) for a MySQL
 Galera server in a multi-write cluster during a run of Tempest or Rally,
 you will notice that the number of ROLLBACK statements is extraordinary.
 AFAICR, when Peter Boros and I benchmarked a Rally launch and delete 10K
 VM run, we saw nearly 11% of *total* queries executed 
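To make the read-then-conditional-update pattern under discussion concrete, here is a hedged, self-contained sketch. It uses sqlite3 from the standard library rather than MySQL/Galera (in Nova the same idea is expressed through SQLAlchemy), and the table and column names are borrowed from the example above, so it only illustrates the shape of the CAS logic, not the production code:

```python
import sqlite3

# Compare-and-swap (CAS) sketch: read the current value, then issue an
# UPDATE whose WHERE clause re-checks that value, and retry on
# "0 rows affected" instead of relying on ROLLBACK semantics.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE books (id INTEGER PRIMARY KEY, genre TEXT)")
conn.execute("INSERT INTO books VALUES (3, 'SciFi')")
conn.commit()

def cas_update_genre(conn, book_id, new_genre, max_retries=5):
    for _ in range(max_retries):
        # Read in one short transaction...
        (seen,) = conn.execute(
            "SELECT genre FROM books WHERE id = ?", (book_id,)).fetchone()
        # ...then swap in a separate one, guarded by the value we saw.
        cur = conn.execute(
            "UPDATE books SET genre = ? WHERE id = ? AND genre = ?",
            (new_genre, book_id, seen))
        conn.commit()
        if cur.rowcount == 1:
            return True   # the swap succeeded
        # rowcount == 0: another writer changed the row first; re-read, retry.
    return False

assert cas_update_genre(conn, 3, "sciencefiction")
```

Note the contrast with SELECT ... FOR UPDATE: nothing here holds a row lock across statements, so a conflicting writer produces a zero-row update and a cheap application-level retry rather than a certification-failure rollback.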

Re: [openstack-dev] [nova] Feature Freeze Exception request for Add the Nova libvirt StorPool attachment driver.

2015-02-10 Thread Peter Penchev
On Tue, Feb 10, 2015 at 12:34 PM, Daniel P. Berrange
berra...@redhat.com wrote:
 On Tue, Feb 10, 2015 at 12:28:27PM +0200, Peter Penchev wrote:
 Hi,

 I'd like to request a feature freeze exception for the change:
   https://review.openstack.org/140733/

 It's a small, self-contained driver for attaching StorPool-backed
 Cinder volumes to existing Nova virtual machines.  It does not touch
 anything outside its own bailiwick (adds a new class to
 nova/virt/libvirt/volume.py) and is just a couple of lines of code,
 passing requests through to the JSON-over-HTTP StorPool API.

 The Cinder support for volumes created on a StorPool cluster was
 merged before Kilo-1, and IMVHO it would make sense to have a Nova
 attachment driver to actually use the Cinder support.

 The change was reviewed by Daniel Berrange, who posted a very
 reasonable suggestion for restructuring the code, which I implemented
 within a few days.  I would be very grateful if somebody could take a
 look at the proposed addition and consider it for a feature freeze
 exception.

 Thanks in advance for your time, and keep up the great work!

 The Feature Freeze Exception process is for approving the merge of
 changes whose specs/blueprints are already approved. Unfortunately,
 IIUC, your blueprint is not approved, so it is not possible to
 request a FFE for merging the changes in question.

 The deadline for requesting Spec/Blueprint Freeze Exceptions has
 passed quite a while ago now.

Hi,

Actually, the blueprint was approved quite some time ago, and it also
has a favorable comment by John Garbutt from January 26th (two days
after I submitted the updated change addressing your concerns).  Then
on February 5th Thierry Carrez changed its status to Pending
approval with a comment stating that it missed the non-priority
feature freeze deadline (I think that the johnthetubaguy attribution
at the end of the comment is more of a copy/paste error, since the
change indeed seems to have been done by Thierry Carrez).

G'luck,
Peter



[openstack-dev] [heat] Repeating stack-delete many times

2015-02-10 Thread Kairat Kushaev
Hi all,
During the analysis of the following bug:
https://bugs.launchpad.net/heat/+bug/1418878
I figured out that the orchestration engine doesn't work properly in some
cases. The case is the following: deleting the same stack (with resources)
n times in series. This can happen if deleting the stack takes a long time
and the user sends a second delete request.
The orchestration engine behavior is the following:
1) When the first stack-delete command comes to the heat service, it
acquires the stack lock and sends delete requests for the resources to
other clients. At this point the command has not yet deleted the resources
from the heat DB.
2) At that time the second stack-delete command for the same stack comes to
the heat engine. It steals the stack lock and waits 0.2 sec (a hard-coded
constant!) to allow the previous stack-delete command to finish its
operations (of course, the first didn't manage to finish deleting in time).
After that, the engine service starts the deletion again:
 - Request resources from the heat DB (they exist!)
 - Send delete requests to the other clients (the resources no longer exist
because of point 1).
Finally, we have the stack in DELETE_FAILED state because the clients raise
exceptions during the stack delete.
I have some proposals for how to fix it:
p1) Make the waiting time (0.2 sec) configurable. This allows the first
stack-delete to finish before the second command starts deleting. From my
point of view it is just a workaround, because different stacks (and
operations) take different amounts of time.
p2) Try to deny lock stealing while the current thread is executing a
delete. As an option, we could make the second thread wait for the first if
the stack is being deleted, but that does not seem possible with the
current solution.
p3) Just leave it as it is. IMO, the last resort.
Do you have any other proposals for how to manage such cases? Perhaps a
more proper solution exists.

Thank You,
Kairat Kushaev
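The combination of a configurable grace period and denying the steal during a delete can be sketched as a toy lock. Class and method names here are hypothetical; this is not Heat's real StackLock implementation, just an illustration of the two proposals:

```python
import threading
import time

# p1: the steal grace period becomes a parameter instead of a hard-coded
# 0.2 s.  p2: a duplicate DELETE waits for the in-flight delete to finish
# instead of stealing the lock and re-deleting resources already gone.
class StackLock:
    def __init__(self, steal_wait=0.2):          # p1: configurable
        self.steal_wait = steal_wait
        self._lock = threading.Lock()
        self._action = None

    def try_acquire(self, action):
        if self._lock.acquire(blocking=False):
            self._action = action
            return "acquired"
        if self._action == "DELETE" and action == "DELETE":
            # p2: duplicate delete -> block until the first one finishes,
            # then take over instead of re-running the delete concurrently.
            self._lock.acquire()
            self._action = action
            return "waited"
        # Existing behaviour: give the holder a grace period, then tell
        # the caller to go ahead and steal the lock.
        time.sleep(self.steal_wait)
        return "steal"

    def release(self):
        self._action = None
        self._lock.release()

lock = StackLock(steal_wait=0.01)
assert lock.try_acquire("CREATE") == "acquired"
assert lock.try_acquire("DELETE") == "steal"    # holder is not deleting
lock.release()
assert lock.try_acquire("DELETE") == "acquired"
```

The design point is that only the duplicate-delete case changes behaviour; other operations keep the existing steal-after-grace-period semantics.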


[openstack-dev] [Fuel] Additional user account in the OpenStack for fetching OpenStack workloads

2015-02-10 Thread Alexander Kislitsky
Folks,

We are collecting OpenStack workloads stats. For authentication in keystone
we are using the admin user credentials from Nailgun. The credentials can be
changed directly in OpenStack, and then we would lose the ability to fetch
the information.

This issue can be fixed by creating an additional user account:

   1. I propose to generate additional user credentials after the master node
   is installed and store them in the master_node_settings table in Nailgun.
   2. Add an abstraction layer into
   
https://github.com/stackforge/fuel-web/blob/master/nailgun/nailgun/statistics/utils.py#L47
   for creating the additional user in OpenStack if it doesn't exist.

But this additional user could be useful for other purposes, and maybe we
should save the credentials in another place (settings.yaml for example). And
maybe creation of the additional user should be implemented outside of the
stats collecting feature, and maybe outside of Nailgun.

Please share your thoughts on this.
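Step 2 amounts to an idempotent "get or create" for the stats user. A hedged sketch follows; `users` is duck-typed after a users manager with list()/create() methods, and the InMemoryUsers stub, the helper name, and the default credentials are all made up for illustration — none of this is existing Nailgun code:

```python
# In-memory stand-in for a users manager, used only so the sketch runs
# without a real keystone endpoint.
class InMemoryUsers:
    def __init__(self):
        self._users = []

    def list(self):
        return list(self._users)

    def create(self, name, password):
        user = {"name": name, "password": password}
        self._users.append(user)
        return user

def get_or_create_stats_user(users, name="fuel_stats_user", password="secret"):
    # Reuse the account if a previous run already provisioned it,
    # otherwise create it once.
    for user in users.list():
        if user["name"] == name:
            return user
    return users.create(name=name, password=password)

users = InMemoryUsers()
first = get_or_create_stats_user(users)
second = get_or_create_stats_user(users)
assert first is second and len(users.list()) == 1   # created exactly once
```

Making the helper idempotent matters here: the collector can call it on every run without caring whether the account already exists.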


Re: [openstack-dev] [manila] two level share scheduling

2015-02-10 Thread Valeriy Ponomaryov
Hello Jason,

On Tue, Feb 10, 2015 at 10:07 AM, Jason Bishop jason.bis...@gmail.com
wrote:

 When a share is created (from scratch), the manila scheduler identifies a
 share server from its list of backends and makes an api call to
 create_share method in the appropriate driver.  The driver executes the
 required steps and returns the export_location which is then written to the
 database.

That is not a correct description of the current approach. The scheduler
handles only capabilities and extra specs; there is no logic for filtering
based on share servers at the moment.
Correct would be the following:
the scheduler (manila-scheduler) chooses a host, then sends a create
share request to the chosen manila-share service, which handles all the
stuff related to share servers based on the share driver logic.

On Tue, Feb 10, 2015 at 10:07 AM, Jason Bishop jason.bis...@gmail.com
 wrote:

 Proposed scheme:

 The proposal is simple in concept.  Instead of the driver
 (GenericShareDriver for example) returning both share server ip address and
 path in share export_location, only the path is returned and saved in the
 database.  The binding of the share server ip address is only determined at
 share mount time.  In practical terms this means share server is determined
 by an api call to the driver when _get_shares is called.  The driver would
 then have the option of determining which IP address from its basket of
 addresses to return.  In this way, each share mount event presents an
 opportunity for the NFS traffic to be balanced over all available network
 endpoints.

This is specific to the GenericShareDriver: the mentioned public IP address
is used once, to combine the export_location from the path and this IP. Other
share drivers do not store it and are not forced to do so at all. For example,
the share driver for NetApp Clustered Data OnTap stores only one specific
piece of information, the name of the vserver; the IP address is taken each time
via the backend's API.

It is true that we currently have the possibility to store only one export location.
I agree that it would be useful to have more than one export_location.
So, the idea of having multiple export_locations is good.


On Tue, Feb 10, 2015 at 10:07 AM, Jason Bishop jason.bis...@gmail.com
 wrote:

 I see following cons:

o slow manila list performance

o very slow manila list performance if all share drivers are busy doing
 long operations such as create/delete share

First of all, the manila-api service knows nothing about share drivers or
backends; those are the concern of a different service/process, manila-share, and
manila-api uses the DB for getting information.
So you simply cannot ask share drivers via the list API call. The API either
reads the DB and returns something, or sends some RPC, returns some data, and
does not expect a result from that RPC.
If you want to change IP addresses, then you need to update the DB with them.
Hence, it looks like this requires a periodic task that does it
continuously.

I prefer to have more than one export location and allow users to choose
any of them. I also assume the possibility that IP addresses simply change; in
that case we should allow export locations to be updated.

And second, if we implement multiple export locations for shares, it is better
not to return them within the list API response and to do so only within get
requests.
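The list-vs-get distinction proposed above can be sketched as follows. This is a minimal illustration with assumed data shapes (a plain dict standing in for the share DB record), not actual Manila code:

```python
# Hypothetical sketch (not actual Manila code): expose multiple export
# locations only in the "show" (detail/get) response, not in "list".
def view_share(share, detail=False):
    """Build an API view of a share record.

    `share` is assumed to be a dict with an `export_locations` list,
    e.g. one path per share-server IP address.
    """
    view = {
        'id': share['id'],
        'name': share['name'],
        'status': share['status'],
    }
    if detail:
        # Expose every export location so clients can pick any endpoint.
        view['export_locations'] = list(share.get('export_locations', []))
    return view


share = {
    'id': 'share-1',
    'name': 'myshare',
    'status': 'available',
    'export_locations': ['10.0.0.5:/exports/share-1',
                         '10.0.0.6:/exports/share-1'],
}
print(view_share(share))               # list view: no export locations
print(view_share(share, detail=True))  # get view: all endpoints
```

This keeps list responses small while still letting a client balance its mounts across all available endpoints.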

Valeriy
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stable] juno is fubar in the gate

2015-02-10 Thread Ihar Hrachyshka
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Hi,

my name was called out here, so I think I need to present my thoughts
on the matter. :) And sorry for writing too many words below.

===

As a disclaimer, I was not the one to support long support term for
stable branches. On the previous summit, I spoke up on sessions about
stable branches to avoid raising the bar even more, and even for
lowering it. One of my arguments against long stable branch support
term is that with 6 month release cycle in mind, we end up with 3, 4,
5, ... stable branches to support in parallel, and it's just too much
to take for anyone. Unfortunately, the session decided to keep 15
months support cycle for Juno.

I really believe we should support one stable release for most of
our time, leaving a ~1 month overlap right after the latest major
release (probably till the first minor .1 update).

That being said, I don't support the way it's proposed here to just
drop a stable branch due to some intermittent issues with it, instead
of fixing it, and the process around it. We made a public statement on
support term, and it would be irresponsible to break it.

===

I also disagree that stable-maint team does not step in to fixing the
issues. There are people who are actively tracking the branches and
doing appropriate fixes here and there. I'm sorry to hear that some of
you still need to step in from time to time to fix stable branches,
but I want to assure you you're not alone in suffering through it.

There were changes in how the team works recently:
- - we actually care about periodic jobs that report issues in project
trees on daily basis;
- - we have a common etherpad to track current stable branch state;
- - we have champions assigned to each of stable branches;
- - we have a generally better fix cycle for issues that arise in stable
branches.

We still have issues to solve; for example, the way we handle stable
branch freezes seems to be not optimal. We should also be more
proactive in capping library versions right after a new release (I'll try to
follow up on that after the current issues are solved). I think we
should also reduce the support cycle, especially since there seems to be
little interest from d/s consumers in working collaboratively on those
branches.

===

Part of the problem with stable branches is that we still run them
against bleeding edge. It's true for both external libraries (recent
eventlet, boto issues) and our own projects (clients, oslo libraries,
tempest). This should not be the case, and I'm glad there are people
who are working on pinning pypi libraries on stable branches. Oslo
project now also understand that they should maintain stable branches
for all their libraries (it was not generally the case half a year ago).

I know that sometimes we attempt branchlessness, for it's easier
to live without backports, and with due care, it usually works. But it
still fails sometimes, because in reality our backwards compatibility
claims are more of a 'best effort' kind, and people do make mistakes.

So I hope that once the current issues are solved, we move forward
with pinning all the version for pypi libraries.

I'm also open to help with the effort, though I need to admit that I'm
not involved in current 'qa' program, and so I have no deep
understanding on how it all actually works.

===

I see several problems shown by the gate issues. First, we obviously
lack on communication between 'qa' and 'stable-maint' people. F.e. QA
people were not aware of stable-tracker etherpad maintained by
stable-maint team; while I personally was not aware of who is working
on unwedging the gate, or even whether the issue was critical or even
existed (the first report I got from periodic checks that showed the
issues came in today at night, my local time).

I've updated stable-branch wiki page with contacts of stable branch
champions, so that next time you know whom to reach.

I'd also like to say that you can always reach me for any stable
branch related issues no matter whether it's Icehouse or Juno. The
quicker we get a ping from affected parties, the quicker we start
looking into it.

I'll add #openstack-qa channel into my auto-connect list, I hope it
will help to raise awareness about gate issues on my side, and about
stable process on yours. You're also invited to #openstack-stable where
people show up (sometimes).

===

I would also like to note that stable-maint team lacks a lot of ACLs
to actually resolve any of those issues. Specifically,

- - we don't have +2 for devstack, so we can't merge several patches
that are ready to be pushed to unwedge the branch, specifically:
- - https://review.openstack.org/#/c/154252/
- - https://review.openstack.org/#/c/154217/;

- - we don't have +2 for tempest since it's branch-less, and we are not
cores there;

- - we don't have +2 at requirements/master that sometimes requires
ninja merges in case a gate failure is introduced by a new external
library release.

What we currently 

Re: [openstack-dev] [heat] Repeating stack-delete many times

2015-02-10 Thread Kairat Kushaev
Sorry for the flood,
I forgot p4:
Prohibit stack deletion if the current stack (state, status) = (DELETE,
IN_PROGRESS).
Raise a NotSupported exception in the heat engine. This is possible because the
stack state will be updated before deleting.
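Proposal p4 could be sketched roughly like this. The dict-based stack state and the exception name are assumptions for illustration, not Heat's actual engine code:

```python
# Hypothetical sketch of proposal p4 (not actual Heat code): refuse a
# second delete while a stack is already DELETE IN_PROGRESS.
class ActionInProgress(Exception):
    """Raised when an operation conflicts with one already running."""


def delete_stack(stack):
    # `stack` is assumed to be a dict carrying 'action' and 'status'.
    # Since the state is updated before deletion starts, a repeated
    # request can be detected and rejected here.
    if (stack['action'], stack['status']) == ('DELETE', 'IN_PROGRESS'):
        raise ActionInProgress('stack delete is already in progress')
    stack['action'], stack['status'] = 'DELETE', 'IN_PROGRESS'
    # ... resource deletion would proceed from here ...


stack = {'action': 'CREATE', 'status': 'COMPLETE'}
delete_stack(stack)          # first request: accepted
try:
    delete_stack(stack)      # second request: rejected
except ActionInProgress as e:
    print(e)
```

The point of the sketch is that the check is cheap and race-free as long as the state transition happens before any resource operations begin.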

On Tue, Feb 10, 2015 at 2:04 PM, Kairat Kushaev kkush...@mirantis.com
wrote:

 Hi all,
 During the analysis of the following bug:
 https://bugs.launchpad.net/heat/+bug/1418878
 i figured out that orchestration engine doesn't work properly in some
 cases.
 The case is the following:
 trying to delete the same stack with resources n times in series.
 It might happen if the stack deleting takes much time and a user is sending
 the second delete request again.
 Orchestration engine behavior is the following:
 1) When first stack-delete command comes to heat service
 it acquires the stack lock and sends delete request for resources
 to other clients.
 Unfortunately, the command does not start to delete resources from heat
 db.
 2) At that time second stack-delete command for the same stack
 comes to heat engine. It steals the stack lock, waits 0.2 (hard-coded
 constant!)
 sec to allow previous stack-delete command finish the operations (of
 course,
 the first didn't manage to finish deleting on time). After that engine
 service starts
 the deleting again:
  - Request resources from heat DB (They exist!)
  - Send requests for delete to other clients (They do not exist
 because of
 point 1).
 Finally, we have stack in DELETE_FAILED state because the clients raise
 exceptions during stack delete.
 I have some proposals how to fix it:
 p1) Make the waiting time (0.2 sec) configurable. It allows stack-delete
 ops to finish before the second command starts deleting. From my point of
 view, it is just a workaround, because different stacks (and operations)
 take different amounts of time.
 p2) Try to deny lock stealing if the current thread is executing a delete.
 As an option, we could wait for the other thread if the stack is deleting,
 but it seems that is not possible with the current solution.
 p3) Just leave it as it is. IMO, the last resort.
 Do you have any other proposals for how to manage such cases?
 Perhaps there exists a more proper solution.

 Thank You,
 Kairat Kushaev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Fuel][Fuel-Library] Manifests and tasks for granular deployment

2015-02-10 Thread Aleksandr Didenko
Hi,
General info
Detailed documentation is available in the latest spec [1] (see [2] for
HTML format). You can also check blue-print [3]  and etherpad how-to [4]
for more info.

Fuel-library granularization status
As you may know, we're using granular task based deployment in master
branch already. Currently we have a set of separate manifests that are
applied by puppet as separate tasks, they are stored here [5].

According to our implementation plan (step #2, see spec for details) we
moved node roles (controller, compute, cinder, ceph-osd, mongo,
zabbix-server) into separate tasks. So every role is deployed by its own
separate puppet manifest applied as a Fuel task.

We're also going to remove the following files shortly as we no longer
use/need them:

deployment/puppet/osnailyfacter/examples/site.pp
deployment/puppet/osnailyfacter/manifests/cluster_ha.pp
deployment/puppet/osnailyfacter/manifests/cluster_simple.pp
deployment/puppet/osnailyfacter/modular/legacy.pp

So here are few examples, just to illustrate current deployment process:

1) Controller (primary and non-primary) role:
puppet apply /etc/puppet/modules/osnailyfacter/modular/hiera.pp
puppet apply /etc/puppet/modules/osnailyfacter/modular/globals.pp
puppet apply /etc/puppet/modules/osnailyfacter/modular/logging.pp
puppet apply /etc/puppet/modules/osnailyfacter/modular/netconfig.pp
puppet apply /etc/puppet/modules/osnailyfacter/modular/firewall.pp
puppet apply /etc/puppet/modules/osnailyfacter/modular/hosts.pp
puppet apply /etc/puppet/modules/osnailyfacter/modular/tools.pp
puppet apply /etc/puppet/modules/osnailyfacter/modular/controller.pp

2) Compute role:
puppet apply /etc/puppet/modules/osnailyfacter/modular/hiera.pp
puppet apply /etc/puppet/modules/osnailyfacter/modular/globals.pp
puppet apply /etc/puppet/modules/osnailyfacter/modular/logging.pp
puppet apply /etc/puppet/modules/osnailyfacter/modular/netconfig.pp
puppet apply /etc/puppet/modules/osnailyfacter/modular/firewall.pp
puppet apply /etc/puppet/modules/osnailyfacter/modular/hosts.pp
puppet apply /etc/puppet/modules/osnailyfacter/modular/tools.pp
puppet apply /etc/puppet/modules/osnailyfacter/modular/compute.pp

Hiera
Also we're switching from parseyaml() function and global $::fuel_settings
hash to Hiera. All the configuration data should be pulled into manifests
via hiera() only.  Hiera is configured by the first two tasks:

/etc/puppet/modules/osnailyfacter/modular/hiera.pp - configures Hiera
/etc/puppet/modules/osnailyfacter/modular/globals.pp - creates
/etc/hiera/globals.yaml file with node-specific calculated data that should
help to avoid code duplication in manifests

For example: globals.yaml contains *internal_address* for the node and you
can pull it via hiera:

root@node-1:~# hiera internal_address
192.168.0.3

Without globals.yaml you would have to search for internal_address in nodes
hash in astute.yaml. You can see current modular manifests for more
examples.
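The first-match lookup that Hiera performs over globals.yaml and astute.yaml can be mimicked with a short sketch. The hierarchy here is plain Python dicts standing in for the YAML data sources; this is not the real Hiera implementation:

```python
# Minimal sketch of Hiera-style lookup: return the value from the first
# data source in the hierarchy that defines the key.
def hiera(key, hierarchy):
    """`hierarchy` is an ordered list of dicts (highest priority first)."""
    for source in hierarchy:
        if key in source:
            return source[key]
    raise KeyError(key)


# Hypothetical contents standing in for /etc/hiera/globals.yaml and
# the nodes hash from astute.yaml.
globals_yaml = {'internal_address': '192.168.0.3'}
astute_yaml = {'deployment_mode': 'ha', 'internal_address': '10.0.0.9'}

print(hiera('internal_address', [globals_yaml, astute_yaml]))  # 192.168.0.3
```

Because globals.yaml sits above astute.yaml in the hierarchy, the pre-calculated node-specific value wins, which is exactly what lets manifests avoid digging through the nodes hash.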

CI updates
We also have some changes in fuel-library CI related to granular
deployment. Since our deployment process depends on tasks, which are
shipped with the fuel-library repo, we've added a new CI test job [6] that runs
basic schema validation and makes sure we have an acyclic task graph.
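The acyclic-graph check such a job performs can be sketched with a standard depth-first search. The task format here (an id plus a requires list) is an assumption for illustration, not the real Fuel task schema:

```python
# Minimal sketch of cycle detection over a task dependency graph using
# three-color DFS: a GRAY->GRAY edge is a back edge, i.e. a cycle.
def find_cycle(tasks):
    """Return True if the task dependency graph contains a cycle."""
    graph = {t['id']: t.get('requires', []) for t in tasks}
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {node: WHITE for node in graph}

    def visit(node):
        color[node] = GRAY
        for dep in graph.get(node, []):
            if color.get(dep) == GRAY:                 # back edge -> cycle
                return True
            if color.get(dep) == WHITE and visit(dep):
                return True
        color[node] = BLACK
        return False

    return any(visit(n) for n in graph if color[n] == WHITE)


tasks = [{'id': 'globals', 'requires': ['hiera']},
         {'id': 'hiera'},
         {'id': 'netconfig', 'requires': ['globals']}]
print(find_cycle(tasks))  # False: graph is acyclic
```

A CI job only needs this boolean answer; reporting which tasks form the cycle is a straightforward extension of the same traversal.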


[1] https://review.openstack.org/147591
[2]
http://docs-draft.openstack.org/91/147591/13/check/gate-fuel-specs-docs/e58793b//doc/build/html/specs/6.1/fuel-library-modularization.html
[3] https://blueprints.launchpad.net/fuel/+spec/fuel-library-modularization
[4] https://etherpad.openstack.org/p/fuel-library-modularization
[5]
https://github.com/stackforge/fuel-library/tree/master/deployment/puppet/osnailyfacter/modular
[6] https://fuel-jenkins.mirantis.com/job/fuellib_tasks_graph_check

--
Regards,
Aleksandr Didenko
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][oslo.db][nova] TL; DR Things everybody should know about Galera

2015-02-10 Thread Attila Fazekas




- Original Message -
 From: Jay Pipes jaypi...@gmail.com
 To: Attila Fazekas afaze...@redhat.com, OpenStack Development Mailing 
 List (not for usage questions)
 openstack-dev@lists.openstack.org
 Cc: Pavel Kholkin pkhol...@mirantis.com
 Sent: Monday, February 9, 2015 7:15:10 PM
 Subject: Re: [openstack-dev] [all][oslo.db][nova] TL; DR Things everybody 
 should know about Galera
 
 On 02/09/2015 01:02 PM, Attila Fazekas wrote:
  I do not see why not to use `FOR UPDATE` even with multi-writer, or
  whether the retry/swap way really solves anything here.
 snip
  Am I missed something ?
 
 Yes. Galera does not replicate the (internal to InnoDB) row-level locks
 that are needed to support SELECT FOR UPDATE statements across multiple
 cluster nodes.
 

Galera does not replicate the row-level locks created by UPDATE/INSERT ...
So what should I do with the UPDATE?

Why should I handle FOR UPDATE differently?

 https://groups.google.com/forum/#!msg/codership-team/Au1jVFKQv8o/QYV_Z_t5YAEJ
 
 Best,
 -jay
 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Finding people to work on the EC2 API in Nova (M Ranga Swami Reddy)

2015-02-10 Thread Jayachandragupta Kasiviswa
Hi Swami,

I've some exposure to the Amazon API for S3, and I'm willing to work on the EC2 API.
Please let me know how to contribute to this.

-Viswanath
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Finding people to work on the EC2 API in Nova (M Ranga Swami Reddy)

2015-02-10 Thread M Ranga Swami Reddy
Thank you very much for joining us in the EC2 API sub team. I will
share the To Do details soon.

Thanks
Swami

On Tue, Feb 10, 2015 at 6:57 PM, Jayachandragupta Kasiviswa
wingsoffire14...@gmail.com wrote:
 Hi Swami,

 I've some exposure to the Amazon API for S3, and I'm willing to work on the EC2 API.
 Please let me know how to contribute to this.

 -Viswanath

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] Repeating stack-delete many times

2015-02-10 Thread Kairat Kushaev
Thanks for explanation, Steven.
Will try to figure out why it is not working in nova.

On Tue, Feb 10, 2015 at 4:04 PM, Steven Hardy sha...@redhat.com wrote:

 On Tue, Feb 10, 2015 at 03:04:39PM +0400, Kairat Kushaev wrote:
 Hi all,
 During the analysis of the following bug:
 https://bugs.launchpad.net/heat/+bug/1418878
 i figured out that orchestration engine doesn't work properly in some
 cases.
 The case is the following:
 trying to delete the same stack with resources n times in series.
 It might happen if the stack deleting takes much time and a user is
 sending
 the second delete request again.
 Orchestration engine behavior is the following:
 1) When first stack-delete command comes to heat service
 it acquires the stack lock and sends delete request for resources
 to other clients.
 Unfortunately, the command does not start to delete resources from
 heat
 db.
 2) At that time second stack-delete command for the same stack
 comes to heat engine. It steals the stack lock, waits 0.2 (hard-coded
 constant!)
 sec to allow previous stack-delete command finish the operations (of
 course,
 the first didn't manage to finish deleting on time). After that engine
 service starts
 the deleting again:
     - Request resources from heat DB (They exist!)
     - Send requests for delete to other clients (They do not exist
       because of point 1).

 This is expected, and the reason for the following error path in most
 resource handle_delete paths is to ignore any do not exist errors:

   self.client_plugin().ignore_not_found(e)

 Finally, we have stack in DELETE_FAILED state because the clients
 raise
 exceptions during stack delete.

 This is the bug, the exception which is raised isn't getting ignored by the
 nova client plugin, which by default only ignores NotFound exceptions:


 https://github.com/openstack/heat/blob/master/heat/engine/clients/os/nova.py#L85

 In this case, I think the problem is you're getting a Conflict exception
 when attempting to re-delete the NovaFloatingIpAssociation:


 https://github.com/openstack/heat/blob/master/heat/engine/resources/nova_floatingip.py#L148

 So, I think this is probably a bug specific to NovaFloatingIpAssociation
 rather than a problem we need to fix accross all resources?

 I'd probably suggest we either add another except clause which catches (and
 ignores) this situation, or look at whether novaclient is raising the wrong
 exception type, as NotFound would appear to be a saner error than
 Conflict when trying to delete a non-existent association?
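The extra except clause might look roughly like the following. The exception classes are local stand-ins for novaclient's NotFound/Conflict, and ignore_not_found mirrors the client-plugin helper linked above; this is a sketch, not Heat's actual resource code:

```python
# Hypothetical sketch: tolerate both NotFound and Conflict when
# re-deleting a floating IP association.
class NotFound(Exception):
    pass


class Conflict(Exception):
    pass


def ignore_not_found(exc):
    # Re-raise anything that is not a NotFound.
    if not isinstance(exc, NotFound):
        raise exc


def handle_delete(disassociate):
    try:
        disassociate()
    except NotFound as e:
        ignore_not_found(e)  # association already gone: nothing to do
    except Conflict:
        # A re-delete raced with an in-flight delete; treat the
        # association as already removed instead of failing the stack.
        pass


def flaky_disassociate():
    raise Conflict('floating IP association is being removed')


handle_delete(flaky_disassociate)  # no exception escapes
```

Whether the Conflict is swallowed here or novaclient is changed to raise NotFound, the user-visible result is the same: a repeated stack-delete no longer ends in DELETE_FAILED.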

 Steve

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] EC2 API Sub Team Weekly Meeting

2015-02-10 Thread Swami Reddy (via Doodle)
Hi there,

Swami Reddy (swamire...@gmail.com) invites you to participate in the
Doodle poll EC2 API Sub Team Weekly Meeting.

Participate now
https://doodle.com/8a6fnv6yxp7tgiw9?tmail=poll_invitecontact_participant_invitationtlink=pollbtn

What is Doodle? Doodle is a web service that helps Swami Reddy to find
a suitable date for meeting with a group of people. Learn more about
how Doodle works.
(https://doodle.com/main.html?tlink=checkOutLinktmail=poll_invitecontact_participant_invitation)

--

You have received this e-mail because Swami Reddy has invited you to
participate in the Doodle poll EC2 API Sub Team Weekly Meeting.



Doodle AG, Werdstrasse 21, 8021 Zürich
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][oslo.db][nova] TL; DR Things everybody should know about Galera

2015-02-10 Thread Jay Pipes

On 02/10/2015 06:28 AM, Attila Fazekas wrote:

- Original Message -

From: Jay Pipes jaypi...@gmail.com
To: Attila Fazekas afaze...@redhat.com, OpenStack Development Mailing List (not 
for usage questions)
openstack-dev@lists.openstack.org
Cc: Pavel Kholkin pkhol...@mirantis.com
Sent: Monday, February 9, 2015 7:15:10 PM
Subject: Re: [openstack-dev] [all][oslo.db][nova] TL; DR Things everybody 
should know about Galera

On 02/09/2015 01:02 PM, Attila Fazekas wrote:

I do not see why not to use `FOR UPDATE` even with multi-writer, or
whether the retry/swap way really solves anything here.

snip

Am I missed something ?


Yes. Galera does not replicate the (internal to InnoDB) row-level locks
that are needed to support SELECT FOR UPDATE statements across multiple
cluster nodes.


Galera does not replicate the row-level locks created by UPDATE/INSERT ...
So what should I do with the UPDATE?


No, Galera replicates the write sets (binary log segments) for 
UPDATE/INSERT/DELETE statements -- the things that actually 
change/add/remove records in DB tables. No locks are replicated, ever.



Why should I handle the FOR UPDATE differently?


Because SELECT FOR UPDATE doesn't change any rows, and therefore does 
not trigger any replication event in Galera.


See here:

http://www.percona.com/blog/2014/09/11/openstack-users-shed-light-on-percona-xtradb-cluster-deadlock-issues/
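The "retry/swap" approach Attila asks about at the start of the thread is a compare-and-swap UPDATE wrapped in a retry loop: instead of SELECT ... FOR UPDATE, the UPDATE itself carries a WHERE clause that only matches if the row still holds the value we last read. A minimal sketch of the mechanics, using sqlite purely for illustration (this is not Nova or oslo.db code):

```python
# Compare-and-swap with retry: the UPDATE succeeds only if `free` still
# equals the value we read; a lost race is detected via rowcount == 0.
import sqlite3


def reserve(conn, node_id, old_free, amount, retries=3):
    """Try to decrement `free` by `amount`; retry on a lost race."""
    for _ in range(retries):
        if old_free < amount:
            return False                        # not enough capacity
        cur = conn.execute(
            "UPDATE nodes SET free = free - ? "
            "WHERE id = ? AND free = ?",        # compare-and-swap guard
            (amount, node_id, old_free))
        if cur.rowcount == 1:                   # our view was current
            conn.commit()
            return True
        # Lost the race: re-read and retry with the fresh value.
        row = conn.execute("SELECT free FROM nodes WHERE id = ?",
                           (node_id,)).fetchone()
        if row is None:
            return False
        old_free = row[0]
    return False


conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE nodes (id INTEGER PRIMARY KEY, free INTEGER)")
conn.execute("INSERT INTO nodes VALUES (1, 10)")
print(reserve(conn, 1, 10, 4))  # True; free is now 6
```

Because the guarded UPDATE is a real write, it generates a write set that Galera replicates and certifies, so a conflicting writer on another node is detected at commit time rather than silently ignored, which is exactly what SELECT FOR UPDATE fails to provide here.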

-jay


https://groups.google.com/forum/#!msg/codership-team/Au1jVFKQv8o/QYV_Z_t5YAEJ

Best,
-jay



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Infra] [infra] Nominating Elizabeth K. Joseph for infra-core and root

2015-02-10 Thread Jeremy Stanley
On 2015-02-10 10:59:57 -0800 (-0800), James E. Blair wrote:
 And she is now a member of infra-core!  Thanks again!

Excellent--I have a list of stuff that... er, I mean welcome aboard!
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] backport fixes to old branches

2015-02-10 Thread Matt Riedemann



On 2/10/2015 1:08 PM, Matt Riedemann wrote:



On 8/17/2014 7:58 PM, Osanai, Hisashi wrote:


On Friday, August 15, 2014 8:48 PM, Ihar Hrachyshka wrote:

There was an issue with jenkins running py33 checks for stable
ceilometer branches, which is wrong. Should be fixed now.


Thank you for your response.
I couldn't solve this by myself but Dina Belova and Julien Danjou
solved this issue with:
https://review.openstack.org/#/c/113842/

Best Regards,
Hisashi Osanai

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



The ceilometer py33 job fails now with the same error on master as of 2/8:

https://bugs.launchpad.net/ceilometer/+bug/1420433



My mistake, it's stable/juno only, so something regressed in the project 
filtering there.


http://goo.gl/FOAZwB

--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone] Proposing Marek Denis for the Keystone Core Team

2015-02-10 Thread Dolph Mathews
+1

On Tue, Feb 10, 2015 at 11:51 AM, Morgan Fainberg morgan.fainb...@gmail.com
 wrote:

 Hi everyone!

 I wanted to propose Marek Denis (marekd on IRC) as a new member of the
 Keystone Core team. Marek has been instrumental in the implementation of
 Federated Identity. His work on Keystone and first hand knowledge of the
 issues with extremely large OpenStack deployments has been a significant
 asset to the development team. Not only is Marek a strong developer working
 on key features being introduced to Keystone but has continued to set a
 high bar for any code being introduced / proposed against Keystone. I know
 that the entire team really values Marek’s opinion on what is going in to
 Keystone.

 Please respond with a +1 or -1 for adding Marek to the Keystone core team.
 This poll will remain open until Feb 13.

 --
 Morgan Fainberg

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] EOL and Stable Contributions (was Juno is flubber at the gate)

2015-02-10 Thread Kevin Bringard (kevinbri)

 On Feb 10, 2015, at 9:21 AM, Dean Troyer dtro...@gmail.com wrote:
 
 On Tue, Feb 10, 2015 at 9:20 AM, Kevin Bringard (kevinbri) 
 kevin...@cisco.com wrote:
 ATC is only being given to folks committing to the current branch 
 (https://ask.openstack.org/en/question/45531/atc-pass-for-the-openstack-summit/).
  
 Secondly, it's difficult to get stack-analytics credit for back ports, as the 
 preferred method is to cherry pick the code, and that keeps the original 
 author's name.
  
 My fear is that we're going in a direction where trunk is the sole focus and 
 we're subsequently going to lose the support of the majority of the operators 
 and enterprises at which point we'll be a fun research project, but little 
 more.
 
 [I've cherry-picked above what I think are the main points here... not 
 directed at you Kevin.]

No offense taken! I just wanted to start a conversation about it, so mission 
accomplished :-D

 
 This is not Somebody Else's Problem.
 
 Stable maintenance is Not Much Fun, no question.  Those who have demanded the 
 loudest that we (the development community) maintain these stable branches 
 need to be the one supporting it the most. (I have no idea how that matches 
 up today, so I'm not pointing out anyone in particular.) 
 
 * ATC credit should be given, stable branch maintenance is a contribution to 
 the project, no question.

100% agree, which really is my entire point. 

I also noticed that since the original message went out, it's been clarified 
that current release is meant to mean any OpenStack contributions during the 
current release cycle (paraphrase based on my reading of the clarification), 
which would include stable branches.  I'm personally OK with this. I don't 
think the foundation should cover the cost of anyone who's contributed even a 
single bug fix in the last year (or whatever time period). Mostly I just want 
to make sure we're not scaring people away from working on stable branches. 
Like we've stipulated, maintenance isn't a lot of fun nor is it high profile. 
If we don't give people an incentive to do it, it's entirely likely they'll 
just say screw it.

 * I have a bit of a problem with stack-analytics being an issue partially 
 because that is not what should be driving corporate contributions and 
 resource allocation.  But it does.  Relying on a system with known anomalies 
 like the cherry-pick problem gets imperfect results.

Also agree. The only reason I mention it is that, as you stated, a lot of 
companies use it as a metric, and it does matter. If you want to get an 
OpenStack related job, chances are if you don't show up in Stack Analytics, 
you're going to have a harder time of it.

Jeremy made a good point that we should work that out with SA, which is 
unrelated to OpenStack specifically. Perhaps it's just a matter of better 
documenting how to properly commit to stable branches if you want it to be 
tracked.

 * The vast majority of the OpenStack contributors are paid to do their work 
 by a (most likely) Foundation member company.  These companies choose how to 
 allocate their resources, some do quite well at scratching their particular 
 itches, some just make a lot of noise.  If fun is what drives them to select 
 where they apply resources, then they will reap what they sow.

Again, I completely agree. But, as we've seen, companies like to tout what 
they're doing. I personally do a lot of work on the stable branches, upstream 
when I can, but a lot of times I'm doing work on stuff that's been EOL or for 
which the issue I'm working on isn't considered stability work. In those 
cases my work never goes upstream. Combine this with the SA issues, and that's 
a lot of out of band work.

 
 The voice of operators/users/deployers in this conversation should be 
 reflected through the entity that they are paying to provide operational 
 cloud services.  It's those directly consuming the code from openstack.org 
 that are responsible here because they are the ones directly making money by 
 either providing public/private cloud services, or reselling a productized 
 OpenStack or providing consulting services and the like.
 
 This should not stop users/operators from contributing information, 
 requirements or code in any way.  But if they have to go around their vendor 
 then that vendor has failed them.

This, I disagree with. Not only does it make the assumption that anyone running 
OS for profit is either chasing trunk or paying a vendor, but it creates a 
potential fragmentation nightmare and chokes down the number of entities 
willing to invest into the project.

If I'm paying vendor X for operational cloud services, and it's stated that 
they're who should be voicing my interests in the community 1) What happens 
when my interests and that of another customer of vendor X conflict? But 2) Why 
should I invest in the community? I'm paying a vendor, it's their problem. In 
fact, why do I even have operators who could potentially contribute 

Re: [openstack-dev] [Keystone] Proposing Marek Denis for the Keystone Core Team

2015-02-10 Thread Brant Knudson
+1

On Tue, Feb 10, 2015 at 11:51 AM, Morgan Fainberg morgan.fainb...@gmail.com
 wrote:

 Hi everyone!

 I wanted to propose Marek Denis (marekd on IRC) as a new member of the
 Keystone Core team. Marek has been instrumental in the implementation of
 Federated Identity. His work on Keystone and first hand knowledge of the
 issues with extremely large OpenStack deployments has been a significant
 asset to the development team. Not only is Marek a strong developer working
 on key features being introduced to Keystone but has continued to set a
 high bar for any code being introduced / proposed against Keystone. I know
 that the entire team really values Marek’s opinion on what is going in to
 Keystone.

 Please respond with a +1 or -1 for adding Marek to the Keystone core team.
 This poll will remain open until Feb 13.

 --
 Morgan Fainberg

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] backport fixes to old branches

2015-02-10 Thread Matt Riedemann



On 8/17/2014 7:58 PM, Osanai, Hisashi wrote:


On Friday, August 15, 2014 8:48 PM, Ihar Hrachyshka wrote:

There was an issue with jenkins running py33 checks for stable
ceilometer branches, which is wrong. Should be fixed now.


Thank you for your response.
I couldn't solve this by myself but Dina Belova and Julien Danjou
solved this issue with:
https://review.openstack.org/#/c/113842/

Best Regards,
Hisashi Osanai

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



The ceilometer py33 job fails now with the same error on master as of 2/8:

https://bugs.launchpad.net/ceilometer/+bug/1420433

--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra] Nominating Elizabeth K. Joseph for infra-core and root

2015-02-10 Thread Elizabeth K. Joseph
On Tue, Feb 10, 2015 at 10:59 AM, James E. Blair cor...@inaugust.com wrote:
 cor...@inaugust.com (James E. Blair) writes:

 Hi,

 The Infrastructure program has a unique three-tier team structure:
 contributors (that's all of us!), core members (people with +2 ability
 on infra projects in Gerrit) and root members (people with
 administrative access).  Read all about it here:

   http://ci.openstack.org/project.html#team

 Elizabeth K. Joseph has been reviewing a significant number of infra
 patches for some time now.  She has taken on a number of very large
 projects, including setting up our Git server farm, adding support for
 infra servers running on CentOS, and setting up the Zanata translation
 system (and all of this without shell access to production machines).

 She understands all of our servers, regardless of function, size, or
 operating system.  She has frequently spoken publicly about the unique
 way in which we perform systems administration, articulating what we are
 doing and why in a way that inspires us as much as others.

 Due to her strong systems administration background, I am nominating her
 for both infra-core and infra-root simultaneously.  I expect many of us
 are looking forward to seeing her insight and direction applied with +2s
 but also equally excited for her to be able to troubleshoot things when
 our best-laid plans meet reality.

 Please respond with any comments or concerns.

 Thanks, Elizabeth, for all your work!

 And she is now a member of infra-core!  Thanks again!

Thank you everyone!

-- 
Elizabeth Krumbach Joseph || Lyz || pleia2

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone] Proposing Marek Denis for the Keystone Core Team

2015-02-10 Thread Adam Young

On 02/10/2015 12:51 PM, Morgan Fainberg wrote:

Hi everyone!

I wanted to propose Marek Denis (marekd on IRC) as a new member of the 
Keystone Core team. Marek has been instrumental in the implementation 
of Federated Identity. His work on Keystone and first hand knowledge 
of the issues with extremely large OpenStack deployments has been a 
significant asset to the development team. Not only is Marek a strong 
developer working on key features being introduced to Keystone but has 
continued to set a high bar for any code being introduced / proposed 
against Keystone. I know that the entire team really values Marek’s 
opinion on what is going in to Keystone.


Please respond with a +1 or -1 for adding Marek to the Keystone core 
team. This poll will remain open until Feb 13.


--
Morgan Fainberg


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

+2 +A...
No?
Ah darn, guess I'll have to settle for +1.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone] Proposing Marek Denis for the Keystone Core Team

2015-02-10 Thread Lance Bragstad
+1

On Tue, Feb 10, 2015 at 11:56 AM, David Stanek dsta...@dstanek.com wrote:

 +1

 On Tue, Feb 10, 2015 at 12:51 PM, Morgan Fainberg 
 morgan.fainb...@gmail.com wrote:

 Hi everyone!

 I wanted to propose Marek Denis (marekd on IRC) as a new member of the
 Keystone Core team. Marek has been instrumental in the implementation of
 Federated Identity. His work on Keystone and first hand knowledge of the
 issues with extremely large OpenStack deployments has been a significant
 asset to the development team. Not only is Marek a strong developer working
 on key features being introduced to Keystone but has continued to set a
 high bar for any code being introduced / proposed against Keystone. I know
 that the entire team really values Marek’s opinion on what is going in to
 Keystone.

 Please respond with a +1 or -1 for adding Marek to the Keystone core
 team. This poll will remain open until Feb 13.

 --
 Morgan Fainberg

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 David
 blog: http://www.traceback.org
 twitter: http://twitter.com/dstanek
 www: http://dstanek.com

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone] Proposing Marek Denis for the Keystone Core Team

2015-02-10 Thread David Stanek
+1

On Tue, Feb 10, 2015 at 12:51 PM, Morgan Fainberg morgan.fainb...@gmail.com
 wrote:

 Hi everyone!

 I wanted to propose Marek Denis (marekd on IRC) as a new member of the
 Keystone Core team. Marek has been instrumental in the implementation of
 Federated Identity. His work on Keystone and first hand knowledge of the
 issues with extremely large OpenStack deployments has been a significant
 asset to the development team. Not only is Marek a strong developer working
 on key features being introduced to Keystone but has continued to set a
 high bar for any code being introduced / proposed against Keystone. I know
 that the entire team really values Marek’s opinion on what is going in to
 Keystone.

 Please respond with a +1 or -1 for adding Marek to the Keystone core team.
 This poll will remain open until Feb 13.

 --
 Morgan Fainberg

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
David
blog: http://www.traceback.org
twitter: http://twitter.com/dstanek
www: http://dstanek.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][oslo.db][nova] TL; DR Things everybody should know about Galera

2015-02-10 Thread Jay Pipes

On 02/10/2015 09:47 AM, Matthew Booth wrote:

On 09/02/15 18:15, Jay Pipes wrote:

On 02/09/2015 01:02 PM, Attila Fazekas wrote:

I do not see why not to use `FOR UPDATE` even with multi-writer, or
does the retry/swap way really solve anything here.

snip

Am I missing something?


Yes. Galera does not replicate the (internal to InnoDB) row-level locks
that are needed to support SELECT FOR UPDATE statements across multiple
cluster nodes.

https://groups.google.com/forum/#!msg/codership-team/Au1jVFKQv8o/QYV_Z_t5YAEJ


Is that the right link, Jay? I'm taking your word on the write-intent
locks not being replicated, but that link seems to say the opposite.


This link is better:

http://www.percona.com/blog/2014/09/11/openstack-users-shed-light-on-percona-xtradb-cluster-deadlock-issues/

Specifically the line:

The local record lock held by the started transaction on pxc1 didn’t
play any part in replication or certification (replication happens at
commit time, there was no commit there yet).


-jay
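For reference, the retry/swap (compare-and-swap) approach under discussion can be sketched as follows. This is an illustrative Python example using sqlite3; the table and helper names are hypothetical, not Nova's actual schema or DB API:

```python
import sqlite3

def cas_update(conn, node_id, new_state, max_retries=5):
    """Compare-and-swap: update a row only if it still holds the value we
    read, retrying on a miss.  On Galera this avoids SELECT ... FOR UPDATE,
    because the write conflict is detected at certification (commit) time
    rather than via replicated row locks."""
    for _ in range(max_retries):
        row = conn.execute(
            "SELECT state FROM nodes WHERE id = ?", (node_id,)).fetchone()
        if row is None:
            raise LookupError("node %s not found" % node_id)
        expected = row[0]
        cur = conn.execute(
            "UPDATE nodes SET state = ? WHERE id = ? AND state = ?",
            (new_state, node_id, expected))
        conn.commit()
        if cur.rowcount == 1:  # the swap landed; nobody raced us
            return True
    return False  # lost the race max_retries times in a row

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE nodes (id INTEGER PRIMARY KEY, state TEXT)")
conn.execute("INSERT INTO nodes VALUES (1, 'building')")
conn.commit()
print(cas_update(conn, 1, "active"))  # True
```

The WHERE clause carries the previously read value, so a concurrent writer on another node simply makes the UPDATE match zero rows and the caller retries, instead of blocking on a lock that Galera never replicated.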

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone] Proposing Marek Denis for the Keystone Core Team

2015-02-10 Thread Steve Martinelli
+1

or +1, in this case.

Steve

Joe Savak joe.sa...@rackspace.com wrote on 02/10/2015 01:06:51 PM:

 From: Joe Savak joe.sa...@rackspace.com
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Date: 02/10/2015 01:17 PM
 Subject: Re: [openstack-dev] [Keystone] Proposing Marek Denis for 
 the Keystone Core Team
 
 +1 !!
 
 On Tue, Feb 10, 2015 at 11:51 AM, Morgan Fainberg 
morgan.fainb...@gmail.com
  wrote:
 Hi everyone!
 
 I wanted to propose Marek Denis (marekd on IRC) as a new member of 
 the Keystone Core team. Marek has been instrumental in the 
 implementation of Federated Identity. His work on Keystone and first
 hand knowledge of the issues with extremely large OpenStack 
 deployments has been a significant asset to the development team. 
 Not only is Marek a strong developer working on key features being 
 introduced to Keystone but has continued to set a high bar for any 
 code being introduced / proposed against Keystone. I know that the 
 entire team really values Marek's opinion on what is going in to 
Keystone.
 
 Please respond with a +1 or -1 for adding Marek to the Keystone core
 team. This poll will remain open until Feb 13.
 
 -- 
 Morgan Fainberg
 
 
__
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
__
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone] Proposing Marek Denis for the Keystone Core Team

2015-02-10 Thread Brad Topol
+1!  Marek has been an outstanding Keystone contributor and reviewer!

--Brad


Brad Topol, Ph.D.
IBM Distinguished Engineer
OpenStack
(919) 543-0646
Internet:  bto...@us.ibm.com
Assistant: Kendra Witherspoon (919) 254-0680



From:   David Stanek dsta...@dstanek.com
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date:   02/10/2015 12:58 PM
Subject:Re: [openstack-dev] [Keystone] Proposing Marek Denis for 
the Keystone Core Team



+1

On Tue, Feb 10, 2015 at 12:51 PM, Morgan Fainberg 
morgan.fainb...@gmail.com wrote:
Hi everyone!

I wanted to propose Marek Denis (marekd on IRC) as a new member of the 
Keystone Core team. Marek has been instrumental in the implementation of 
Federated Identity. His work on Keystone and first hand knowledge of the 
issues with extremely large OpenStack deployments has been a significant 
asset to the development team. Not only is Marek a strong developer 
working on key features being introduced to Keystone but has continued to 
set a high bar for any code being introduced / proposed against Keystone. 
I know that the entire team really values Marek’s opinion on what is going 
in to Keystone.

Please respond with a +1 or -1 for adding Marek to the Keystone core team. 
This poll will remain open until Feb 13.

-- 
Morgan Fainberg

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
David
blog: http://www.traceback.org
twitter: http://twitter.com/dstanek
www: http://dstanek.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone] Proposing Marek Denis for the Keystone Core Team

2015-02-10 Thread Joe Savak
+1 !!

On Tue, Feb 10, 2015 at 11:51 AM, Morgan Fainberg 
morgan.fainb...@gmail.commailto:morgan.fainb...@gmail.com wrote:
Hi everyone!

I wanted to propose Marek Denis (marekd on IRC) as a new member of the Keystone 
Core team. Marek has been instrumental in the implementation of Federated 
Identity. His work on Keystone and first hand knowledge of the issues with 
extremely large OpenStack deployments has been a significant asset to the 
development team. Not only is Marek a strong developer working on key features 
being introduced to Keystone but has continued to set a high bar for any code 
being introduced / proposed against Keystone. I know that the entire team 
really values Marek’s opinion on what is going in to Keystone.

Please respond with a +1 or -1 for adding Marek to the Keystone core team. This 
poll will remain open until Feb 13.

--
Morgan Fainberg

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] Question about the plan of L

2015-02-10 Thread Walter A. Boring IV
Yes, assume NEW drivers have to land before the L-1 milestone.  This 
also includes getting a CI system up and running.


Walt


Hi,

In Kilo, Cinder drivers were requested to be merged before K-1. I want 
to ask: in L, will drivers be requested to be merged before 
L-1?


Thanks and regards,

Liu



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] EOL and Stable Contributions (was Juno is flubber at the gate)

2015-02-10 Thread Mark Voelker
The sentiment that Kevin is expressing here has come up informally at past 
Operator’s meetups as well, which makes sense given that relatively few 
operators are chasing trunk vs using a stable release.  I would hypothesize 
that there’s probably actually a fair bit of interest among operators in having 
well maintained stable branches but there are disincentives that keep them from 
pitching in more.  Let’s see if we can bring that to light a bit—I’ve added an 
item on the etherpad to discuss this in Philadelphia at the Operator’s midcycle 
meetup in a few weeks. [1]  If folks who are attending aren’t familiar with the 
current stable branch policies and team structure, you may want to read through 
the wiki first. [2]

[1] https://etherpad.openstack.org/p/PHL-ops-meetup

[2] https://wiki.openstack.org/wiki/StableBranch

At Your Service,

Mark T. Voelker
OpenStack Architect

On Feb 10, 2015, at 10:20 AM, Kevin Bringard (kevinbri) kevin...@cisco.com 
wrote:

Since this is sort of a topic change, I opted to start a new thread. I was 
reading over the Juno is Fubar at the Gate thread, and this bit stood out to 
me:

So I think it's time we called the icehouse branch and marked it EOL. We
originally conditioned the longer support window on extra people stepping
forward to keep things working. I believe this latest issue is just the latest
indication that this hasn't happened. Issue 1 listed above is being caused by
the icehouse branch during upgrades. The fact that a stable release was pushed
at the same time things were wedged on the juno branch is just the latest
evidence to me that things aren't being maintained as they should be. Looking at
the #openstack-qa irc log from today or the etherpad about trying to sort this
issue should be an indication that no one has stepped up to help with the
maintenance and it shows given the poor state of the branch.

Most specifically: 

We originally conditioned the longer support window on extra people stepping 
forward to keep things working ... should be an indication that no one has 
stepped up to help with the maintenance and it shows given the poor state of 
the branch.

I've been talking with a few people about this very thing lately, and I think 
much of it is caused by what appears to be our actively discouraging people 
from working on it. Most notably, ATC is only being given to folks committing 
to the current branch 
(https://ask.openstack.org/en/question/45531/atc-pass-for-the-openstack-summit/).
Secondly, it's difficult to get Stackalytics credit for backports, as the 
preferred method is to cherry-pick the code, and that keeps the original 
author's name. I've personally gotten a few commits into stable, but have 
nothing to show for it in Stackalytics (if I'm doing it wrong, I'm happy to 
be corrected).

My point here isn't to complain that I, or others, are not getting credit, but 
to point out that I don't know what we expected to happen to stable branches 
when we actively dis-incentivize people from working on them. Working on 
hardening old code is generally far less interesting than working on the cool 
shiny new features, and many of the productionalization issues we run into 
aren't uncovered until it's being run at scale which in turn is usually by a 
big company who likely isn't chasing trunk.

My fear is that we're going in a direction where trunk is the sole focus and 
we're subsequently going to lose the support of the majority of the operators 
and enterprises at which point we'll be a fun research project, but little more.

-- Kevin
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] release request for python-novaclient

2015-02-10 Thread Joe Gordon
On Mon, Feb 9, 2015 at 7:55 PM, Michael Still mi...@stillhq.com wrote:

 The previous policy is that we do a release when requested or when a
 critical bug fix merges. I don't see any critical fixes awaiting
 release, but I am not opposed to a release.

 The reason I didn't do this yesterday is that Joe wanted some time to
 pin the stable requirements, which I believe he is still working on.
 Let's give him some time unless this is urgent.


So to move this forward, let's just pin novaclient on stable branches, so
the longer-term effort to pin all the requirements isn't blocking this.

Icehouse already has a cap, so we just need to wait for the juno cap to
land:

https://review.openstack.org/154680
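A cap of this sort is a one-line bound in the stable branch's requirements file (the bounds below are illustrative, not the actual contents of that review):

```
# stable/juno requirements (illustrative version bounds)
python-novaclient>=2.18.0,<=2.20.0
```

With the upper bound in place, new novaclient releases from master can no longer break the stable branch's gate.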



 Michael

 On Tue, Feb 10, 2015 at 2:45 PM, melanie witt melwi...@gmail.com wrote:
  On Feb 6, 2015, at 8:17, Matt Riedemann mrie...@linux.vnet.ibm.com
 wrote:
 
  We haven't done a release of python-novaclient in awhile (2.20.0 was
 released on 2014-9-20 before the Juno release).
 
  It looks like there are some important feature adds and bug fixes on
 master so we should do a release, specifically to pick up the change for
 keystone v3 support [1].
 
  So can this be done now or should this wait until closer to the Kilo
 release (library releases are cheap so I don't see why we'd wait).
 
  Thanks for bringing this up -- there are indeed a lot of important
 features and fixes on master.
 
  I agree we should do a release as soon as possible, and I don't think
 there's any reason to wait until closer to Kilo.
 
  melanie (melwitt)
 
 
 
 
 
 
 __
  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 



 --
 Rackspace Australia

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Infra] upcoming review.openstack.org IP address change

2015-02-10 Thread Jeremy Stanley
It's that time again... on Saturday, March 21, 2015, the OpenStack
Project Infrastructure team is upgrading the operating system on
which review.openstack.org runs, and that means a new virtual
machine instance with new IP addresses assigned by our service
provider. The new IP addresses will be as follows:

104.130.159.134
2001:4800:7818:102:be76:4eff:fe05:9b12

We understand that some users may be running from egress-filtered
networks with port 29418/tcp explicitly allowed to the current
review.openstack.org IP addresses, and so are providing this
information as far in advance as we can to allow them time to update
their firewalls accordingly.
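For operators using iptables, an egress allowance for the new addresses might look like the following (a sketch only; chain names and default policy depend on your local setup):

```shell
# IPv4: permit outbound Gerrit SSH (29418/tcp) to the new address
iptables  -A OUTPUT -p tcp -d 104.130.159.134 --dport 29418 -j ACCEPT
# IPv6 equivalent
ip6tables -A OUTPUT -p tcp -d 2001:4800:7818:102:be76:4eff:fe05:9b12 --dport 29418 -j ACCEPT
```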

Note that some users dealing with egress filtering may find it
easier to switch their local configuration to use Gerrit's REST API
via HTTPS instead, and the current release of git-review has support
for that workflow as well.

http://lists.openstack.org/pipermail/openstack-dev/2014-September/045385.html

We will follow up with further details including information on the
time that Gerrit will be unavailable in subsequent announcements.
-- 
Jeremy Stanley


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][neutron] Create VM using port-create vs nova boot only?

2015-02-10 Thread Kevin Benton
As pointed out by the examples in the other replies, you would essentially
have to support every possible parameter to neutron port-create in nova
boot. That's creating unnecessary knowledge of neutron in nova. If you had
to eliminate one of the two, the second workflow should actually be the one
to go because that would support a better separation of concerns.
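For reference, the two workflows being compared look roughly like this (image, flavor, and UUIDs are placeholders):

```shell
# Workflow 1: create the port explicitly in neutron, then boot against it.
neutron port-create mynet --fixed-ip ip_address=10.0.0.10
nova boot --image cirros --flavor m1.tiny --nic port-id=<port-uuid> myvm

# Workflow 2: let nova ask neutron to create the port implicitly.
nova boot --image cirros --flavor m1.tiny --nic net-id=<net-uuid> myvm
```

Workflow 1 lets you set any port attribute neutron supports (fixed IPs, security groups, etc.) without nova needing to proxy each one.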

On Mon, Feb 9, 2015 at 10:21 PM, Wanjing Xu wanjing...@hotmail.com wrote:

 There seemed to be two ways to create a VM via cli:

 1) use neutron command to create a port first and then use nova command to
 attach the vm to that port(neutron port-create.. followed by nova boot
 --nic port-id=)
 2)Just use nova command and a port will implicitly be created for you(nova
 boot --nic net-id=net-uuid).

 My question is : is #2 sufficient enough to cover all the scenarios?  In
 other words, if we are not allowed to use #1(can only use #2 to create vm),
 would we miss anything?

 Regards!
 Wanjing Xu

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Kevin Benton
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] release request for python-novaclient

2015-02-10 Thread Kyle Mestery
On Tue, Feb 10, 2015 at 9:19 AM, Doug Hellmann d...@doughellmann.com
wrote:



 On Tue, Feb 10, 2015, at 07:25 AM, Sean Dague wrote:
  On 02/09/2015 10:55 PM, Michael Still wrote:
   The previous policy is that we do a release when requested or when a
   critical bug fix merges. I don't see any critical fixes awaiting
   release, but I am not opposed to a release.
  
   The reason I didn't do this yesterday is that Joe wanted some time to
   pin the stable requirements, which I believe he is still working on.
   Let's give him some time unless this is urgent.
 
  Going forward I'd suggest that we set a goal to do a monthly nova-client
  release to get fixes out into the wild in a more regular cadence. Would
  be nice to not have this just land as a big bang release at the end of a
  cycle.

 We review the changes in Oslo libraries weekly. Is there any reason not
 to do the same with client libs? Given the automation in place for
 creating releases, I think the whole process (including release notes)
 is down to just a few minutes now. The tagging script is in the
 openstack-infra/release-tools repository and I'd be happy to put the
 release notes script there, too, if others want to use it.

 ++, I'd love to see that script land there Doug!

And I agree, doing these releases is fairly straightforward now, so perhaps
they should occur with a more regular cadence.

Thanks,
Kyle


 Doug

 
-Sean
 
  
   Michael
  
   On Tue, Feb 10, 2015 at 2:45 PM, melanie witt melwi...@gmail.com
 wrote:
   On Feb 6, 2015, at 8:17, Matt Riedemann mrie...@linux.vnet.ibm.com
 wrote:
  
   We haven't done a release of python-novaclient in awhile (2.20.0 was
 released on 2014-9-20 before the Juno release).
  
   It looks like there are some important feature adds and bug fixes on
 master so we should do a release, specifically to pick up the change for
 keystone v3 support [1].
  
   So can this be done now or should this wait until closer to the Kilo
 release (library releases are cheap so I don't see why we'd wait).
  
   Thanks for bringing this up -- there are indeed a lot of important
 features and fixes on master.
  
   I agree we should do a release as soon as possible, and I don't think
 there's any reason to wait until closer to Kilo.
  
   melanie (melwitt)
  
  
  
  
  
  
 __
   OpenStack Development Mailing List (not for usage questions)
   Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
   http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
  
  
  
  
 
 
  --
  Sean Dague
  http://dague.net
 
 
 __
  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe:
  openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance][Artifacts] Object Version format: SemVer vs pep440

2015-02-10 Thread Donald Stufft

 On Feb 10, 2015, at 3:17 PM, Ian Cordasco ian.corda...@rackspace.com wrote:
 
 
 And of course, the chosen solution should be mappable to database, so
 we may do sorting and filtering on the DB-side.
 So, having it as a simple string and letting the user decide what
 it means is not an option.
 
 Except for the fact that versions do typically mean more than the values
 SemVer attaches to them. SemVer is further incompatible with any
 versioning scheme using epochs and is so relatively recent compared to
 versioning practices as a whole that I don’t see how we can justify
 restricting what should be a very generic system to something so specific
 to recent history and favored almost entirely by *developers*.

Semver vs PEP 440 is largely a syntax question since PEP 440 purposely does not
have much of an opinion on how something like 2.0.0 and 2.1.0 are related other
than for sorting. We do have operators in PEP 440 that support treating these
versions in a semver like way, and some that support treating them in other
ways.

The primary purpose of PEP 440 was to define a standard way to parse, sort,
and specify versions across the several hundred thousand versions that currently
exist on PyPI. This means that it is more complicated to implement, but it is
much more powerful than semver ever could be. One example, as Ian mentioned, is
the lack of the ability to do an epoch; another example is that PEP 440 has
explicit support for someone taking version 1.0, adding some unofficial patches
to it, and then releasing that in their own distribution channels.

The primary purpose of Semver was to be extremely opinionated in what meaning
you place on the *content* of the version parts and the syntax is really a
secondary concern which exists just to make it easier to parse. This means that
if you know ahead of time that something is Semver you can guess a lot more
information about the relationship of two versions.

It was our intention that PEP 440 would be (is?) aimed primarily at people
implementing tools that work with versions, and that additional PEPs or other
documentation would be written on top of PEP 440 to add opinions on what a
version looks like within the framework that PEP 440 sets up. A great example
is the pbr semver document that Monty linked.
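As a small illustration of the sorting behaviour described above, here is a sketch using the `packaging` library's PEP 440 implementation (assuming any PEP 440-compliant parser; version strings are made up):

```python
from packaging.version import Version

# PEP 440 orders forms that plain SemVer cannot express: an epoch
# ("1!0.5"), a post-release ("1.0.post1") and a local version
# ("1.0+downstream.1", e.g. version 1.0 plus unofficial patches).
versions = ["1.0rc1", "1.0", "1.0.post1", "1.0+downstream.1", "1!0.5"]
ordered = [str(v) for v in sorted(Version(v) for v in versions)]
print(ordered)
```

Pre-releases sort before the release, post-releases and local versions after it, and any version with a higher epoch sorts after everything from the previous epoch.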

---
Donald Stufft
PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance][Artifacts] Object Version format: SemVer vs pep440

2015-02-10 Thread Adrian Otto
I agree with Alexander on this. We should certainly learn what we can from 
existing software. That said, the Solum team really wants this feature in 
Glance so we can leverage that instead of having our own repository for Heat 
templates we generate when building apps. We want to keep our requirements list 
small, and Glance is already there.

Adrian

On Feb 10, 2015, at 10:01 AM, Alexander Tivelkov ativel...@mirantis.com wrote:

 Hi Ian,
 
 Automatic version generation is not the only and not the primary
 reason for the version concept. In fact, the implementation which is
 planned to land in this cycle does not contain this feature at all:
 currently we also leave the version assignment up to the uploader (version
 is a regular immutable generic artifact property). Auto-increments as
 part of clone-and-modify scenarios are postponed for the next cycle.
 
 However, even now we do need to have some sorting order - so, we need
 rules to determine precedence. That's the reason for having some
 notation defined: if we leave the notation up to the end-user we won't
 be able to compare artifacts having versions in different notations.
 And we even can't leave it up to the Artifact Type developer, since
 this is a generic property, thus common for all the artifact types.
 And of course, the chosen solution should be mappable to database, so
 we may do sorting and filtering on the DB-side.
 So, having it as a simple string and letting the user decide what
 it means is not an option.
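For what it's worth, such a DB-side mapping is straightforward to prototype. A toy sketch (my own illustration, not the scheme from the spec under review): split the SemVer string into numeric columns plus a normalized pre-release label whose plain-text collation matches SemVer precedence — numeric identifiers are zero-padded, and a `~` sentinel (which sorts after letters in ASCII) marks final releases:

```python
import re
import sqlite3

def version_columns(version):
    """'X.Y.Z[-pre][+build]' -> (major, minor, patch, pre_label).

    Build metadata is dropped. Numeric pre-release identifiers are
    zero-padded so binary text collation matches numeric order, and a
    final release gets the '~' sentinel, which sorts after any label.
    """
    m = re.match(r'^(\d+)\.(\d+)\.(\d+)(?:-([0-9A-Za-z.-]+))?(?:\+.*)?$',
                 version)
    if not m:
        raise ValueError('not a SemVer string: %r' % version)
    pre = m.group(4)
    label = '~' if pre is None else '.'.join(
        p.zfill(8) if p.isdigit() else p for p in pre.split('.'))
    return int(m.group(1)), int(m.group(2)), int(m.group(3)), label

db = sqlite3.connect(':memory:')
db.execute('CREATE TABLE artifact '
           '(name TEXT, version TEXT, major INT, minor INT, patch INT, '
           'pre TEXT)')
for v in ('2.0.0', '1.0.0-rc.1', '1.0.0', '1.0.0-beta.11', '1.0.0-beta.2'):
    db.execute('INSERT INTO artifact VALUES (?, ?, ?, ?, ?, ?)',
               ('myapp', v) + version_columns(v))
ordered = [r[0] for r in db.execute(
    'SELECT version FROM artifact ORDER BY major, minor, patch, pre')]
print(ordered)
# -> ['1.0.0-beta.2', '1.0.0-beta.11', '1.0.0-rc.1', '1.0.0', '2.0.0']
```

The `ORDER BY` clause can then be backed by an ordinary composite index, which is the whole point of mapping versions to columns rather than comparing strings in application code.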
 
 Speaking about Artifactory - that's entirely different thing. It is
 indeed a continuous delivery solution, composed around build machines,
 deployment solutions and CI systems. That's definitely not what Glance
 Artifact Repository is. Even the concepts of Artifact are entirely
 different.  So, while Artifact Repository may be used to build some CD
 solutions on top of it (or to be integrated with the existing ones) it
 is not a storage solution for build outputs and thus I can barely see
 how we may compare them.
 
 --
 Regards,
 Alexander Tivelkov
 
 
 On Tue, Feb 10, 2015 at 8:15 PM, Ian Cordasco
 ian.corda...@rackspace.com wrote:
 
 
 On 2/10/15, 10:35, Alexander Tivelkov ativel...@mirantis.com wrote:
 
 Thanks Monty!
 
 Yup, probably I've missed that. I was looking at pbr and its version
 implementation, but didn't realize that this is actually a fusion of
 semver and pep440.
 
 So, we have this as an extra alternative to choose from.
 
 It would be an obvious choice if we were just looking for some common
 solution to version objects within openstack. However, I am a bit
 concerned about applying it to Artifact Repository. As I wrote before,
 we are trying to make the Repository to be language- and
 platform-agnostic tool for other developers, including the ones
 originating from non-python and non-openstack worlds. Having a
 versioning notation which is non-standard for everybody but openstack
 developers does not look like a good idea to me.
 --
 Regards,
 Alexander Tivelkov
 
 
 On Tue, Feb 10, 2015 at 6:55 PM, Monty Taylor mord...@inaugust.com
 wrote:
 On 02/10/2015 10:28 AM, Alexander Tivelkov wrote:
 Hi folks,
 
 One of the key features that we are adding to Glance with the
 introduction of Artifacts is the ability to have multiple versions of
 the same object in the repository: this gives us the possibility to
 query for the latest version of something, keep track on the changes
 history, and build various continuous delivery solutions on top of
 Artifact Repository.
 
 We need to determine the format and rules we will use to define,
 increment and compare versions of artifacts in the repository. There
 are two alternatives we have to choose from, and we are seeking advice
 on this choice.
 
 First, there is Semantic Versioning specification, available at [1].
 It is a very generic spec, widely used and adopted in many areas of
 software development. It is quite straightforward: 3 mandatory numeric
 components for version number, plus optional string labels for
 pre-release versions and build metadata.
 
 And then there is PEP-440 spec, which is a recommended approach to
 identifying versions and specifying dependencies when distributing
 Python. It is a pythonic way to set versions of python packages,
 including PIP version strings.
 
 Conceptually PEP-440 and Semantic Versioning are similar in purpose,
 but slightly different in syntax. Notably, the count of version number
 components and rules of version precedence resolution differ between
 PEP-440 and SemVer. Unfortunately, the two version string formats are
 not compatible, so we have to choose one or the other.
 
 According to my initial vision, the Artifact Repository should be as
 generic as possible in terms of potential adoption. The artifacts were
 never supposed to be python packages only, and even the projects which
 will create and use these artifacts are not mandatory limited to be
 pythonic, the developers of that projects may not be python
 

Re: [openstack-dev] [Glance][Artifacts] Object Version format: SemVer vs pep440

2015-02-10 Thread Ian Cordasco
On 2/10/15, 12:01, Alexander Tivelkov ativel...@mirantis.com wrote:

Hi Ian,

Automatic version generation is not the only and not the primary
reason for the version concept. In fact, the implementation which is
planned to land in this cycle does not contain this feature at all:
currently we also leave the version assignment up to uploader (version
is a regular immutable generic artifact property). Auto-increments as
part of clone-and-modify scenarios are postponed for the next cycle.

However, even now we do need to have some sorting order - so, we need
rules to determine precedence. That's the reason for having some
notation defined: if we leave the notation up to the end-user we won't
be able to compare artifacts having versions in different notations.
And we even can't leave it up to the Artifact Type developer, since
this is a generic property, thus common for all the artifact types.

I think the fundamental disconnect is that not every column in a database
needs to offer sorting to the user. Imposing that restriction here causes a
cascade of further restrictions that will fundamentally cause this to be
unusable by a large number of people.

And of course, the chosen solution should be mappable to database, so
we may do sorting and filtering on the DB-side.
So, having it as a simple string and letting the user decide what
it means is not an option.

Except for the fact that versions do typically mean more than the values
SemVer attaches to them. SemVer is further incompatible with any
versioning scheme using epochs and is so relatively recent compared to
versioning practices as a whole that I don’t see how we can justify
restricting what should be a very generic system to something so specific
to recent history and favored almost entirely by *developers*.

Speaking about Artifactory - that's entirely different thing. It is
indeed a continuous delivery solution, composed around build machines,
deployment solutions and CI systems. That's definitely not what Glance
Artifact Repository is. Even the concepts of Artifact are entirely
different.  So, while Artifact Repository may be used to build some CD
solutions on top of it (or to be integrated with the existing ones) it
is not a storage solution for build outputs and thus I can barely see
how we may compare them.

Because up until now the only use case you’ve been referencing is CD
software.

--
Regards,
Alexander Tivelkov


On Tue, Feb 10, 2015 at 8:15 PM, Ian Cordasco
ian.corda...@rackspace.com wrote:


 On 2/10/15, 10:35, Alexander Tivelkov ativel...@mirantis.com wrote:

Thanks Monty!

Yup, probably I've missed that. I was looking at pbr and its version
implementation, but didn't realize that this is actually a fusion of
semver and pep440.

So, we have this as an extra alternative to choose from.

It would be an obvious choice if we were just looking for some common
solution to version objects within openstack. However, I am a bit
concerned about applying it to Artifact Repository. As I wrote before,
we are trying to make the Repository to be language- and
platform-agnostic tool for other developers, including the ones
originating from non-python and non-openstack worlds. Having a
versioning notation which is non-standard for everybody but openstack
developers does not look like a good idea to me.
--
Regards,
Alexander Tivelkov


On Tue, Feb 10, 2015 at 6:55 PM, Monty Taylor mord...@inaugust.com
wrote:
 On 02/10/2015 10:28 AM, Alexander Tivelkov wrote:
 Hi folks,

 One of the key features that we are adding to Glance with the
 introduction of Artifacts is the ability to have multiple versions of
 the same object in the repository: this gives us the possibility to
 query for the latest version of something, keep track on the changes
 history, and build various continuous delivery solutions on top of
 Artifact Repository.

 We need to determine the format and rules we will use to define,
 increment and compare versions of artifacts in the repository. There
 are two alternatives we have to choose from, and we are seeking
advice
 on this choice.

 First, there is Semantic Versioning specification, available at [1].
 It is a very generic spec, widely used and adopted in many areas of
 software development. It is quite straightforward: 3 mandatory
numeric
 components for version number, plus optional string labels for
 pre-release versions and build metadata.

 And then there is PEP-440 spec, which is a recommended approach to
 identifying versions and specifying dependencies when distributing
 Python. It is a pythonic way to set versions of python packages,
 including PIP version strings.

 Conceptually PEP-440 and Semantic Versioning are similar in purpose,
 but slightly different in syntax. Notably, the count of version
number
 components and rules of version precedence resolution differ between
 PEP-440 and SemVer. Unfortunately, the two version string formats are
 not compatible, so we have to choose one or the other.

 According to my initial vision, 

Re: [openstack-dev] [Glance][Artifacts] Object Version format: SemVer vs pep440

2015-02-10 Thread Ian Cordasco
On 2/10/15, 13:55, Jay Pipes jaypi...@gmail.com wrote:

On 02/10/2015 12:15 PM, Ian Cordasco wrote:
 So Semantic Versioning, as I’ve already mentioned in the past, isn’t
 really a de facto standard in any language community but it is a
language
 agnostic proposal. That said, just because it’s language agnostic does
not
 mean it won’t conflict with other language’s versioning semantics. Since
 we’re effectively reinventing an existing open source solution here, I
 think we should look to how Artifactory [1] handles this.

Reinventing an existing open source solution is a bit off-the-mark
IMO. Artifactory is more of a SaaS solution through bintray.com --
paying some lip-service to open source more than anything else.

Except the base service is entirely free and open and thus can be deployed
anywhere. And people looking to rebuild CD services in OpenStack should
probably refer to existing implementations (of which I only pointed out
the one I had already heard about).


 I haven’t used artifactory very much but a cursory look makes it
apparent
 that it is strongly decoupling the logic of version management from
 artifact management (which this set of changes isn’t doing in Glance).

Alex's spec is merely proposing to standardize on a single way of
managing versioning. It isn't coupling artifact *management* with
anything whatsoever. In fact, the idea of Glance being an artifact
repository has nothing to do with the management of said artifacts --
which things like Heat, Murano or Solum handle in different ways.

Except that Alex is proposing that this would include a number of things,
some of which can and will be versioned in a way that will not work with
the proposed solution.

The idea behind the Glance artifact repository was to store a
discoverable *schema* for various objects, along with the object
metadata itself. I can see Nova and Cinder flavors, Glance, Docker or
AppC image metadata, Cinder snapshot and differential metadata, Murano
application manifests, Heat templates, and Solum deployment components
all being described (schemas) and stored in the Glance artifact
repository.

What I have not heard from anyone is the idea to marry the management of
the things this artifact metadata describes with Glance itself.

Right. That’s the other concern I have with Artifacts. Images /can/ be
/eventually/ represented as artifacts, but for now we’re grafting what
will functionally be an entirely separate project (for at least the next
cycle, if not longer) onto Glance while that new project is unstable and
at best experimental.

 The primary argument (as I understand it) for using SemVer with
Artifacts
 in glance is to have a way to automatically have the service create the
 version number for the uploader. Artifactory (and its “Pro” version) seems
 to leave that to the build tooling. In other words, it is purely storage.
 It seems to sort versions alphabetically (which everyone can agree is
not
 only suboptimal but wrong) but it also provides the user a way to alter
 how it performs sorting.

I don't know where you get the idea that a service (Glance, you mean?)
would automatically create the version number for some artifact. The
spec talks only about the ability of the artifact registry to sort a set
of artifacts by incrementing version number, and do so in a reasonably
short time frame -- i.e. have a database column indexed for that purpose.

Several IRC discussions around Artifacts and the SemVer spec have indicated
that this is a feature Alex wants to add as a consequence of this work.


 Is there a reason (beyond not being an OpenStack project) that
Artifactory
 isn’t being considered by those pushing for Artifacts to provide a CD
 service?

Because we're not looking to create a CD service? :) AFAIK, the artifact
repository is a generic storage system for schemas and metadata, nothing
more.

Except the common explanation for why we /need/ Artifacts is the example
of people wanting to set up a CD system and the discussions that have been
held elsewhere point to this being so much more than just that in terms of
responsibility of managing versions and other (specifically, build)
artifacts.


Best,
-jay

  Judging by the support it seems to roughly have for other
 services (via a “Pro” version), it would appear able to suit this
 need well with little work by a deployer who needs to serve more than
 one use case.
 one use case.

 [1] http://www.jfrog.com/open-source/#os-arti

 



Re: [openstack-dev] [Glance][Artifacts] Object Version format: SemVer vs pep440

2015-02-10 Thread Clint Byrum
Excerpts from Alexander Tivelkov's message of 2015-02-10 07:28:55 -0800:
 Hi folks,
 
 One of the key features that we are adding to Glance with the
 introduction of Artifacts is the ability to have multiple versions of
 the same object in the repository: this gives us the possibility to
 query for the latest version of something, keep track on the changes
 history, and build various continuous delivery solutions on top of
 Artifact Repository.
 
 We need to determine the format and rules we will use to define,
 increment and compare versions of artifacts in the repository. There
 are two alternatives we have to choose from, and we are seeking advice
 on this choice.
 
 First, there is Semantic Versioning specification, available at [1].
 It is a very generic spec, widely used and adopted in many areas of
 software development. It is quite straightforward: 3 mandatory numeric
 components for version number, plus optional string labels for
 pre-release versions and build metadata.
 
 And then there is PEP-440 spec, which is a recommended approach to
 identifying versions and specifying dependencies when distributing
 Python. It is a pythonic way to set versions of python packages,
 including PIP version strings.
 
 Conceptually PEP-440 and Semantic Versioning are similar in purpose,
 but slightly different in syntax. Notably, the count of version number
 components and rules of version precedence resolution differ between
 PEP-440 and SemVer. Unfortunately, the two version string formats are
 not compatible, so we have to choose one or the other.
 
 According to my initial vision, the Artifact Repository should be as
 generic as possible in terms of potential adoption. The artifacts were
 never supposed to be python packages only, and even the projects which
 will create and use these artifacts are not necessarily limited to being
 pythonic; the developers of those projects may not be python
 developers! So, I really wanted to avoid any python-specific
 notations, such as PEP-440 for artifacts.
 
 I've put this vision into a spec [3] which also contains a proposal on
 how to convert the semver-compatible version strings into the
 comparable values which may be mapped to database types, so a database
 table may be queried, ordered and filtered by the object version.
 
 So, we need some feedback on this topic. Would you prefer artifacts to
 be versioned with SemVer or with PEP-440 notation? Are you interested
 in having some generic utility which will map versions (in either
 format) to database columns? If so, which version format would you
 prefer?
 
 We are on a tight schedule here, as we want to begin landing
 artifact-related code soon. So, I would appreciate your feedback
 during this week: here in the ML or in the comments to [3] review.
 

Hi. This is really interesting work and I'm glad Glance is growing into
an artifact catalog as I think it will assist cloud users and UI
development at the same time.

It seems to me that there are really only two reasons to care about the
content of the versions: sorting, and filtering. You want to make sure
if people upload artifacts named myapp like this:

myapp:1.0 myapp:2.0 myapp:1.1

That when they say "show me the newest myapp" they get 2.0, not 1.1.

And if they say "show me the newest myapp in the 1.x series" they get 1.1.

I am a little worried this is not something that can or should be made
generic in a micro service.

Here's a thought: You could just have the version, series, and sequence,
and let users manage the sequencing themselves on the client side. This
way if users want to use the _extremely_ difficult-to-program-for Debian
packaging version scheme, you don't have to figure out how to make 1.0~special
less than 1.0 and more than 0.9.

To start with, you can have a default strategy of a single series, and
max(sequence)+1000 if unspecified. Then teach the clients the various
semvers/pep440's/etc. etc. and let them choose their own sequencing and
series strategy.
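If I read that right, the client-managed sequencing could look roughly like this — every name here is hypothetical, just to make the idea concrete:

```python
class ArtifactCatalog:
    """Toy in-memory catalog: version strings are opaque, and ordering
    comes from a client-controlled (series, sequence) pair."""

    def __init__(self):
        self._rows = []  # (name, series, sequence, version)

    def add(self, name, version, series='default', sequence=None):
        if sequence is None:
            # server-side default strategy: max(sequence) + 1000,
            # leaving room for clients to slot versions in between later
            existing = [seq for (n, ser, seq, _) in self._rows
                        if n == name and ser == series]
            sequence = max(existing) + 1000 if existing else 1000
        self._rows.append((name, series, sequence, version))
        return sequence

    def latest(self, name, series='default'):
        """Return the highest-sequence version in a series, if any."""
        rows = [r for r in self._rows if r[0] == name and r[1] == series]
        return max(rows, key=lambda r: r[2])[3] if rows else None
```

A client that understands SemVer, PEP 440, or Debian versions would compute its own sequence values; the server never has to parse the version string.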



Re: [openstack-dev] [NOVA] security group fails to attach to an instance if port-id is specified during boot.

2015-02-10 Thread Oleg Bondarev
On Tue, Feb 10, 2015 at 5:26 PM, Feodor Tersin fter...@cloudscaling.com
wrote:

 I definitely don't expect any change of the existing port in the case with
 two nics. However in the case of single nic a question like 'what is impact
 of security-groups parameter' arises.
 Also a similar question arises out of '--nic port-id=xxx,v4-fixed-ip=yyy'
 combination.
 Moreover, if we assume that, for example, security-groups parameter
 affects the specified port, the next question is 'what is the result group
 set'. Does it replace groups of the port, or just update them?

 Thus I agree with you that this part of the nova API is not clear now. But
 the case with two nics makes sense, works now, and can be used by someone.
 Do you really want to break it?


I don't want to break anything :)
I guess the only option then is to just log a warning that security groups
are ignored in case port_id is provided on boot -
but this still leaves a chance of broken user expectations.
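A toy sketch of the two options being weighed — warn-and-ignore versus reject with a 400 (not actual Nova code; the names are made up):

```python
import warnings

def validate_boot_request(nics, security_groups, strict=False):
    """Hypothetical check: a pre-existing port plus security groups
    either fails fast (strict, HTTP-400 style) or emits a warning that
    the groups will be ignored for that port."""
    has_port = any('port-id' in nic for nic in nics)
    if has_port and security_groups:
        if strict:
            raise ValueError('400: security groups cannot be combined '
                             'with a pre-existing port; set them via the '
                             'network API instead')
        warnings.warn('security groups ignored for pre-existing port')
```

With `strict=False` this matches the warn-and-ignore option above; `strict=True` is the 400 response Chris suggested back in September.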




 On Tue, Feb 10, 2015 at 10:29 AM, Oleg Bondarev obonda...@mirantis.com
 wrote:



 On Mon, Feb 9, 2015 at 8:50 PM, Feodor Tersin fter...@cloudscaling.com
 wrote:

 nova boot ... --nic port-id=xxx --nic net-id=yyy
 this case is valid, right?
 I.e. i want to boot instance with two ports. The first port is
 specified, but the second one is created at network mapping stage.
 If i specify a security group as well, it will be used for the second
 port (if not - default group will):
 nova boot ... --nic port-id=xxx --nic net-id=yyy --security-groups sg-1
 Thus a port and a security group can be specified together.


  The question here is what do you expect for the existing port - its
  security groups updated or not?
 Will it be ok to silently (or with warning in logs) ignore security
 groups for it?
 If it's ok then is it ok to do the same for:
 nova boot ... --nic port-id=xxx --security-groups sg-1
 where the intention is clear enough?



 On Mon, Feb 9, 2015 at 7:14 PM, Matt Riedemann 
 mrie...@linux.vnet.ibm.com wrote:



 On 9/26/2014 3:19 AM, Christopher Yeoh wrote:

 On Fri, 26 Sep 2014 11:25:49 +0400
 Oleg Bondarev obonda...@mirantis.com wrote:

  On Fri, Sep 26, 2014 at 3:30 AM, Day, Phil philip@hp.com wrote:

   I think the expectation is that if a user is already interacting
  with Neutron to create ports then they should do the security group
 assignment in Neutron as well.


 Agree. However what do you think a user expects when he/she boots a
  vm (whether providing port_id or just net_id)
 and specifies security_groups? I think the expectation should be that
 instance will become a member of the specified groups.
  Ignoring the security_groups parameter when a port is provided (as it is
  now) seems completely unfair to me.


 One option would be to return a 400 if both port id and security_groups
 is supplied.

 Chris

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 Coming back to this, we now have a change from Oleg [1] after an
 initial attempt that was reverted because it would break server creates if
 you specified a port (because the original change would blow up when the
  compute API added the 'default' security group to the request).

 The new change doesn't add the 'default' security group to the request
 so if you specify a security group and port on the request, you'll now get
 a 400 error response.

 Does this break API compatibility?  It seems this falls under the first
 bullet here [2], A change such that a request which was successful before
 now results in an error response (unless the success reported previously
 was hiding an existing error condition).  Does that caveat in parenthesis
 make this OK?

 It seems like we've had a lot of talk about warts in the compute v2 API
 for cases where an operation is successful but didn't yield the expected
 result, but we can't change them because of API backwards compatibility
 concerns so I'm hesitant on this.

 We also definitely need a Tempest test here, which I'm looking into.  I
 think I can work this into the test_network_basic_ops scenario test.

 [1] https://review.openstack.org/#/c/154068/
 [2] https://wiki.openstack.org/wiki/APIChangeGuidelines#
 Generally_Not_Acceptable

 --

 Thanks,

 Matt Riedemann


 







 

Re: [openstack-dev] [Manila] Manila virtual midcycle meetup

2015-02-10 Thread Danny Al-Gaaf
Am 11.02.2015 um 04:10 schrieb Ben Swartzlander:
 https://etherpad.openstack.org/p/manila-kilo-midcycle-meetup
 
 This is a reminder that the meetup is tomorrow! It will be entirely
 virtual, so please join the Google Hangout or the phone bridge. The
 details are in the etherpad.

Do you have by any chance a European/German number for the phone
bridge? Judging from the list of attendees, a Shanghai number may also help.

Danny






Re: [openstack-dev] [Keystone] Proposing Marek Denis for the Keystone Core Team

2015-02-10 Thread Marcos Fermin Lobo

+1!

Cheers,
Marcos.


On 02/10/2015 11:26 PM, openstack-dev-requ...@lists.openstack.org wrote:

Hi everyone!

I wanted to propose Marek Denis (marekd on IRC) as a new member of the
Keystone Core team. Marek has been instrumental in the implementation of
Federated Identity. His work on Keystone and first hand knowledge of the
issues with extremely large OpenStack deployments has been a significant
asset to the development team. Not only is Marek a strong developer working
on key features being introduced to Keystone but has continued to set a
high bar for any code being introduced / proposed against Keystone. I know
that the entire team really values Marek's opinion on what is going into
Keystone.

Please respond with a +1 or -1 for adding Marek to the Keystone core team.
This poll will remain open until Feb 13.

--
Morgan Fainberg





Re: [openstack-dev] docstring standard?

2015-02-10 Thread Brian Curtin
On Tue, Feb 10, 2015 at 1:57 PM, Min Pae sputni...@gmail.com wrote:
 I think most people would agree documentation is a good thing, and
 consistency is generally a good thing…  is there an accepted standard on
 layout and minimum required fields?

 If not, should there be?

 For example

 - Heading (short description)
 - Description
 - Inputs
- Input name + description
- Input type
 - Outputs
- Output description
- Output type

 vs

 - Heading (short description)
 - Inputs
- Input name + description
- Input type
 - Outputs
- Output description
- Output type
 - Description


 I generally like the former, but given that the docstring in python follows
  the function/method header rather than preceding it, it seems like the latter
 would be better for python so that descriptions of the inputs and outputs
 are not separated by lengthy descriptions.

 Comments/opinions?

In python-openstacksdk we go the route that has the description at the
top, which is partially driven by our use of autodoc. It looks and
reads better to describe what a method does and then show the options,
versus giving a one sentence description, a bunch of detailed options,
and then telling you a paragraph about what it can do.

Even if you're not generating documentation out of it, having the
in/out parameters as the last thing makes it a little more usable when
reading the code itself.
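For illustration, the description-first layout reads like this with Sphinx field lists (a made-up function, not SDK code):

```python
def create_server(name, flavor):
    """Create a server and wait for it to become active.

    The longer description comes first, so autodoc renders the prose
    before the parameter table rather than sandwiching it underneath.

    :param name: Display name for the new server.
    :type name: str
    :param flavor: Flavor identifier to boot with.
    :type flavor: str
    :returns: An identifier for the created server.
    :rtype: str
    """
    return '%s/%s' % (name, flavor)  # placeholder body for illustration
```

The alternative layout would move the two description paragraphs below the `:rtype:` line, keeping the parameter fields adjacent to the `def` line.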



[openstack-dev] Filtering by metadata values

2015-02-10 Thread Miguel Grinberg
Hi,

We had a discussion yesterday on the Heat channel regarding patterns for 
searching or filtering entities by their metadata values. This is in relation to 
a feature that is currently being implemented in Heat called “Stack Tags”.

The idea is that Heat stack lists can be filtered by these tags, so for 
example, any stacks that you don’t want to see you can tag as “hidden”, then 
when you request a stack list you can specify that you only want stacks that do 
not have the “hidden” tag.

We were trying to find other similar usages of tags and/or metadata within 
OpenStack projects, where these are not only stored as data, but are also used 
in database queries for filtering. A quick search revealed nothing, which is 
surprising.

Is there anything I may have missed? I would like to know if there is anything 
even remotely similar, so that we don’t build a new wheel if one already exists 
for this.

Thanks,

Miguel
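Outside OpenStack, the usual relational pattern for this is a plain join table of tags filtered with NOT EXISTS. A hedged sqlite sketch of the "exclude stacks tagged hidden" query described above (schema and names invented for illustration):

```python
import sqlite3

# A stack table plus a stack/tag join table; hiding is just a tag.
db = sqlite3.connect(':memory:')
db.executescript("""
    CREATE TABLE stack (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE stack_tag (stack_id INTEGER, tag TEXT);
    INSERT INTO stack VALUES (1, 'web'), (2, 'scratch'), (3, 'db');
    INSERT INTO stack_tag VALUES (2, 'hidden'), (3, 'prod');
""")

# List stacks that do NOT carry the 'hidden' tag.
visible = db.execute("""
    SELECT s.name FROM stack s
    WHERE NOT EXISTS (SELECT 1 FROM stack_tag t
                      WHERE t.stack_id = s.id AND t.tag = 'hidden')
    ORDER BY s.name
""").fetchall()
print(visible)
# -> [('db',), ('web',)]
```

Filtering *for* a tag is the same shape with EXISTS instead of NOT EXISTS, so one join table covers both directions of the query.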

