Re: [openstack-dev] [qa] Tempest Bug triage

2014-09-12 Thread Mauro S M Rodrigues

On 09/11/2014 04:52 PM, David Kranz wrote:
So we had a Bug Day this week and the results were a bit disappointing 
due to lack of participation. We went from 124 New bugs to 75. There 
were also many cases where bugs referred to logs that no longer 
existed. This suggests that we really need to keep up with bug triage 
in real time. Since bug triage should involve the Core review team, we 
propose to rotate the responsibility of triaging bugs weekly. I put up 
an etherpad here 
https://etherpad.openstack.org/p/qa-bug-triage-rotation and I hope the 
tempest core review team will sign up. Given our size, this should 
involve signing up once every two months or so. I took next week.


 -David
+1. I'm not on the core team, but I just assigned myself to the last week of 
September and the first week of December.


Also, given the poor quality of some reports, we may want to come up with 
a template of need-to-have data for bug reports. I haven't really looked 
lately, but we used to have several reports containing just a bunch 
of traces, or just a link.


  --  mauro(sr)


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [QA] Bug Day kickoff

2014-03-19 Thread Mauro S M Rodrigues

Morning QA Team,

I hope everybody can help with this effort so we can release Icehouse with 
as few bugs as possible.


So this morning, March 19th, 12:00 UTC, the current picture of Tempest's bugs is:
 * 166 Open Bugs https://bugs.launchpad.net/tempest/+bugs
   - 17 classified as wishlist
   - 8 marked as Fix Committed; *should we wait for Icehouse to mark them 
as Released?* (Or, if we're doing it right now, can we adopt this policy 
forever?)


*Now our focus is on the 142 bugs left (http://bit.ly/1iAMbBA)*
   - 62 to triage and prioritize; some of them already have an assignee: 
http://bit.ly/1cXhPcF (this includes incomplete ones that were answered).

   - 29 to prioritize: http://bit.ly/1dcL9MO
   - 51 In Progress: http://bit.ly/1hzsjfH, ordered by the ones with the 
least activity; we need to reach out to the current assignees to check the 
current status (and maybe assign someone else to take care of them).
   - 4 incomplete without an answer; we may try to reach the reporters 
for an update.



Some actions, from the first email:

On 03/12/2014 09:31 PM, Mauro S M Rodrigues wrote:

== Actions ==
Basically I'm proposing the following actions for the QA Bug Day; nothing 
much new here:


1st - Triage those 48 bugs in [1]; this includes:
    * Prioritizing them;
    * Marking any duplicates;
    * Adding tags and any other projects that may be related to the bug, so 
we can get the right eyes on it;
    * Some cool extra stuff: comments with suggestions, or links to 
logstash queries, so we can get a real sense of how critical the 
bug in question is;


2nd - Assign yourself to some of the unassigned bugs if possible, so we 
can squash them eventually.


3rd - Dedicate some time to review the 51 In Progress bugs, AND/OR get 
in touch with the current assignee if a bug hasn't had recent 
activity, so we can put it back into the triage queue.



Thanks,

mauro(sr)




Re: [openstack-dev] [QA][Tempest] Reminder: Bug Day - Wed, 19th

2014-03-18 Thread Mauro S M Rodrigues

Hey! I just want to remind everybody about the bug day tomorrow.

Thanks

On 03/12/2014 09:31 PM, Mauro S M Rodrigues wrote:

Hello everybody!

In the last QA meeting I stepped ahead and volunteered to organize 
another QA Bug Day.


This week wasn't a good one, so I thought I'd schedule it for next 
Wednesday (March 19th). If you think we need more time or something, 
please let me know.


== Actions ==
Basically I'm proposing the following actions for the QA Bug Day; nothing 
much new here:


1st - Triage those 48 bugs in [1]; this includes:
    * Prioritizing them;
    * Marking any duplicates;
    * Adding tags and any other projects that may be related to the bug, so 
we can get the right eyes on it;
    * Some cool extra stuff: comments with suggestions, or links to 
logstash queries, so we can get a real sense of how critical the 
bug in question is;


2nd - Assign yourself to some of the unassigned bugs if possible, so we 
can squash them eventually (see [2]).


3rd - Dedicate some time to review the 55 In Progress bugs (see [3]), 
AND/OR get in touch with the current assignee if a bug hasn't had 
recent activity (see [4]), so we can put it back into the triage queue.


And depending on how things go, I'd suggest we not forget Grenade, 
which is also part of the QA Program, and extend this effort to it 
(see the Grenade references, which use the same indexes as Tempest's).


So that's pretty much it; I'd like to hear any suggestions or opinions 
you may have.



== Tempest references ==
[1] - https://bugs.launchpad.net/tempest/+bugs?field.searchtext=&field.status%3Alist=NEW&field.status%3Alist=INCOMPLETE_WITH_RESPONSE
[2] - https://bugs.launchpad.net/tempest/+bugs?field.searchtext=&orderby=-importance&search=Search&field.status%3Alist=CONFIRMED&field.status%3Alist=TRIAGED&field.status%3Alist=INPROGRESS&field.importance%3Alist=CRITICAL&field.importance%3Alist=HIGH&field.importance%3Alist=MEDIUM&field.importance%3Alist=LOW&assignee_option=none&field.assignee=&field.bug_reporter=&field.bug_commenter=&field.subscriber=&field.structural_subscriber=&field.tag=&field.tags_combinator=ANY&field.has_cve.used=&field.omit_dupes.used=&field.omit_dupes=on&field.affects_me.used=&field.has_patch.used=&field.has_branches.used=&field.has_branches=on&field.has_no_branches.used=&field.has_no_branches=on&field.has_blueprints.used=&field.has_blueprints=on&field.has_no_blueprints.used=&field.has_no_blueprints=on
[3] - https://bugs.launchpad.net/tempest/+bugs?search=Search&field.status=In+Progress
[4] - https://bugs.launchpad.net/tempest/+bugs?search=Search&field.status=In+Progress&orderby=date_last_updated


== Grenade references ==
[1] - https://bugs.launchpad.net/grenade/+bugs?field.searchtext=&field.status%3Alist=NEW&field.status%3Alist=INCOMPLETE_WITH_RESPONSE
[2] - https://bugs.launchpad.net/grenade/+bugs?field.searchtext=&orderby=-importance&search=Search&field.status%3Alist=CONFIRMED&field.status%3Alist=TRIAGED&field.status%3Alist=INPROGRESS&field.importance%3Alist=CRITICAL&field.importance%3Alist=HIGH&field.importance%3Alist=MEDIUM&field.importance%3Alist=LOW&assignee_option=none&field.assignee=&field.bug_reporter=&field.bug_commenter=&field.subscriber=&field.structural_subscriber=&field.tag=&field.tags_combinator=ANY&field.has_cve.used=&field.omit_dupes.used=&field.omit_dupes=on&field.affects_me.used=&field.has_patch.used=&field.has_branches.used=&field.has_branches=on&field.has_no_branches.used=&field.has_no_branches=on&field.has_blueprints.used=&field.has_blueprints=on&field.has_no_blueprints.used=&field.has_no_blueprints=on
[3] - https://bugs.launchpad.net/grenade/+bugs?search=Search&field.status=In+Progress
[4] - https://bugs.launchpad.net/grenade/+bugs?search=Search&field.status=In+Progress&orderby=date_last_updated
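As an aside, the Launchpad bug-search URLs above are ordinary query strings: one `field.*` parameter per filter, repeated for multi-valued fields and joined with `&`. A small sketch of building one (the parameter names are copied from the URLs in this thread; nothing else about the Launchpad API is assumed):

```python
from urllib.parse import urlencode

# Illustration only, not an official Launchpad client: rebuild the
# "NEW or INCOMPLETE_WITH_RESPONSE" search from reference [1].
base = "https://bugs.launchpad.net/tempest/+bugs"
params = [
    ("field.searchtext", ""),                           # empty text search
    ("field.status:list", "NEW"),                       # status filters...
    ("field.status:list", "INCOMPLETE_WITH_RESPONSE"),  # ...may repeat
]
url = base + "?" + urlencode(params)  # ":" is percent-encoded as %3A
print(url)
```

Passing a list of pairs (rather than a dict) is what lets `field.status:list` appear more than once in the query string.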








[openstack-dev] [QA][Tempest] Bug Day - Wed, 19th

2014-03-12 Thread Mauro S M Rodrigues

Hello everybody!

In the last QA meeting I stepped ahead and volunteered to organize 
another QA Bug Day.


This week wasn't a good one, so I thought I'd schedule it for next 
Wednesday (March 19th). If you think we need more time or something, 
please let me know.


== Actions ==
Basically I'm proposing the following actions for the QA Bug Day; nothing 
much new here:


1st - Triage those 48 bugs in [1]; this includes:
    * Prioritizing them;
    * Marking any duplicates;
    * Adding tags and any other projects that may be related to the bug, so 
we can get the right eyes on it;
    * Some cool extra stuff: comments with suggestions, or links to 
logstash queries, so we can get a real sense of how critical the bug 
in question is;


2nd - Assign yourself to some of the unassigned bugs if possible, so we 
can squash them eventually (see [2]).


3rd - Dedicate some time to review the 55 In Progress bugs (see [3]), 
AND/OR get in touch with the current assignee if a bug hasn't had 
recent activity (see [4]), so we can put it back into the triage queue.


And depending on how things go, I'd suggest we not forget Grenade, 
which is also part of the QA Program, and extend this effort to it 
(see the Grenade references, which use the same indexes as Tempest's).


So that's pretty much it; I'd like to hear any suggestions or opinions 
you may have.



== Tempest references ==
[1] - https://bugs.launchpad.net/tempest/+bugs?field.searchtext=&field.status%3Alist=NEW&field.status%3Alist=INCOMPLETE_WITH_RESPONSE
[2] - https://bugs.launchpad.net/tempest/+bugs?field.searchtext=&orderby=-importance&search=Search&field.status%3Alist=CONFIRMED&field.status%3Alist=TRIAGED&field.status%3Alist=INPROGRESS&field.importance%3Alist=CRITICAL&field.importance%3Alist=HIGH&field.importance%3Alist=MEDIUM&field.importance%3Alist=LOW&assignee_option=none&field.assignee=&field.bug_reporter=&field.bug_commenter=&field.subscriber=&field.structural_subscriber=&field.tag=&field.tags_combinator=ANY&field.has_cve.used=&field.omit_dupes.used=&field.omit_dupes=on&field.affects_me.used=&field.has_patch.used=&field.has_branches.used=&field.has_branches=on&field.has_no_branches.used=&field.has_no_branches=on&field.has_blueprints.used=&field.has_blueprints=on&field.has_no_blueprints.used=&field.has_no_blueprints=on
[3] - https://bugs.launchpad.net/tempest/+bugs?search=Search&field.status=In+Progress
[4] - https://bugs.launchpad.net/tempest/+bugs?search=Search&field.status=In+Progress&orderby=date_last_updated


== Grenade references ==
[1] - https://bugs.launchpad.net/grenade/+bugs?field.searchtext=&field.status%3Alist=NEW&field.status%3Alist=INCOMPLETE_WITH_RESPONSE
[2] - https://bugs.launchpad.net/grenade/+bugs?field.searchtext=&orderby=-importance&search=Search&field.status%3Alist=CONFIRMED&field.status%3Alist=TRIAGED&field.status%3Alist=INPROGRESS&field.importance%3Alist=CRITICAL&field.importance%3Alist=HIGH&field.importance%3Alist=MEDIUM&field.importance%3Alist=LOW&assignee_option=none&field.assignee=&field.bug_reporter=&field.bug_commenter=&field.subscriber=&field.structural_subscriber=&field.tag=&field.tags_combinator=ANY&field.has_cve.used=&field.omit_dupes.used=&field.omit_dupes=on&field.affects_me.used=&field.has_patch.used=&field.has_branches.used=&field.has_branches=on&field.has_no_branches.used=&field.has_no_branches=on&field.has_blueprints.used=&field.has_blueprints=on&field.has_no_blueprints.used=&field.has_no_blueprints=on
[3] - https://bugs.launchpad.net/grenade/+bugs?search=Search&field.status=In+Progress
[4] - https://bugs.launchpad.net/grenade/+bugs?search=Search&field.status=In+Progress&orderby=date_last_updated



--
mauro(sr)




Re: [openstack-dev] heads up, set -o errexit on devstack - things will fail earlier now

2014-02-28 Thread Mauro S M Rodrigues

Awesome! Thanks for this!

Btw, I guess this will automatically work for Grenade, since we use 
devstack to set up the X-1 release, am I right? (And it's not a concern 
for the upgrade part, since the upgrade-component scripts already contain 
an errexit trap in their cleanup functions, right?)


--
mauro(sr)


On 02/27/2014 06:17 PM, Sergey Lukjanov wrote:

And a big +1 from me too. It's really useful.

On Fri, Feb 28, 2014 at 12:15 AM, Devananda van der Veen
devananda@gmail.com wrote:

On Thu, Feb 27, 2014 at 9:34 AM, Ben Nemec openst...@nemebean.com wrote:

On 2014-02-27 09:23, Daniel P. Berrange wrote:

On Thu, Feb 27, 2014 at 08:38:22AM -0500, Sean Dague wrote:

This patch is coming through the gate this morning -
https://review.openstack.org/#/c/71996/

The point is to actually make devstack stop when it hits an error,
instead of only stopping once the errors compound to the point where
there is no moving forward and some service call fails. This should
*dramatically* improve the experience of figuring out a failure in the
gate, because where it fails should be the issue. (It also made us
figure out some wonkiness with stdout buffering that was making
debugging difficult.)

This works on all the content that devstack gates against. However,
there are a ton of other paths in devstack, including vendor plugins,
which I'm sure aren't clean enough to run under -o errexit. So if all of
a sudden things start failing, this may be why. Fortunately you'll be
pointed at the exact point of the fail.
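For readers less familiar with the option: a minimal sketch of what `set -o errexit` changes (this is an illustration, not devstack's actual code). Without it, bash skips past a failing step and keeps going; with it, the script exits at the exact failing command.

```python
import subprocess

# Run a tiny "install" script under errexit: the echo after the failing
# step is never reached, and bash exits non-zero at that step.
script = """
set -o errexit
echo "installing packages"
false                      # stand-in for a failing setup step
echo "starting services"   # never reached once errexit is on
"""
result = subprocess.run(["bash", "-c", script],
                        capture_output=True, text=True)
print(result.returncode)   # non-zero: bash stopped at `false`
print(result.stdout)       # only "installing packages" was printed
```

Drop the `set -o errexit` line and the same script prints both messages and exits 0, which is exactly the "errors compound silently" behavior described above.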


This is awesome!


+1!  Thanks Sean and everyone else who was involved with this.


Another big +1 for this! I've wished for it every time I tried to add
something to devstack and struggled with debugging it.

-Deva










Re: [openstack-dev] [nova] Nova API extensions NOT to be ported to v3

2013-07-01 Thread Mauro S M Rodrigues
One more thought, about os-multiple-create: I was also thinking of removing 
it. I don't see any real advantage to using it, since it doesn't offer any 
kind of flexibility, like choosing different flavors, images, and other 
attributes. So anyone creating multiple servers would probably prefer an 
external automation tool instead, IMHO.


So, is anyone using it? Is there a good reason to keep it? Did I miss 
something about this extension?
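For context, os-multiple-create works by adding min_count/max_count to a regular server-create request, so every server in the batch shares the same image, flavor, and attributes. A hedged sketch of such a request body (the name, image, and flavor values are placeholders, not real IDs):

```python
import json

# Sketch of an os-multiple-create request body: min_count / max_count ask
# Nova to boot several identical servers in one POST /servers call. Note
# there is no way to vary flavor or image per server, which is the
# limitation discussed above.
body = {
    "server": {
        "name": "batch-server",     # hypothetical name
        "imageRef": "IMAGE_UUID",   # placeholder image ID
        "flavorRef": "1",           # placeholder flavor ID
        "min_count": 2,             # boot at least 2...
        "max_count": 5,             # ...and at most 5 identical servers
    }
}
print(json.dumps(body, indent=2))
```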


On 06/28/2013 09:31 AM, Christopher Yeoh wrote:

Hi,

The following is a list of API extensions for which there are no plans 
to port. Please shout if you think any of them needs to be!


baremetal_nodes.py
os_networks.py
networks_associate.py
os_tenant_networks.py
virtual_interfaces.py
createserverext.py
floating_ip_dns.py
floating_ip_pools.py
floating_ips_bulk.py
floating_ips.py
cloudpipe.py
cloudpipe_update.py
volumes.py

Also, I'd like to propose that after H2 any new API extension submitted 
HAS to have a v3 version. That will give us enough time to ensure that 
the V3 API in Havana can do everything that the V2 one can, except where 
we explicitly don't want to support something.


For developers who have had new API extensions merged in H2 but 
haven't submitted a v3 version, I'd appreciate it if you could check 
the following etherpad to see if your extension is on the list and put 
it on there ASAP if it isn't there already:


https://etherpad.openstack.org/NovaV3ExtensionPortWorkList

I've tried to keep track of new API extensions to make sure we do v3 
ports but may have missed some.


Chris






Re: [openstack-dev] Move keypair management out of Nova and into Keystone?

2013-07-01 Thread Mauro S M Rodrigues

+1... makes sense to me, I always thought that was weird hehe.
Say the word and we'll remove it from v3.

On 07/01/2013 01:02 PM, Russell Bryant wrote:

On 07/01/2013 11:47 AM, Jay Pipes wrote:

Recently a colleague asked me whether their key pair from one of our
deployment zones would be usable in another deployment zone. His
identity credentials are shared between the two zones (we use a shared
identity database) and was wondering if the key pairs were also shared.

I responded that no, they were not, because Nova, not Keystone, manages
key pairs. But that got me thinking: is it time to change this?

Key pairs really are an element of identity/authentication, and not
specific to OpenStack Compute. Has there been any talk of moving the key
pair management API out of Nova and into Keystone?

I haven't heard any talk about it, but it does seem to make sense.



