Re: [openstack-dev] [Cinder] FFE for vmdk-storage-policy-volume-type

2014-03-07 Thread John Griffith
Shouldn't be a problem; I'll sort through the items, but I was looking
earlier and didn't see anything concerning.  All of them have been in
process for some time, so I think an exception is fair.
On Mar 6, 2014 10:40 AM, Subramanian subramanian.neelakan...@gmail.com
wrote:

 Hi,


 https://blueprints.launchpad.net/cinder/+spec/vmdk-storage-policy-volume-type

 This is a blueprint that I have been working on since Dec 2013, and as far
 as I remember it was targeted to icehouse-3. Just today I noticed that it
 was moved to future, so it must have fallen through the cracks for core
 reviewers. Is there a chance that this can still make it into icehouse?
 Given that the change is fairly isolated in the vmdk driver, and that the
 code across 4 patches [1] that implement this blueprint has been fairly
 well reviewed, can I request an FFE for this one?

 Thanks,
 Subbu

 [1]
 https://review.openstack.org/#/q/status:open+project:openstack/cinder+branch:master+topic:bp/vmdk-storage-policy-volume-type,n,z


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Cinder] Feature about volume delete protection

2014-03-06 Thread John Griffith
On Thu, Mar 6, 2014 at 9:13 PM, John Garbutt j...@johngarbutt.com wrote:

 On 6 March 2014 08:50, zhangyu (AI) zhangy...@huawei.com wrote:
  It seems to be an interesting idea. In fact, a China-based public IaaS,
 QingCloud, has provided a similar feature for their virtual servers.
 Within 2 hours after a virtual server is deleted, the server owner can
 decide whether or not to cancel this deletion and recycle that deleted
 virtual server.
 
  People make mistakes, and such a feature helps in urgent cases. Any
 ideas here?

 Nova has soft_delete and restore for servers. That sounds similar?

 John

 
  -Original Message-
  From: Zhangleiqiang [mailto:zhangleiqi...@huawei.com]
  Sent: Thursday, March 06, 2014 2:19 PM
  To: OpenStack Development Mailing List (not for usage questions)
  Subject: [openstack-dev] [Nova][Cinder] Feature about volume delete
 protection
 
  Hi all,
 
  Currently OpenStack provides a delete volume function to the user,
  but it seems there is no protection against an accidental delete
 operation by the user.
 
  As we know, the data in a volume may be very important and valuable,
  so it's better to provide a method for the user to guard against
 accidental volume deletion.
 
  Such as:
  We can provide a safe delete for the volume.
  The user can specify how long the volume's deletion will be delayed
 (before it is actually deleted) when he deletes the volume.
  Before the volume is actually deleted, the user can cancel the delete
 operation and recover the volume.
  After the specified time, the volume will actually be deleted by the
 system.
 
  Any thoughts? Any advice is welcome.
 
  Best regards to you.
 
 
  --
  zhangleiqiang
 
  Best Regards
 
 
 


I think a soft-delete for Cinder sounds like a neat idea.  You should file
a BP that we can target for Juno.
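To make the idea concrete, here's a rough sketch of a delayed-delete
lifecycle in Python (all names here are illustrative, not Cinder's actual
API; Nova's reclaim_instance_interval does something similar for servers):

    import datetime

    RECLAIM_WINDOW = datetime.timedelta(hours=2)   # operator-configurable

    class Volume(object):
        def __init__(self, volume_id):
            self.id = volume_id
            self.deleted_at = None        # set when soft-deleted

        def soft_delete(self):
            # Mark the volume for deletion instead of destroying it.
            self.deleted_at = datetime.datetime.utcnow()

        def restore(self):
            # Cancel a pending delete while still inside the window.
            if self.deleted_at is None:
                raise ValueError('volume %s is not pending delete' % self.id)
            self.deleted_at = None

        def reclaimable(self, now=None):
            # A periodic task would purge volumes past the window.
            now = now or datetime.datetime.utcnow()
            return (self.deleted_at is not None and
                    now - self.deleted_at >= RECLAIM_WINDOW)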

Thanks,
John
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] Do you think we should introduce the online-extend feature to cinder ?

2014-03-03 Thread John Griffith
On Mon, Mar 3, 2014 at 2:01 AM, Zhangleiqiang zhangleiqi...@huawei.comwrote:

 Hi, stackers:

  Libvirt/qemu have supported online extend for multiple disk
 formats, including qcow2, sparse, etc., but Cinder currently only
 supports offline volume extension.
 
  Offline extension forces the instance to be shut off or the
 volume to be detached. I think it would be useful to introduce an
 online-extend feature to Cinder, especially for the file-system-based
 drivers, e.g. NFS, GlusterFS, etc.
 
  Are there any other suggestions?

 Thanks.


 --
 zhangleiqiang

 Best Regards




Hi Zhangleiqiang,

So yes, there's a rough BP for this here: [1], and some of the folks from
the Trove team (pdmars on IRC) have actually started to dive into this.
Last I checked with him there were some sticking points on the Nova side,
but we should sync up with Paul; it's been a couple of weeks since I last
caught up with him.

Thanks,
John
[1]:
https://blueprints.launchpad.net/cinder/+spec/inuse-extend-volume-extension
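For reference, the libvirt primitive that makes an online grow possible is
blockResize(); a minimal sketch with the libvirt Python bindings (the
domain name and device path are made up for illustration):

    import libvirt

    NEW_SIZE_BYTES = 20 * 1024 ** 3          # grow the disk to 20 GiB

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByName('my-instance')   # hypothetical domain name
    # Without VIR_DOMAIN_BLOCK_RESIZE_BYTES the size is taken as KiB.
    dom.blockResize('/dev/vdb', NEW_SIZE_BYTES,
                    libvirt.VIR_DOMAIN_BLOCK_RESIZE_BYTES)
    conn.close()

The driver-side half (growing the backing file on NFS/GlusterFS) would
still be needed; this only covers what the guest sees.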
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] WARNING: ... This application has not enabled MySQL traditional mode, which means silent data corruption may occur - real issue?

2014-03-02 Thread John Griffith
On Sun, Mar 2, 2014 at 7:42 PM, Sean Dague s...@dague.net wrote:

 Coming in at slightly less than 1 million log lines in the last 7 days:

 http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiVGhpcyBhcHBsaWNhdGlvbiBoYXMgbm90IGVuYWJsZWQgTXlTUUwgdHJhZGl0aW9uYWwgbW9kZSwgd2hpY2ggbWVhbnMgc2lsZW50IGRhdGEgY29ycnVwdGlvbiBtYXkgb2NjdXJcIiIsImZpZWxkcyI6W10sIm9mZnNldCI6MCwidGltZWZyYW1lIjoiNjA0ODAwIiwiZ3JhcGhtb2RlIjoiY291bnQiLCJ0aW1lIjp7InVzZXJfaW50ZXJ2YWwiOjB9LCJzdGFtcCI6MTM5MzgxNDExMzcyOX0=

 This application has not enabled MySQL traditional mode, which means
 silent data corruption may occur

 This is being generated by *.openstack.common.db.sqlalchemy.session in
 at least nova, glance, neutron, heat, ironic, and savanna


 http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiVGhpcyBhcHBsaWNhdGlvbiBoYXMgbm90IGVuYWJsZWQgTXlTUUwgdHJhZGl0aW9uYWwgbW9kZSwgd2hpY2ggbWVhbnMgc2lsZW50IGRhdGEgY29ycnVwdGlvbiBtYXkgb2NjdXJcIiIsImZpZWxkcyI6W10sIm9mZnNldCI6MCwidGltZWZyYW1lIjoiNjA0ODAwIiwiZ3JhcGhtb2RlIjoiY291bnQiLCJ0aW1lIjp7InVzZXJfaW50ZXJ2YWwiOjB9LCJzdGFtcCI6MTM5MzgxNDExMzcyOSwibW9kZSI6InNjb3JlIiwiYW5hbHl6ZV9maWVsZCI6Im1vZHVsZSJ9


 At any rate, it would be good if someone who understood the details
 here could weigh in on whether this is really a true WARNING that
 needs to be fixed or, if it's not, just needs to be silenced.

 -Sean

 --
 Sean Dague
 Samsung Research America
 s...@dague.net / sean.da...@samsung.com
 http://dague.net



I came across this earlier this week when I was looking at this in Cinder.
I haven't completely gone into the details yet, but maybe Florian or Doug
have some insight?

https://bugs.launchpad.net/oslo/+bug/1271706
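For context, the warning is about MySQL's default sql_mode, which silently
truncates or coerces out-of-range data instead of raising errors. A minimal
sketch of how a service can opt in via SQLAlchemy (the connection URL is a
placeholder):

    import sqlalchemy
    from sqlalchemy import event

    engine = sqlalchemy.create_engine('mysql://user:pw@localhost/cinder')

    @event.listens_for(engine, 'connect')
    def _set_sql_mode(dbapi_conn, connection_rec):
        # TRADITIONAL mode makes MySQL raise errors instead of
        # silently mangling bad data on INSERT/UPDATE.
        cursor = dbapi_conn.cursor()
        cursor.execute("SET SESSION sql_mode = 'TRADITIONAL'")
        cursor.close()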
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [OpenStack-Dev] [Cinder] Open Source and community working together

2014-03-01 Thread John Griffith
Hey,

I just wanted to send out a quick note on a topic that came up recently.
Unfortunately, the folks that I'd most like to read this don't typically
participate on the ML, but I'd at least like to raise some community
awareness.

We all know OpenStack is growing at a rapid pace and has a lot of promise,
so much so that there's an enormous field of vendors and OS distributions
that are focusing a lot of effort and marketing on the project.

Something that came up recently in the Cinder project is that one of the
backend device vendors wasn't happy with a feature that somebody was
working on and had contributed a patch for.  Instead of providing a
meaningful review and suggesting alternatives to the patch, they set up
meetings with other vendors, leaving the active members of the community
out, and picked things apart in their own format out of the public view.
Nobody from the core Cinder team was involved in these discussions or
meetings (at least that I've been made aware of).

I don't want to go into detail about who, what, where, etc. at this point.
Instead, I want to point out that in my opinion this is no way to operate
in an Open Source community.  Collaboration is one thing, but ambushing
other people's work is entirely unacceptable.  OpenStack provides a
plethora of ways to participate and voice your opinion, whether it be this
mailing list or the IRC channels, which are monitored daily and also host
a published weekly meeting for most projects.  Of course, when in doubt
you're welcome to send me an email at any time with questions or concerns
that you have about a patch.  In any case, however, the proper way to
address concerns about a submitted patch is to provide a review for that
patch.

Everybody has a voice and the ability to participate, and the most
effective way to do that is by thorough, timely and constructive code
reviews.

I'd also like to point out that while a number of companies and vendors
have fancy taglines like "The Leaders of OpenStack," they're not.
OpenStack is a community effort; as of right now there is no company that
leads or runs OpenStack.  If you have issues or concerns on the
development side, you need to take those up with the development
community, not vendor xyz.

Thanks,
John
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] Unit test cases failing with error 'cannot import rpcapi'

2014-02-18 Thread John Griffith
On Tue, Feb 18, 2014 at 1:21 PM, iKhan ik.ibadk...@gmail.com wrote:
 Hi All,

 All cinder test cases are failing with the error 'cannot import rpcapi',
 though the same files work fine in a live cinder setup. I wonder what's
 going wrong when unit testing is triggered. Can anyone help me out here?

 --
 Thanks,
 IK


I just pulled a fresh clone and am not seeing any issues on my side.
Could it be a problem with your env?  Are you running in a venv?

John

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Dev] [Cinder] Cinder driver verification

2014-02-13 Thread John Griffith
On Thu, Feb 13, 2014 at 9:59 AM, Walter A. Boring IV
walter.bor...@hp.com wrote:
 On 02/13/2014 02:51 AM, Thierry Carrez wrote:

 John Griffith wrote:

 So we've talked about this a bit and had a number of ideas regarding
 how to test and show compatibility for third-party drivers in Cinder.
 This has been an eye opening experience (the number of folks that have
 NEVER run tempest before, as well as the problems uncovered now that
 they're trying it).

  I'm even more convinced now that having vendors run these tests is a
  good thing and should be required.  That being said, there's a ton of
  push back on my proposal to require that results from a successful
  run of the tempest tests accompany any new drivers submitted to
  Cinder.

  Could you describe the nature of the pushback? Is it that the tests are
  too deep and reject valid drivers? Is it that it's deemed unfair to
  block new drivers while the existing ones aren't better? Is it that
  it's difficult for them to run those tests and get a report? Or is it
  because they care more about having their name covered in mainline and
  not so much about having the code working properly?

 The consensus from the Cinder community for now is that we'll
 log a bug for each driver after I3, stating that it hasn't passed
 certification tests.  We'll then have a public record showing
 drivers/vendors that haven't demonstrated functional compatibility,
 and in order to close those bugs they'll be required to run the tests
 and submit the results to the bug in Launchpad.

 So, this seems to be the approach we're taking for Icehouse at least,
 it's far from ideal IMO, however I think it's still progress and it's
 definitely exposed some issues with how drivers are currently
 submitted to Cinder so those are positive things that we can learn
 from and improve upon in future releases.

  To add some controversy, and to keep the original intent of having only
  known tested and working drivers in the Cinder release, I am going to
  propose that any driver that has not submitted successful functional
  testing by RC1 be removed.  I'd at least like to see driver maintainers
  try... if a driver fails a test or two, that's something that can be
  discussed, but it seems that until now most drivers just flat out are
  not even being tested.

 I think there are multiple stages here.

  Stage 0: no one knows if drivers work
 Stage 1: we know the (potentially sad) state of the drivers that are in
 the release
 Stage 2: only drivers that pass tests are added, drivers that don't pass
 tests have a gap analysis and a plan to fix it
 Stage 3: drivers that fail tests are removed before release
 Stage 4: 3rd-party testing rigs must run tests on every change in order
 to stay in tree

 At the very minimum you should be at stage 1 for the Icehouse release,
 so I agree with your last paragraph. I'd recommend that you start the
 Juno cycle at stage 2 (for new drivers), and try to reach stage 3 for
 the end of the Juno release.

  I have to agree with Thierry here.  I think if we can get drivers to
  pass the tests in the Juno timeframe, then it's fine to remove them
  during Juno.
 I think the idea of having drivers run their code through tempest and work
 towards passing all of those tests is a great thing for Cinder and OpenStack
 in general.

 What I would do different for the Icehouse release is this:

  If a driver doesn't pass the certification test by Icehouse RC1, then
  we have a bug filed against the driver.  I would also put a warning
  message in the log for that driver saying that it doesn't pass the
  certification test.  I would not remove it from the codebase.

  Also:
  If a driver hasn't even run the certification test by RC1, then we mark
  the driver as uncertified and deprecated in the code and throw an error
  at driver init time.  We can have an option in cinder.conf that says
  ignore_uncertified_drivers=False (sketched below).  If an admin wants
  to ignore the error, they set the flag to True, and we let the driver
  init at next startup.  The admin then takes full responsibility for
  running uncertified code.
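  A minimal sketch of what that flag and init-time check could look like
  with oslo.config (illustrative only, not actual Cinder code):

      from oslo.config import cfg

      CONF = cfg.CONF
      CONF.register_opts([
          cfg.BoolOpt('ignore_uncertified_drivers',
                      default=False,
                      help='Allow uncertified volume drivers to start.'),
      ])

      class UncertifiedDriverError(Exception):
          pass

      def check_certification(driver_name, certified):
          # Refuse to init an uncertified driver unless the operator
          # has explicitly opted in via cinder.conf.
          if not certified and not CONF.ignore_uncertified_drivers:
              raise UncertifiedDriverError(
                  'Driver %s has not passed certification' % driver_name)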

  I think removing the drivers outright is premature for Icehouse, since
  the certification process is a new thing.  For Juno, we remove any
  drivers that are still marked as uncertified and haven't run the tests.

  I think the purpose of the tests is to get vendors to actually run
  their code through tempest and prove to the community that they are
  willing to show that they are fixing their code.  At the end of the
  day, it better serves the community and Cinder if we have many working
  drivers.

 My $0.02,
 Walt


I'm fine with all of the recommendations above; however, I do want to
point out that having your driver/device work in OpenStack should not
be something new to you.  That's what's so

Re: [openstack-dev] [OpenStack-Dev] [Cinder] Cinder driver verification

2014-02-13 Thread John Griffith
On Thu, Feb 13, 2014 at 10:30 AM, Dean Troyer dtro...@gmail.com wrote:
 On Thu, Feb 13, 2014 at 4:51 AM, Thierry Carrez thie...@openstack.org
 wrote:

 John Griffith wrote:
   To add some controversy, and to keep the original intent of having
   only known tested and working drivers in the Cinder release, I am
   going to propose that any driver that has not submitted successful
   functional testing by RC1 be removed.  I'd at least like to see
   driver maintainers try... if a driver fails a test or two, that's
   something that can be discussed, but it seems that until now most
   drivers just flat out are not even being tested.


 +1


 I think there are multiple stages here.

  Stage 0: no one knows if drivers work
 Stage 1: we know the (potentially sad) state of the drivers that are in
 the release
 Stage 2: only drivers that pass tests are added, drivers that don't pass
 tests have a gap analysis and a plan to fix it
 Stage 3: drivers that fail tests are removed before release
 Stage 4: 3rd-party testing rigs must run tests on every change in order
 to stay in tree

 At the very minimum you should be at stage 1 for the Icehouse release,
 so I agree with your last paragraph. I'd recommend that you start the
 Juno cycle at stage 2 (for new drivers), and try to reach stage 3 for
 the end of the Juno release.


  Are any of these drivers new for Icehouse?  I think adding broken
  drivers in Icehouse is a mistake.  The timing WRT the Icehouse release
  schedule is unfortunate, but so is shipping immature drivers that have
  to be supported and possibly deprecated.  Should new drivers that are
  lacking have some not-quite-supported status to allow them to be
  removed in Juno if not brought up to par?  Or moved into cinder/contrib?

Yes, there are a boatload of new drivers being added.


  I don't mean to be picking on Cinder here; this seems to be a recurring
  theme in OpenStack.  I think we benefit from strengthening the
  precedent that makes it harder to get things in that are not ready,
  even if the timing is inconvenient.  We're seeing this in project
  incubation, and I think we all benefit in the end.

 dt

 --

 Dean Troyer
 dtro...@gmail.com



I have another tack we can take on this in the interim.  I like the
contrib dir idea raised by Dean; a hybrid of that and the original
proposal is that we leave the certification optional but publish a
certified driver list.  We can also use the contrib idea with that, so
the contrib dir would denote drivers that are not officially certified.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [OpenStack-Dev] [Cinder] Cinder driver verification

2014-02-12 Thread John Griffith
Hey,

So we've talked about this a bit and had a number of ideas regarding
how to test and show compatibility for third-party drivers in Cinder.
This has been an eye opening experience (the number of folks that have
NEVER run tempest before, as well as the problems uncovered now that
they're trying it).

I'm even more convinced now that having vendors run these tests is a
good thing and should be required.  That being said, there's a ton of
push back on my proposal to require that results from a successful
run of the tempest tests accompany any new drivers submitted to
Cinder.  The consensus from the Cinder community for now is that we'll
log a bug for each driver after I3, stating that it hasn't passed
certification tests.  We'll then have a public record showing
drivers/vendors that haven't demonstrated functional compatibility,
and in order to close those bugs they'll be required to run the tests
and submit the results to the bug in Launchpad.

So, this seems to be the approach we're taking for Icehouse at least,
it's far from ideal IMO, however I think it's still progress and it's
definitely exposed some issues with how drivers are currently
submitted to Cinder so those are positive things that we can learn
from and improve upon in future releases.

To add some controversy, and to keep the original intent of having only
known tested and working drivers in the Cinder release, I am going to
propose that any driver that has not submitted successful functional
testing by RC1 be removed.  I'd at least like to see driver maintainers
try... if a driver fails a test or two, that's something that can be
discussed, but it seems that until now most drivers just flat out are
not even being tested.

Thanks,
John

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Grizzly volume quotas

2014-02-05 Thread John Griffith
On Wed, Feb 5, 2014 at 3:09 PM, Jay S Bryant jsbry...@us.ibm.com wrote:
 Joe,

 Ah!  So, those aren't for Cinder Volume but for nova-volume.  Ok, so there
 isn't really a bug then.


Yep, this is left over from when volumes were in nova.

 Sorry for speaking too quickly.  Thanks for the info!


 Jay S. Bryant
IBM Cinder Subject Matter ExpertCinder Core Member
 Department 7YLA, Building 015-2, Office E125, Rochester, MN
 Telephone: (507) 253-4270, FAX (507) 253-6410
 TIE Line: 553-4270
 E-Mail:  jsbry...@us.ibm.com
 
 All the world's a stage and most of us are desperately unrehearsed.
   -- Sean O'Casey
 



 From:Joe Gordon joe.gord...@gmail.com
 To:OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org,
 Date:02/05/2014 04:03 PM
 Subject:Re: [openstack-dev] Grizzly volume quotas
 



 On Wed, Feb 5, 2014 at 1:21 PM, Jay S Bryant jsbry...@us.ibm.com wrote:
 Pat,

 I see the same behavior on an Icehouse level install.  So, I think you may
 have found a bug.

  So the bug here isn't what you expect.
 
  First, a bit of background:
 
  * python-novaclient isn't part of the integrated release and needs to
  support most releases (not just the most recent).
  * python-novaclient doesn't have any mechanism to detect what commands
  a cloud supports and hide the other commands [this is the bug].
 
  So novaclient needs to support nova-volume, which is why we still
  have the volume quota options.


  I would open the bug against python-novaclient to start with, but it
  may end up coming back to Cinder.


 Jay S. Bryant
IBM Cinder Subject Matter ExpertCinder Core Member
 Department 7YLA, Building 015-2, Office E125, Rochester, MN
 Telephone: (507) 253-4270, FAX (507) 253-6410
 TIE Line: 553-4270
 E-Mail:  jsbry...@us.ibm.com
 
 All the world's a stage and most of us are desperately unrehearsed.
   -- Sean O'Casey
 



 From:Pat Bredenberg patrick.bredenb...@oracle.com
 To:openstack-dev@lists.openstack.org,
 Date:02/05/2014 03:05 PM
 Subject:[openstack-dev] Grizzly volume quotas
 



 Dear all,

 I'm part of the team bringing OpenStack to Solaris and am confused
 about how volume quotas appear according to nova(1).  We're using
 Grizzly 2013.1.4 for both Nova and Cinder; please let me know what other
 configuration information you need.  The raw data itself is available
 here: http://paste.openstack.org/show/62667/.
  Is it a bug that volumes appear as a configurable quota via
 nova(1), according to its help menu?  I'll apologize in advance if this
 has already been documented elsewhere and/or addressed in Havana or
 Icehouse.  I searched but didn't see it mentioned.  If it's a bug that
 has yet to be filed and should be addressed, please let me know and I'll
 gladly file the bug.  Otherwise, I'll chalk it up as a learning
 experience.  Your guidance is greatly appreciated.

 Very respectfully,
 Pat Bredenberg



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [PTL] Designating required use upstream code

2014-02-05 Thread John Griffith
On Wed, Feb 5, 2014 at 5:54 PM, Rochelle.RochelleGrober
rochelle.gro...@huawei.com wrote:
 On Wed, Feb 5, 2014 at 12:05 PM, Russell Bryant rbry...@redhat.com wrote:

 On 02/05/2014 11:22 AM, Thierry Carrez wrote:
 (This email is mostly directed to PTLs for programs that include one
 integrated project)

 The DefCore subcommittee from the OpenStack board of directors asked the
 Technical Committee yesterday about which code sections in each
  integrated project should be designated sections in the sense of [1]
  (code you're actually required to run or include to be allowed to use
  the trademark). That determines where you can run alternate code
  (think: substitute your own private hypervisor driver) and still be
  able to call the result openstack.

 [1] https://wiki.openstack.org/wiki/Governance/CoreDefinition

 PTLs and their teams are obviously the best placed to define this, so it
 seems like the process should be: PTLs propose designated sections to
 the TC, which blesses them, combines them and forwards the result to the
 DefCore committee. We could certainly leverage part of the governance
 repo to make sure the lists are kept up to date.

 Comments, thoughts ?


 The process you suggest is what I would prefer.  (PTLs writing proposals
 for TC to approve)

 Using the governance repo makes sense as a means for the PTLs to post
 their proposals for review and approval of the TC.



 +1



 +1



 Who gets final say if there's strong disagreement between a PTL and the
 TC?  Hopefully this won't matter, but it may be useful to go ahead and
 clear this up front.



 The Board has some say in this, too, right? The proposal [1] is for a set of
 tests to be proposed and for the Board to approve (section 8).



 What is the relationship between that test suite and the designated core
 areas? It seems that anything being tested would need to be designated as
 core. What about the inverse?



  The test suite should validate that the core
  capabilities/behaviors/functionality behave as expected (positive and
  negative testing in an integrated environment).  So, the test suites
  would need to be reviewed for applicability.  Maybe, like Gerrit, there
  would be voting and non-voting parts of tests based on whether
  something outside of core gets exercised in the process of running some
  tests.  Whatever the case, I doubt that the tests would generate a
  simple yes/no, but rather a score.  A discussion of one of the subsets
  of capabilities for Nova might start with the capabilities highlighted
  on this page:

 https://wiki.openstack.org/wiki/HypervisorSupportMatrix



  The test suite would need to exercise the capabilities in these sorts
  of matrices and might produce the A/B/C grades as the rest of the page
  elucidates.


Sorry, but I think this misses the point of the PTL request being made
here.  The question being asked is not "is the interface compatible";
it's quite possible for somebody to build a cloud without a single
piece of OpenStack code but still provide an OpenStack-compatible
interface and mimic its behaviors.  IMO compatibility tests already
exist for the most part via the Tempest test suite that we use to gate
on.  If I'm incorrect and that is in fact the goal, that's significantly
easier to solve IMO.

The question here as I understand it (and I may be confused again based
on the thread here) is: what parts of the code/modules are required to
be used in order for somebody building a cloud to say it's an OpenStack
cloud?  The cheat answer for me would be that you have to be using the
cinder-api, cinder-scheduler and cinder-volume services (regardless of
driver).  That raises the next layer of detail, though: do those
services have to be unmodified?  How much modification is acceptable,
etc.?  What about deployments that may use their own scheduler?

I think the direction the thread is taking here is that there really
isn't enough information to make this call, and there certainly isn't
enough understanding of the intent, meaning or ramifications.


 --Rocky



 Doug



 [1] https://wiki.openstack.org/wiki/Governance/CoreDefinition






 --
 Russell Bryant




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Cinder + taskflow

2014-02-03 Thread John Griffith
On Mon, Feb 3, 2014 at 1:53 PM, Joshua Harlow harlo...@yahoo-inc.com wrote:
 Hi all,

 After talking with John G. about taskflow in Cinder, and seeing more and
 more reviews showing up, I wanted to start a thread to gather all our
 lessons learned and how we can improve a little before continuing to add
 too many more refactorings and reviews (making sure everyone understands
 the larger goal and larger picture of switching pieces of cinder, piece
 by piece, to taskflow).

 Just to catch everyone up.

 Taskflow started integrating with Cinder in Havana, and there has been
 some continued work around these changes:

 - https://review.openstack.org/#/c/58724/
 - https://review.openstack.org/#/c/66283/
 - https://review.openstack.org/#/c/62671/

 There have also been a few other pieces of work going in (forgive me if
 I missed any...):

 - https://review.openstack.org/#/c/64469/
 - https://review.openstack.org/#/c/69329/
 - https://review.openstack.org/#/c/64026/

 I think now would be a good time (and it seems like a good idea) to
 start a discussion to learn how people are using taskflow: common
 patterns people like and don't like, common refactoring idioms that are
 occurring, and most importantly how to make sure that we refactor with a
 purpose and not just refactor for refactoring's sake (which can be
 harmful if not done correctly).  So, to get a kind of forward and
 unified momentum behind further adjustments, I'd just like to make sure
 we are all aligned on, and understand, the benefits and yes even the
 drawbacks that these refactorings bring.

 So here is my little list of benefits:
 
 - Objects that do just one thing; see the toy example below (a common
 pattern I am seeing is determining what the one thing is, without making
 it so granular that it's hard to read).
 - Combining these objects together in a well-defined way (once again, it
 has to be carefully done to avoid too much granularity).
 - The ability to test these tasks and flows via mocking (something that
 is harder when the code isn't split up like this).
 - Features that aren't currently used, such as state persistence (which
 will help cinder become more crash-resistant in the future).
   - This one will itself need to be understood before doing [I started an
 etherpad @ https://etherpad.openstack.org/p/cinder-taskflow-persistence
 for this].
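 For anyone new to taskflow, a toy example of the task/flow pattern
 discussed above (not Cinder's actual create_volume flow) looks roughly
 like this:

     import taskflow.engines
     from taskflow.patterns import linear_flow
     from taskflow import task

     class CreateVolumeTask(task.Task):
         def execute(self, size):
             print('allocating %d GB' % size)
             return 'vol-001'

         def revert(self, size, **kwargs):
             # Run automatically if a later task in the flow fails.
             print('rolling back allocation')

     class ExportVolumeTask(task.Task):
         def execute(self, volume_id):
             print('exporting %s' % volume_id)

     flow = linear_flow.Flow('create_volume').add(
         CreateVolumeTask(provides='volume_id'),
         ExportVolumeTask(),
     )
     taskflow.engines.run(flow, store={'size': 10})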

 List of drawbacks (or potential drawbacks):
 
 - Having an understanding of what taskflow is doing adds a new layer of
 things to know (hopefully the docs help in this area; that was their
 goal).
 - Selecting too granular a task or flow makes it harder to
 follow/understand the task/flow logic.
 - It focuses on long-term (not necessarily short-term) state-management
 concerns (can't refactor Rome in a day).
 - Taskflow is being developed at the same time as cinder.

 I'd be very interested in hearing about others' experiences, and in
 making sure that we discuss the changes (in a well-documented and
 agreed-on approach) before jumping too far into the 'deep end' with a
 large amount of refactoring (aka, refactoring with a purpose).  Let's
 make this thread as useful as we can and try to see how we can unify
 all these refactorings behind a common (and documented and agreed-on)
 purpose.
 
 A thought: for the reviews above, I think it would be very useful to
 etherpad/write up more in the blueprint what the 'refactoring with a
 purpose' is, so that it's more known to future readers (and to active
 reviewers); hopefully this email can start to help clarify that purpose
 so that things proceed as smoothly as possible.

 -Josh



Thanks for putting this together Josh, I just wanted to add a couple
of things from my own perspective.

The end goals of taskflow (specifically persistence and better state
management) are the motivating factors for going this route.  We've
made a first step with create_volume; however, we haven't advanced it
enough to realize the benefits that we set out to gain in the first
place.  I still think it's the right direction, and IMO we should keep
on the path; however, there are a number of things I've noticed that
make me lean towards refraining from moving other API calls to taskflow
right now.

1. Currently taskflow is pretty much a functionally equivalent
replacement of what was in the volume manager.  We're not really
gaining that much from it (yet).

2. Taskflow adds quite a bit of code and indirection that currently,
IMHO, adds complexity and difficulty in troubleshooting (I think we're
fixing this up and it will continue to get better; I also think this is
normal for the introduction of new implementations, no criticism
intended).

3. Our unit testing / mock infrastructure is broken right now for
items that use taskflow.  In particular, cinder.test.test_volume cannot
be run independently until we fix the taskflow fakes and mock objects.
 

Re: [openstack-dev] Proposed Logging Standards

2014-01-27 Thread John Griffith
On Mon, Jan 27, 2014 at 6:07 AM, Sean Dague s...@dague.net wrote:
 Back at the beginning of the cycle, I pushed for the idea of doing some
 log harmonization, so that the OpenStack logs, across services, made
 sense. I've pushed a proposed changes to Nova and Keystone over the past
 couple of days.

 This is going to be a long process, so right now I want to just focus on
 making INFO level sane, because as someone that spends a lot of time
 staring at logs in test failures, I can tell you it currently isn't.

 https://wiki.openstack.org/wiki/LoggingStandards is a few things I've
 written down so far, comments welcomed.
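 For a flavor of the kind of guideline in question (my illustration, not
 the wiki's wording): INFO should read as a clean operator-facing
 narrative, with developer detail pushed down to DEBUG:

     import logging

     logging.basicConfig(level=logging.INFO)
     LOG = logging.getLogger('cinder.volume')

     vol_id = 'vol-001'
     LOG.debug('%s: entering copy loop', vol_id)    # developer detail only
     LOG.info('%s: created successfully', vol_id)   # operator-relevant event
     LOG.warning('%s: retrying attach', vol_id)     # unexpected but handled
     LOG.error('%s: attach failed', vol_id)         # an operation failed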

 We kind of need to solve this set of recommendations once and for all up
 front, because negotiating each change, with each project, isn't going
 to work (e.g - https://review.openstack.org/#/c/69218/)

 What I'd like to find out now:

 1) who's interested in this topic?
 2) who's interested in helping flesh out the guidelines for various log
 levels?
 3) who's interested in helping get these kinds of patches into various
 projects in OpenStack?
 4) which projects are interested in participating (i.e. interested in
 prioritizing landing these kinds of UX improvements)

 This is going to be progressive and iterative. And will require lots of
 folks involved.

 -Sean

 --
 Sean Dague
 Samsung Research America
 s...@dague.net / sean.da...@samsung.com
 http://dague.net



Very interested in all of the above.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [All] Code proposal deadline for Icehouse

2014-01-24 Thread John Griffith
On Fri, Jan 24, 2014 at 8:26 AM, Russell Bryant rbry...@redhat.com wrote:
 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA1

 On 01/23/2014 08:31 PM, Michael Basnight wrote:

 On Jan 23, 2014, at 5:10 PM, Mark McClain wrote:


 On Jan 23, 2014, at 5:02 PM, Russell Bryant rbry...@redhat.com
 wrote:

 Greetings,

  Last cycle we had a feature proposal deadline across some
  projects. This was the date that code associated with
  blueprints had to be posted for review to make the release.
  This was in advance of the official feature freeze (merge
  deadline).

 Last time this deadline was used by 5 projects across 3
 different dates [1].

 I would like to add a deadline for this again for Nova.  I'm
 thinking 2 weeks ahead of the feature freeze right now, which
 would be February 18th.

 I'm wondering if it's worth coordinating on this so the
 schedule is less confusing.  Thoughts on picking a single date?
 How's Feb 18?

 I like the idea of selecting a single date. Feb 18th fits with
 the timeline the Neutron team has used in the past.

  So, Feb 19~21 is the Trove mid-cycle sprint, which means we might
  push last-minute finishing touches on things during those 3 days.
  I'd prefer the following week of Feb if at all possible. Otherwise I'm
  ok w/ FFEs and such if I'm in the minority, because I do think a
  single date would be best for everyone.

 So, +0 from trove. :D

  That makes sense.  It's worth saying that if we have this deadline,
  every PTL should be able to grant exceptions on a case-by-case basis.
  I think things getting finished up in your meetup is a good case for
  a set of exceptions.

 - --
 Russell Bryant
 -BEGIN PGP SIGNATURE-
 Version: GnuPG v1
 Comment: Using GnuPG with Thunderbird - http://www.enigmail.net/

 iEYEARECAAYFAlLihg0ACgkQFg9ft4s9SAYbJwCffD0hFkNvHgl6+S0U4ez4VLKQ
 TlkAoIvNzuv3YazKo2Y0cFAnh6WLPWR2
 =k5bu
 -END PGP SIGNATURE-


I'm in agreement with trying out coordination of the dates this time around.

I am concerned about the date (Feb 18) given the issues we've had with
the Gate, etc.  It feels a bit early at just over three weeks out,
especially now that we've punted most of our I2 blueprints.

I'm on board though, and the 18th isn't unreasonable.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Gate Status - Friday Edition

2014-01-24 Thread John Griffith
On Fri, Jan 24, 2014 at 11:37 AM, Clay Gerrard clay.gerr...@gmail.com wrote:


 That's a pretty high rate of failure, and really needs investigation.


  That's a great point; did you look into the logs of any of those jobs?
  Thanks for bringing it to my attention.
 
  I saw a few swift tests that would pop; I'll open bugs to look into
  those.  But the cardinality of the failures (7) was dwarfed by Jenkins
  failures I don't quite understand.

 [EnvInject] - [ERROR] - SEVERE ERROR occurs: java.lang.InterruptedException
 (e.g.
 http://logs.openstack.org/86/66986/3/gate/gate-swift-python27/2e6a8fc/console.html)

 FATAL: command execution failed | java.io.InterruptedIOException (e.g.
 http://logs.openstack.org/84/67584/5/gate/gate-swift-python27/4ad733d/console.html)

  These jobs are blowing up while setting up the workspace on the slave,
  and we're not automatically retrying them?  How can this only be
  affecting swift?

It's certainly not just swift:

http://logstash.openstack.org/#eyJzZWFyY2giOiJcImphdmEuaW8uSW50ZXJydXB0ZWRJT0V4Y2VwdGlvblwiIiwiZmllbGRzIjpbXSwib2Zmc2V0IjowLCJ0aW1lZnJhbWUiOiI2MDQ4MDAiLCJncmFwaG1vZGUiOiJjb3VudCIsInRpbWUiOnsidXNlcl9pbnRlcnZhbCI6MH0sInN0YW1wIjoxMzkwNTg5MTg4NjY5fQ==


 -Clay



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Changes coming in gate structure

2014-01-22 Thread John Griffith
On Wed, Jan 22, 2014 at 1:39 PM, Sean Dague s...@dague.net wrote:
 
 Changes coming in gate structure
 

 Unless you've been living under a rock, on the moon, around Saturn,
 you'll have noticed that the gate has been quite backed up the last 2
  weeks. Every time we get towards a milestone this gets measurably
  worse, and the expectation is that at i3 we're going to see at least
  40% more load than we are dealing with now (if history is any
  indication), which doesn't bode well.

 It turns out, when you have a huge and rapidly growing Open Source
 project, you keep finding scaling limits in existing software, your
 software, and approaches in general. It also turns out that you find
 out that you need to act defensively on situations that you didn't
 think you'd have to worry about. Like code reviews with 3 month old
 test results being put into the review queue. Or code that *can't*
 pass (which a look at the logs would show) being reverified in the
 gate.

  All of these things compound on the fact that there are real bugs in
  OpenStack, which end up having a non-linear failure effect. Once you
  get past a certain point the failure rates multiply to the point where
  everything stops (which happened Sunday, when we only merged 4 changes
  in 24 hrs).

 The history of the gate structure is a long one. It was added in
 Diablo when there was a project which literally would not run with
 the other OpenStack components. The idea of gating merge of everything
 on everything else is to ensure we have some understanding that
 OpenStack actually works, all together, for some set of
 configurations.

  It wasn't until the Folsom cycle that we started running these tests
  before human review (kind of amazing).

  The gate is also based on an assumption that most of the bugs we are
  catching come from outside a project, vs. bugs that are already in the
  project. However, in an asynchronous system, bugs can show up only
  very occasionally, and get past our best efforts to detect them, then
  pile up in the code base until we root them out.

 =
 Towards a Svelter Gate - Leaning on Check
 =

 We've got a current plan of attack to try to maintain nearly the same
 level of integration test guarantees, and hope to make it so on the
 merge side we're able to get more throughput. This is a set of things
 that all have to happen at once to not completely blow out the
 guarantees we've got in the source.

 Make a clean recent Check prereq for entering gate
 ==

  A huge compounding problem has been patches that can't pass being
  promoted to the gate. So we're going to make Zuul able to enforce a
  recent clean check scorecard before going into the gate. Our working
  theory of recent is the last 24 hrs (sketched below).

 If it doesn't have a recent set of check results on +A, we'll trigger
 a check rerun, and if clean, it gets sent to the gate.

 We'll also probably add a sweeper to zuul so it will refresh results
 on changes that are getting comments on them that are older than some
 number of days automatically.
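  Illustratively (this is not Zuul's actual code), the decision on +A
  boils down to something like:

      import collections
      import datetime

      CheckResult = collections.namedtuple('CheckResult',
                                           'passed finished_at')
      RECENT = datetime.timedelta(hours=24)

      def gate_action(result, now=None):
          # What to do when a change gets approved (+A).
          now = now or datetime.datetime.utcnow()
          if result is None or now - result.finished_at > RECENT:
              return 'recheck'   # stale or missing: rerun check first
          if not result.passed:
              return 'reject'    # failing results never enter the gate
          return 'enqueue'       # recent and clean: into the gate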

  Svelte Gate
 ==

  The gate jobs will be trimmed down immensely. Nothing project
  specific, so pep8 / unit tests all ripped out, no functional test
  runs. Fewer overall configs. Exactly how minimal we'll figure out as
  we decide what we can live without. The floor for this would be
  devstack-tempest-full and grenade.

  This is basically a sanity check that the combination of patches in
  flight doesn't ruin the world for everyone.

 Idle Cloud for Elastic Recheck Bugs
 ===

  We have actually been using the gate for double duty: both ensuring
  integration, and providing a set of clean test results to figure out
  which bugs in OpenStack only show up from time to time. The check
  queue is way too noisy, as our system actually blocks tons of
  bad code from getting in.

  With the Svelte gate, we'll need a set of background nodes to build
  that dataset. But with Elasticsearch we now have the technology, so
  this is good.

  It will let us work these issues in parallel. These issues will still
  cause people pain in getting clean results in check.

 =
 Timelines, Dangers, and Opportunities
 =

  We need changes soon. Past experience says milestone 3 is 40% heavier
  than milestone 2, and nothing indicates that icehouse is going to be
  any different. So Jim's put getting these required bits into Zuul at
  the top of his list, and we're hoping we'll have them within a week.

  With this approach, wedging the gate is highly unlikely. However, as
  we won't be testing every check test again in the gate, it means there
  is a possibility that a combination of patches might make the check
  results wedge for everyone (like the pg job getting wedged). So it
  moves that issue around. Right now it's hard to say 

Re: [openstack-dev] [OpenStack-Dev] Cherry picking commit from oslo-incubator

2014-01-21 Thread John Griffith
On Tue, Jan 21, 2014 at 11:14 AM, Joe Gordon joe.gord...@gmail.com wrote:

 On Jan 17, 2014 12:24 AM, Flavio Percoco fla...@redhat.com wrote:

 On 16/01/14 17:32 -0500, Doug Hellmann wrote:

 On Thu, Jan 16, 2014 at 3:19 PM, Ben Nemec openst...@nemebean.com
 wrote:

On 2014-01-16 13:48, John Griffith wrote:

Hey Everyone,

    A review came up today that cherry-picked a specific commit to
    oslo-incubator, without updating the rest of the files in the
    module.  I rejected that patch, because my philosophy has been that
    when you update/pull from oslo-incubator it should be done as a
    full sync of the entire module, not a cherry pick of the bits and
    pieces that you may or may not be interested in.

    As it turns out I've received a bit of push back on this, so it
    seems maybe I'm being unreasonable, or that I'm mistaken in my
    understanding of the process here.  To me it seems like a complete
    and total waste to have an oslo-incubator and common libs if you're
    going to turn around and just cherry pick changes, but maybe I'm
    completely out of line.

    Thoughts??


   I suppose there might be exceptions, but in general I'm with you.
   For one thing, if someone tries to pull out a specific change in the
   Oslo code, there's no guarantee that code even works.  Depending on
   how the sync was done, it's possible the code they're syncing never
   passed the Oslo unit tests in the form being synced, and since unit
   tests aren't synced to the target projects, it's conceivable that
   completely broken code could get through Jenkins.

   Obviously it's possible to do a successful partial sync, but for the
   sake of reviewer sanity I'm -1 on partial syncs without a _very_
   good reason (like it's blocking the gate and there's some reason the
   full module can't be synced).


  I agree. Cherry-picking a single (or even partial) commit really
  should be avoided.

  The update tool does allow syncing just a single module, but that
  should be used very VERY carefully, especially because some of the
  changes we're making as we work on graduating some more libraries
  will include cross-dependent changes between oslo modules.


 Agreed. Syncing on master should be a complete synchronization from
 oslo-incubator. IMHO, the only case where cherry-picking from oslo
 should be allowed is when backporting patches to stable branches.
 Master branches should try to keep up-to-date with Oslo and sync
 everything every time.

 When we started the Oslo incubator, we treated that code as trusted.
 But since then there have been occasional issues when syncing the code,
 so Oslo incubator code has lost *my* trust. Therefore I am always
 hesitant to do a full Oslo sync, because I am not an expert on the Oslo
 code and I risk breaking something when doing it (and the issue may not
 appear 100% of the time, either). Syncing code in becomes the first
 time that code is run against tempest, which scares me.

Understood and agreed, but frankly this defeats the intended purpose
IMO.  If we're going to go this route and never make them true libs,
commonly shared/used, then we're not gaining anything at all.

We might as well go back to maintaining our own versions in each
project and copying/pasting fixes around.  In essence that's exactly
what you end up with in this situation.

 I would like to propose having an integration test job in Oslo
 incubator that syncs in the code, similar to how we do global
 requirements.

 Additionally, what about a periodic Jenkins job that does the Oslo
 syncs and is managed by the Oslo team itself?

Sure, that's a cool idea... once we get through an initial sync I
think it's doable.  As you've pointed out, however, there are some
challenges with projects that have been doing cherry picks or just
ignoring updates for any length of time.



 With that in mind, I'd like to request that project members do
 periodic syncs from the Oslo incubator. Yes, it is tedious, painful
 and sometimes requires more than just syncing, but we should all try
 to keep up-to-date with Oslo. The main reason why I'm asking this is
 precisely stable branches. If a project stays way behind
 oslo-incubator, it'll be really painful to backport patches to stable
 branches in case of failures.

I'd agree with this for sure


 Unfortunately, there are projects that are quite behind from
 oslo-incubator master.

 One last comment. FWIW, backwards compatibility is always considered
 in all Oslo reviews and if there's a crazy-breaking change, it's
 always notified.

I agree there are challenges here, some due to being too far out of
date, etc.  I also agree that a lot goes into backward compat; however,
I think the emphasis on this, and what drives a significant number of
the changes, is Nova-specific.  I also think that's where most people's
focus stops; I don't think there's significant checking or testing
across other projects

Re: [openstack-dev] Cinder unit test failure

2014-01-21 Thread John Griffith
On Tue, Jan 21, 2014 at 4:06 PM, Jay Pipes jaypi...@gmail.com wrote:
 On Tue, 2014-01-21 at 08:29 +0530, iKhan wrote:
 I am worried about which one is better in terms of performance:
 iniparse or ConfigParser?
 
 I am aware iniparse will do a better job of maintaining the INI file's
 structure, but I am more interested in performance.

 The parsing of INI files is the last thing you should be worried about
 from a performance perspective. It's not as if your project is built
 around the constant parsing and writing of INI files. It's done once on
 startup. Who cares about the performance of this?

 Best,
 -jay


I have to second Jay's comments. I haven't seen what you're doing, but
if you're constantly parsing an INI file, lord help us :)

I'd really focus on getting the unit tests in and verifying that the
tempest tests all work properly.
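For what it's worth, a parse-once pattern with the stdlib ConfigParser
looks like this (the path and option names are just for illustration):

    import ConfigParser   # stdlib on Python 2

    parser = ConfigParser.SafeConfigParser()
    parser.read(['/etc/cinder/cinder.conf'])   # parsed once, at startup

    # Read values once and pass them around; never re-parse per request.
    if parser.has_option('DEFAULT', 'iscsi_helper'):
        iscsi_helper = parser.get('DEFAULT', 'iscsi_helper')
    else:
        iscsi_helper = 'tgtadm'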



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Cinder unit test failure

2014-01-20 Thread John Griffith
On Mon, Jan 20, 2014 at 10:07 AM, iKhan ik.ibadk...@gmail.com wrote:
 Hi,

  I have imported iniparse into my cinder code; it works fine when I run
  it. But when I run the unit tests, they fail while importing iniparse
  with "No module named iniparse". Do I have to take care of something
  here?

 --
 Thanks,
 Ibad Khan
 9686594607



It sounds like it's not installed in your environment.  You'd need to
do a pip install iniparse, but if you're adding this to your unit tests
you'll need to have a look at the common test-requirements file.  Also
keep in mind that if your driver is going to rely on it, you'll need it
in requirements.  We can work through the details via IRC if you like.

John

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Cinder unit test failure

2014-01-20 Thread John Griffith
On Mon, Jan 20, 2014 at 10:30 AM, iKhan ik.ibadk...@gmail.com wrote:
 Thanks John,

  It worked earlier while executing because iniparse was installed,
  though it wasn't present in the virtual environment. Installing
  iniparse via pip did work. Since I didn't install iniparse
  specifically, I was under the impression it was there by default.
  Probably now I have to take care of this in test-requirements.txt as
  you mentioned.
 
  I wonder if there is an alternative to iniparse available by default.

 Regards


 On Mon, Jan 20, 2014 at 10:47 PM, John Griffith
 john.griff...@solidfire.com wrote:

 On Mon, Jan 20, 2014 at 10:07 AM, iKhan ik.ibadk...@gmail.com wrote:
  Hi,
 
  I have imported iniparse to my cinder code, it works fine when I perform
  execution. But when I run the unit test, it fails while importing
  iniparse.
  It says No module named iniparse. Do I have to take care of something
  here?
 
  --
  Thanks,
  Ibad Khan
  9686594607
 
 

 It sounds like it's not installed on your system.  You'd need to do a
 pip install iniparse, but if you're adding this to your unit tests
 you'll need to have a look at the common test-requires file.  Also
 keep in mind if your driver is going to rely on it you'll need it in
 requirements.  We can work through the details via IRC if you like.

 John





 --
 Thanks,
 Ibad Khan
 9686594607



There is: check out openstack.common.iniparser. Not sure if it'll fit
your needs or not.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Cinder unit test failure

2014-01-20 Thread John Griffith
On Mon, Jan 20, 2014 at 11:15 AM, John Griffith
john.griff...@solidfire.com wrote:
 On Mon, Jan 20, 2014 at 10:30 AM, iKhan ik.ibadk...@gmail.com wrote:
 Thanks John,

  It worked earlier while executing because iniparse was installed,
  though it wasn't present in the virtual environment. Installing
  iniparse via pip did work. Since I didn't install iniparse
  specifically, I was under the impression it was there by default.
  Probably now I have to take care of this in test-requirements.txt as
  you mentioned.
 
  I wonder if there is an alternative to iniparse available by default.

 Regards


 On Mon, Jan 20, 2014 at 10:47 PM, John Griffith
 john.griff...@solidfire.com wrote:

 On Mon, Jan 20, 2014 at 10:07 AM, iKhan ik.ibadk...@gmail.com wrote:
  Hi,
 
   I have imported iniparse into my cinder code; it works fine when I
   run it. But when I run the unit tests, they fail while importing
   iniparse with "No module named iniparse". Do I have to take care of
   something here?
 
  --
  Thanks,
  Ibad Khan
  9686594607
 
 

  It sounds like it's not installed in your environment.  You'd need to
  do a pip install iniparse, but if you're adding this to your unit
  tests you'll need to have a look at the common test-requirements file.
  Also keep in mind that if your driver is going to rely on it, you'll
  need it in requirements.  We can work through the details via IRC if
  you like.

 John





 --
 Thanks,
 Ibad Khan
 9686594607



  There is: check out openstack.common.iniparser. Not sure if it'll fit
  your needs or not.
DOH!!  Disregard that

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Dev] Cherry picking commit from oslo-incubator

2014-01-17 Thread John Griffith
On Fri, Jan 17, 2014 at 1:15 AM, Flavio Percoco fla...@redhat.com wrote:
 On 16/01/14 17:32 -0500, Doug Hellmann wrote:

 On Thu, Jan 16, 2014 at 3:19 PM, Ben Nemec openst...@nemebean.com wrote:

On 2014-01-16 13:48, John Griffith wrote:

Hey Everyone,

    A review came up today that cherry-picked a specific commit to
    oslo-incubator, without updating the rest of the files in the
    module.  I rejected that patch, because my philosophy has been that
    when you update/pull from oslo-incubator it should be done as a
    full sync of the entire module, not a cherry pick of the bits and
    pieces that you may or may not be interested in.

    As it turns out I've received a bit of push back on this, so it
    seems maybe I'm being unreasonable, or that I'm mistaken in my
    understanding of the process here.  To me it seems like a complete
    and total waste to have an oslo-incubator and common libs if you're
    going to turn around and just cherry pick changes, but maybe I'm
    completely out of line.

    Thoughts??


I suppose there might be exceptions, but in general I'm with you.  For
 one
thing, if someone tries to pull out a specific change in the Oslo code,
there's no guarantee that code even works.  Depending on how the sync
 was
done it's possible the code they're syncing never passed the Oslo unit
tests in the form being synced, and since unit tests aren't synced to
 the
target projects it's conceivable that completely broken code could get
through Jenkins.

Obviously it's possible to do a successful partial sync, but for the
 sake
of reviewer sanity I'm -1 on partial syncs without a _very_ good reason
(like it's blocking the gate and there's some reason the full module
 can't
be synced).


 I agree. Cherry picking a single (or even partial) commit really should be
 avoided.

 The update tool does allow syncing just a single module, but that should
 be
 used very VERY carefully, especially because some of the changes we're
 making
 as we work on graduating some more libraries will include cross-dependent
 changes between oslo modules.


 Agreed. Syncing on master should be a complete synchronization from Oslo
 incubator. IMHO, the only case where cherry-picking from oslo should
 be allowed is when backporting patches to stable branches. Master
 branches should try to keep up-to-date with Oslo and sync everything
 every time.

 With that in mind, I'd like to request that project members do
 periodic syncs from Oslo incubator. Yes, it is tedious, painful and
 sometimes requires more than just syncing, but we should all try to
 keep up-to-date with Oslo. The main reason why I'm asking this is
 precisely stable branches. If the project stays way behind the

Fully agree here; it's something we started in Cinder but it sort of died
off and met some push-back (some of that admittedly was from myself at
the beginning).  It is something that we need to look at again though,
if nothing else to prevent falling so far behind that when we do need
a fix/update it's not a monumental undertaking to make it happen.

 oslo-incubator, it'll be really painful to backport patches to stable
 branches in case of failures.

 Unfortunately, there are projects that are quite behind from
 oslo-incubator master.

 One last comment. FWIW, backwards compatibility is always considered
 in all Oslo reviews and if there's a crazy-breaking change, it's
 always flagged.

 Thankfully, this all will be alleviated with the libs that are being
 pulled out from the incubator. The syncs will contain fewer modules
 and will be smaller.


 I'm happy you brought this up now. I was meaning to do it.

 Cheers,
 FF


 --
 @flaper87
 Flavio Percoco

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Dev] Third party testing

2014-01-17 Thread John Griffith
On Fri, Jan 17, 2014 at 1:15 AM, Robert Collins
robe...@robertcollins.net wrote:
 On 16 January 2014 14:51, John Griffith john.griff...@solidfire.com wrote:
 On Wed, Jan 15, 2014 at 6:41 PM, Michael Still mi...@stillhq.com wrote:
 John -- I agree with you entirely here. My concern is more that I
 think the CI tests need to run more frequently than weekly.

 Completely agree, but I guess in essence to start these aren't really
 CI tests.  Instead it's just a public health report for the various
 drivers vendors provide.  I'd love to see a higher frequency, but some
 of us don't have the infrastructure to try and run a test against
 every commit.  Anyway, I think there's HUGE potential for growth and
 adjustment as we go along.  I'd like to get something in place to
 solve the immediate problem first though.

 You say you don't have the infrastructure - whats missing? What if you
 only ran against commits in the cinder trees?

Maybe this is going a bit sideways, but my point was that making a
first step of getting periodic runs on vendor gear and publicly
submitting those results would be a good starting point and a
SIGNIFICANT improvement over what we have today.

It seems to me that requiring every vendor to have a deployment in
house dedicated and reserved 24/7 might be a tough order right out of
the gate.  That being said, of course I'm willing and able to do that
for my employer, but feedback from others hasn't been quite so
amenable.

The feedback here seems significant enough that maybe gating every
change is the way to go though.  I'm certainly willing to opt in to
that model and get things off the ground.  I do have a couple of
concerns (number 3 being the most significant):

1. I don't want ANY commit/patch waiting for a Vendors infrastructure
to run a test.  We would definitely need a timeout mechanism or
something along those lines to ensure none of this disrupts the gate

2. Isolating this to changes in Cinder seems fine, the intent was
mostly a compatibility / features check.  This takes it up a notch and
allows us to detect when something breaks right away which is
certainly a good thing.

3. Support and maintenance is a concern here.  We have a first rate
community that ALL pull together to make our gating and infrastructure
work in OpenStack.  Even with that it's still hard for everybody to
keep up due to the number of projects and simply the volume of patches that
go in on a daily basis.  There's no way I could do my regular jobs
that I'm already doing AND maintain my own fork/install of the
OpenStack gating infrastructure.

4. Despite all of the heavyweight corporations throwing resource after
resource at OpenStack, keep in mind that it is an Open Source
community still.  I don't want to do ANYTHING that would make it at all
unfriendly to folks who would like to commit.  Keep in mind that
vendors here aren't necessarily all large corporations, or even all
paid for proprietary products.  There are open source storage drivers
for example in Cinder and they may or may not have any of the
resources to make this happen but that doesn't mean they should not be
allowed to have code in OpenStack.

The fact is that there are drivers/devices
that flat out don't work and end users (heck even some vendors that
choose not to test) don't know this until they've purchased a bunch of
gear and tried to deploy their cloud.  What I was initially proposing
here was just a more formal public and community representation of
whether a device works as it's advertised or not.

Please keep in mind that my proposal here was a first step sort of
test case.  Rather than start with something HUGE like deploying the
OpenStack CI in every vendor's lab to test every commit (and I'm sorry
for those that don't agree but that does seem like a SIGNIFICANT
undertaking), why not take incremental steps to make things better and
learn as we go along?



 To be honest I'd even be thrilled just to see every vendor publish a
 passing run against each milestone cut.  That in and of itself would
 be a huge step in the right direction in my opinion.

 -Rob

 --
 Robert Collins rbtcoll...@hp.com
 Distinguished Technologist
 HP Converged Cloud

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Dev] Third party testing

2014-01-17 Thread John Griffith
On Fri, Jan 17, 2014 at 6:24 PM, Robert Collins
robe...@robertcollins.net wrote:
 On 18 January 2014 06:42, John Griffith john.griff...@solidfire.com wrote:
 On Fri, Jan 17, 2014 at 1:15 AM, Robert Collins
 robe...@robertcollins.net wrote:

 Maybe this is going a bit sideways, but my point was that making a
 first step of getting periodic runs on vendor gear and publicly
 submitting those results would be a good starting point and a
 SIGNIFICANT improvement over what we have today.

 It seems to me that requiring every vendor to have a deployment in
 house dedicated and reserved 24/7 might be a tough order right out of
 the gate.  That being said, of course I'm willing and able to do that
 for my employer, but feedback from others hasn't been quite so
 amenable.

 The feedback here seems significant enough that maybe gating every
 change is the way to go though.  I'm certainly willing to opt in to
 that model and get things off the ground.  I do have a couple of
 concerns (number 3 being the most significant):

 1. I don't want ANY commit/patch waiting for a Vendors infrastructure
 to run a test.  We would definitely need a timeout mechanism or
 something along those lines to ensure none of this disrupts the gate

 2. Isolating this to changes in Cinder seems fine, the intent was
 mostly a compatibility / features check.  This takes it up a notch and
 allows us to detect when something breaks right away which is
 certainly a good thing.

 3. Support and maintenance is a concern here.  We have a first rate
 community that ALL pull together to make our gating and infrastructure
 work in OpenStack.  Even with that it's still hard for everybody to
 keep up due to the number of projects and simply the volume of patches that
 go in on a daily basis.  There's no way I could do my regular jobs
 that I'm already doing AND maintain my own fork/install of the
 OpenStack gating infrastructure.

 4. Despite all of the heavyweight corporations throwing resource after
 resource at OpenStack, keep in mind that it is an Open Source
 community still.  I don't want to do ANYTHING that would make it at all
 unfriendly to folks who would like to commit.  Keep in mind that
 vendors here aren't necessarily all large corporations, or even all
 paid for proprietary products.  There are open source storage drivers
 for example in Cinder and they may or may not have any of the
 resources to make this happen but that doesn't mean they should not be
 allowed to have code in OpenStack.

 The fact is that there are drivers/devices
 that flat out don't work and end users (heck even some vendors that
 choose not to test) don't know this until they've purchased a bunch of
 gear and tried to deploy their cloud.  What I was initially proposing
 here was just a more formal public and community representation of
 whether a device works as it's advertised or not.

 Please keep in mind that my proposal here was a first step sort of
 test case.  Rather than start with something HUGE like deploying the
 OpenStack CI in every vendor's lab to test every commit (and I'm sorry
 for those that don't agree but that does seem like a SIGNIFICANT
 undertaking), why not take incremental steps to make things better and
 learn as we go along?

 Certainly - I totally agree that anything > nothing. I was asking
 about your statement of not having enough infra to get a handle on
 what would block things. As you know, tripleo is running up a

Sorry, got carried away and didn't really answer your question about
resources clearly.  My point about resources was in terms of
man-power, dedicated hardware, networking and all of the things that
go along with spinning up tests on every commit and archiving the
results.  I would definitely like to do this, but first I'd like to
see something that every backend driver maintainer can do at least at
each milestone.

 production quality test cloud to test tripleo, Ironic and once we get
 everything in place - multinode gating jobs. We're *super* interested
 in making the bar to increased validation as low as possible.

We should chat in IRC about approaches here and see if we can align.
For the record HP's resources are vastly different than say a small
start up storage vendor or an open-source storage software stack.

By the way, maybe you can point me to what tripleo is doing; looking
in gerrit I see the jenkins gate noop and the docs job, but that's
all I'm seeing?


 I broadly agree with your points 1 through 4, of course!

 -Rob


 --
 Robert Collins rbtcoll...@hp.com
 Distinguished Technologist
 HP Converged Cloud

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Bottom line, I appreciate your feedback and comments; it's generated
some new thoughts on this subject for me to ponder over the weekend.

Thanks,
John

___
OpenStack-dev

[openstack-dev] [OpenStack-Dev] Cherry picking commit from oslo-incubator

2014-01-16 Thread John Griffith
Hey Everyone,

A review came up today that cherry-picked a specific commit to OSLO
Incubator, without updating the rest of the files in the module.  I
rejected that patch, because my philosophy has been that when you
update/pull from oslo-incubator it should be done as a full sync of
the entire module, not a cherry pick of the bits and pieces that you
may or may not be interested in.

As it turns out I've received a bit of push back on this, so it seems
maybe I'm being unreasonable, or that I'm mistaken in my understanding
of the process here.  To me it seems like a complete and total waste
to have an oslo-incubator and common libs if you're going to turn
around and just cherry pick changes, but maybe I'm completely out of
line.

Thoughts??

John

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] Disk Eraser

2014-01-15 Thread John Griffith
On Wed, Jan 15, 2014 at 11:25 AM, Alan Kavanagh
alan.kavan...@ericsson.com wrote:
 Cheers Guys



 So what would you recommend, Oleg? Yes, it's for a Linux system.



 /Alan



 From: Oleg Gelbukh [mailto:ogelb...@mirantis.com]
 Sent: January-15-14 10:30 AM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [ironic] Disk Eraser





 On Wed, Jan 15, 2014 at 6:42 PM, Alexei Kornienko
 alexei.kornie...@gmail.com wrote:

 If you are working on a Linux system, the following can help you:

 dd if=/dev/urandom of=/dev/sda bs=4k



 I would not recommend that, as /dev/urandom is really slow (10-15 MB/s).



 --

 Best regards,

 Oleg Gelbukh




 :)
 Best Regards,



 On 01/15/2014 04:31 PM, Alan Kavanagh wrote:

 Hi fellow OpenStackers



 Does anyone have any recommendations on open source tools for disk
 erasure/data destruction? I have so far looked at DBAN and disk
 scrubber and was wondering if the ironic team has some better recommendations?



 BR

 Alan



 ___

 OpenStack-dev mailing list

 OpenStack-dev@lists.openstack.org

 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


For better or worse, the LVM driver in cinder currently uses /dev/zero
(same dd method described above).  It's not without its performance
issues, but it's much faster than using /dev/random or shred etc.

It gets the job done and is probably the best compromise between
performance and security.
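
For anyone who wants to compare locally, a rough sketch of the two
approaches (the device name is a placeholder and this is destructive,
so triple-check it before running; bs=1M is just a common choice):

  $ dd if=/dev/zero of=/dev/sdX bs=1M     # zero-fill, fast
  $ dd if=/dev/urandom of=/dev/sdX bs=1M  # random-fill, much slower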

Thanks,
John

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [OpenStack-Dev][Cinder] Unable to run cinder_driver_cert.sh

2014-01-15 Thread John Griffith
A number of folks have contacted me and stated that they couldn't get
the newly added cinder certification script to run.  I looked into it
this morning; sdague pointed out that tempest/run_tests.sh was
modified a little while back, and it turns out that was the source of
the problem.

I've logged a defect against devstack here [1] and submitted a patch
to update the script to work with the latest version of tempest.
Sorry about the confusion here.  Just shout at me on IRC if you're
still having problems.

Thanks,
John

[1]: https://bugs.launchpad.net/devstack/+bug/1269531

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [OpenStack-Dev] Third party testing

2014-01-15 Thread John Griffith
Hey Everyone,

A while back I started talking about this idea of requiring Cinder
driver contributors to run a super simple cert script (some info here:
[1]).  Since then I've been playing with the introduction of a third party
gate check here in my own lab.  My proposal was to have a non-voting
check that basically duplicates the base devstack gate test in my lab,
but uses different back-end devices that I have available configured
in Cinder to run periodic tests against.  Long term I'd like to be
able to repurpose this gear to also do something more useful for the
overall OpenStack gating effort, but to start it's strictly an
automated verification of my Cinder driver/backend.

What I'm questioning is how to report this information and the
results.  Currently patches and reviews are our mechanism for
triggering tests and providing feedback.  Myself and many other
vendors that might like to participate in something like this
obviously don't have the infrastructure to try and run something like
this on every single commit.  Also since it would be non-voting it's
difficult to capture and track the results.

One idea that I had was to set up something like what I've described
above to run locally on a periodic basis (weekly, nightly etc) and
publish results to something like a third party verification
dashboard.  So the idea would be that results from various third
party tests would all adhere to a certain set of criteria WRT what
they do and what they report  and those results would be logged and
tracked publicly for anybody in the OpenStack community to access and
view?

Does this seem like something that others would be interested in
participating in?  I think it's extremely valuable for projects like
Cinder that have dozens of backend devices, and regardless of other
interest or participation in the community I intend to implement
something like this on my own in any case.  It would just be
interesting to see if we could have an organized and official effort
to gather this sort of information and run these types of tests.

Open to suggestions and thoughts as well as any of you that may
already be doing this sort of thing.  By the way, I've been looking at
things like SmokeStack and other third party gating checks to get some
ideas as well.

Thanks,
John

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Dev] Third party testing

2014-01-15 Thread John Griffith
On Wed, Jan 15, 2014 at 5:39 PM, Sankarshan Mukhopadhyay
sankarshan.mukhopadh...@gmail.com wrote:
 On Thu, Jan 16, 2014 at 3:58 AM, John Griffith
 john.griff...@solidfire.com wrote:
 A while back I started talking about this idea of requiring Cinder
 driver contributors to run a super simple cert script (some info here:
 [1]).

 Could you provide the link which [1] refers to?


Sorry about that:
http://lists.openstack.org/pipermail/openstack-dev/2013-December/022925.html


 --
 sankarshan mukhopadhyay
 https://twitter.com/#!/sankarshan

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Dev] Third party testing

2014-01-15 Thread John Griffith
On Wed, Jan 15, 2014 at 6:41 PM, Michael Still mi...@stillhq.com wrote:
 John -- I agree with you entirely here. My concern is more that I
 think the CI tests need to run more frequently than weekly.

Completely agree, but I guess in essence to start these aren't really
CI tests.  Instead it's just a public health report for the various
drivers vendors provide.  I'd love to see a higher frequency, but some
of us don't have the infrastructure to try and run a test against
every commit.  Anyway, I think there's HUGE potential for growth and
adjustment as we go along.  I'd like to get something in place to
solve the immediate problem first though.

To be honest I'd even be thrilled just to see every vendor publish a
passing run against each milestone cut.  That in and of itself would
be a huge step in the right direction in my opinion.


 Michael

 On Thu, Jan 16, 2014 at 9:30 AM, John Griffith
 john.griff...@solidfire.com wrote:
 On Wed, Jan 15, 2014 at 6:03 PM, Michael Still mi...@stillhq.com wrote:
 On Thu, Jan 16, 2014 at 6:28 AM, John Griffith
 john.griff...@solidfire.com wrote:
 Hey Everyone,

 A while back I started talking about this idea of requiring Cinder
 driver contributors to run a super simple cert script (some info here:
 [1]).  Since then I've been playing with the introduction of a third party
 gate check here in my own lab.  My proposal was to have a non-voting
 check that basically duplicates the base devstack gate test in my lab,
 but uses different back-end devices that I have available configured
 in Cinder to run periodic tests against.  Long term I'd like to be
 able to repurpose this gear to also do something more useful for the
 overall OpenStack gating effort, but to start it's strictly an
 automated verification of my Cinder driver/backend.

 What I'm questioning is how to report this information and the
 results.  Currently patches and reviews are our mechanism for
 triggering tests and providing feedback.  Myself and many other
 vendors that might like to participate in something like this
 obviously don't have the infrastructure to try and run something like
 this on every single commit.  Also since it would be non-voting it's
 difficult to capture and track the results.

 One idea that I had was to set up something like what I've described
 above to run locally on a periodic basis (weekly, nightly etc) and
 publish results to something like a third party verification
 dashboard.  So the idea would be that results from various third
 party tests would all adhere to a certain set of criteria WRT what
 they do and what they report  and those results would be logged and
 tracked publicly for anybody in the OpenStack community to access and
 view?

 My concern here is how to identify what patch broke the third party
 thing. If you run this once a week, then there are possibly hundreds
 of patches which might be responsible. How do you identify which one
 is the winner?

 To be honest I'd like to see more than once a week; however, the main
 point of this is to have public testing of third party drivers.
 Currently we say "it's in trunk and passed review and unit tests, so
 you're good to go."  Frankly that's not sufficient; there needs to be
 some sort of public testing that shows that a product/config
 actually works, in the minimum sense at least.  This won't address
 things like a bad patch breaking things, but again in Cinder's case
 this is a bit different, it is designed more to show compatibility and
 integration completeness.  If a patch goes in and breaks a vendors
 driver but not the reference implementation, that means the vendor has
 work to do bring their driver up to date.

 Cinder is not a dumping ground; the drivers in the code base should not
 be static but require continued maintenance and development as the
 project grows.

 Non-voting tests on every patch seem unrealistic; however, there's no
 reason that if vendors have the resources they couldn't do that if
 they so choose.


 Does this seem like something that others would be interested in
 participating in?  I think it's extremely valuable for projects like
 Cinder that have dozens of backend devices, and regardless of other
 interest or participation in the community I intend to implement
 something like this on my own in any case.  It would just be
 interesting to see if we could have an organized and official effort
 to gather this sort of information and run these types of tests.

 Open to suggestions and thoughts as well as any of you that may
 already be doing this sort of thing.  By the way, I've been looking at
 things like SmokeStack and other third party gating checks to get some
 ideas as well.

 Michael

 --
 Rackspace Australia

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 ___
 OpenStack-dev mailing list
 OpenStack-dev

[openstack-dev] [OpenStack-Dev][Cinder] Cinder driver maintainers/contact wiki

2014-01-11 Thread John Griffith
Hey Cinder Team!

One of the things that's getting increasingly difficult as we grow the
number of drivers in the tree and I try to get the driver cert
initiative kicked off is rounding up an expert for each of those
drivers.  I've started a simple wiki page / matrix [1]
that is designed to show the driver/vendor name and the contact info
for folks that are designated managers of each of those drivers as
well as any additional engineering resources that might be available.

If you're a Cinder team member, and especially if you're a vendor
contributing to Cinder, have a look and help flesh out the chart.  This
helps me with a number of things including:
1. Tracking down help when I'm mucking around trying to fix bugs in
other people's drivers
2. Who to contact when somebody on the team needs help understanding
specifics about a driver
3. Who to assign work items to when dealing with a driver
4. Who to contact for driver cert submissions
5. Public place for folks that are implementing OpenStack to see what
they're getting into (i.e. does somebody from company X even
participate in/support this code any more)

Thanks,
John

[1]: https://wiki.openstack.org/wiki/Cinder/driver-maintainers

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Proposal for instance-level snapshots in Nova

2014-01-06 Thread John Griffith
On Mon, Jan 6, 2014 at 7:59 PM, Christopher Yeoh cbky...@gmail.com wrote:
 On Tue, Jan 7, 2014 at 7:50 AM, Jon Bernard jbern...@tuxion.com wrote:

 Hello all,

 I would like to propose instance-level snapshots as a feature for
 inclusion in Nova.  An initial draft of the more official proposal is
 here [1], blueprint is here [2].

 In a nutshell, this feature will take the existing create-image
 functionality a few steps further by providing the ability to take
 a snapshot of a running instance that includes all of its attached
 volumes.  A coordinated snapshot of multiple volumes for backup
 purposes.  The snapshot operation should occur while the instance is in
 a paused and quiesced state so that each volume snapshot is both
 consistent within itself and with respect to its sibling snapshots.

 I still have some open questions on a few topics:

 * API changes, two different approaches come to mind:

   1. Nova already has a command `createImage` for creating an image of an
  existing instance.  This command could be extended to take an
  additional parameter `all-volumes` that signals the underlying code
  to capture all attached volumes in addition to the root volume.  The
  semantic here is important, `createImage` is used to create
  a template image stored in Glance for later reuse.  If the primary
  intent of this new feature is for backup only, then it may not be
  wise to overlap the two operations in this way.  On the other hand,
  this approach would introduce the least amount of change to the
  existing API, requiring only modification of an existing command
  instead of the addition of an entirely new one.

   2. If the feature's primary use is for backup purposes, then a new API
  call may be a better approach, and leave `createImage` untouched.
  This new call could be called `createBackup` and take as a parameter
  the name of the instance.  Although it introduces a new member to the
  API reference, it would allow this feature to evolve without
  introducing regressions in any existing calls.  These two calls could
  share code at some point in the future.


 Note there already is a createBackup/create_backup API call implemented in
 the admin_actions
 extension (in V3 API it is being separated into its own extension
 https://review.openstack.org/#/c/62280/)
 It doesn't do the all volumes snapshot that you want though. There's a small
 window (basically end of icehouse) to make an incompatible change in the V3
 API if that would be the best way to do it.

 Chris


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


In general seems reasonable but there are some things that I think
should be considered (comments below).  Also I would point out that if
persistent instances are what folks are looking for, there are already
mechanisms in place to boot from a cinder volume, which has come a
long way in the past release.

From Cinder's perspective we've always discouraged using snapshots as
backups; particularly in the case of the LVM driver, snapshots have
some significant performance impacts on the parent volume, and having
multiple snaps of a volume sitting around long term isn't such a great
idea.  Much of this problem is eliminated via the use of LVM Thin,
however that's not available on all platforms yet, and existing
installs also need to be considered.  So that would mean something
like your proposal would be snapshot --> backup, but unfortunately
there's a heavy price to pay when you decide to build the instance and
have to go through the restore.

This exact scenario is why folks use things like bootable cinder
volumes as templates.  There are a number of production cases in which
bootable volumes are created as templates, then those templates can be
cloned, booted, and used, then thrown away when a user is done with
them.  If they want to spin the environment back up again, simply
clone the volume again and repeat.  This also eliminates the need to
download/build the image every time (granted some backends do
efficient, fast clones better than others).
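
As a concrete illustration of that workflow (a rough sketch only --
flag spellings vary across client versions, so check "cinder help
create" and "nova help boot"; the names and sizes here are made up):

  $ cinder create --image-id <glance-image-uuid> --display-name tmpl 10
  $ cinder create --source-volid <tmpl-volume-uuid> --display-name vm1-root 10
  $ nova boot --boot-volume <vm1-root-uuid> --flavor m1.small vm1

When the user is done, the clone is simply deleted and the template
volume stays around for the next round.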

For the most part we've viewed cinder snapshots as tools to do things
like create backups, clones, migration operations or create-image
without having to take volumes offline for long periods of time.  I
think there are good reasons to continue with this strategy even if
it's not the most popular with some.

I'd like to make sure we look into this a bit before making a quick
submission/change.  Particularly I think we need to consider what our
long term goals/use model is.

Thanks,
John


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [OpenStack-Dev] IDE extensions in .gitignore

2013-12-31 Thread John Griffith
Hey Everyone,

I wanted to see where we stand on IDE extensions in .gitignore files.
We seem to have some back and forth: one cycle there's a bug and a
patch to add things like eclipse, idea etc., and the next there's a bug
and a patch to remove them.  I'd like to have some sort of consensus
on what we want here.  I personally don't have a preference, I would
just like to have consistency and quit thrashing back and forth.

Anyway, I'd like to see all of the projects agree on this... or even
consider moving to a global .gitignore.  Thoughts??

John

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Dev] IDE extensions in .gitignore

2013-12-31 Thread John Griffith
On Tue, Dec 31, 2013 at 10:07 AM, Joe Gordon joe.gord...@gmail.com wrote:



 On Tue, Dec 31, 2013 at 8:45 AM, John Griffith john.griff...@solidfire.com
 wrote:

 Hey Everyone,

 I wanted to see where we stand on IDE extensions in .gitignore files.
 We seem to have some back and forth: one cycle there's a bug and a
 patch to add things like eclipse, idea etc., and the next there's a bug
 and a patch to remove them.  I'd like to have some sort of consensus
 on what we want here.  I personally don't have a preference, I would
 just like to have consistency and quit thrashing back and forth.

 Anyway, I'd like to see all of the projects agree on this... or even
 consider moving to a global .gitignore.  Thoughts??


 I am not sure if this is the global .gitignore you are thinking of but this
 is the one I am in favor of:

 https://help.github.com/articles/ignoring-files#global-gitignore

Yep



 Maintaining .gitignore in 30+ repositories for a potentially infinite number
 of editors is very hard, and thankfully we have an easier way to do it.

Exactly
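
For anyone who hasn't set one up, it's a two-liner (the filename is
just a convention, any path works):

  $ git config --global core.excludesfile ~/.gitignore_global
  $ echo '*.swp' >> ~/.gitignore_global

After that git ignores those patterns in every repository on the
machine, with no per-repo .gitignore entries needed.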




 John


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Dev] IDE extensions in .gitignore

2013-12-31 Thread John Griffith
On Tue, Dec 31, 2013 at 3:33 PM, Robert Collins
robe...@robertcollins.net wrote:
 On 1 January 2014 06:07, Joe Gordon joe.gord...@gmail.com wrote:




 I am not sure if this is the global .gitignore you are thinking of but this
 is the one I am in favor of:

 https://help.github.com/articles/ignoring-files#global-gitignore


 Maintaining .gitignore in 30+ repositories for a potentially infinite number
 of editors is very hard, and thankfully we have an easier way to do it.

 This is a strawman argument: no one (that I know of) has proposed
 adding all editors to all repositories. There are in reality a few
 very common editors and having their extensions present in per
 repository .gitignores does absolutely *no harm*. There is no reason
 not to have sane and sensible defaults in our repositories.

 If we are wasting time adding and removing patterns, then I think that
 counts as a harm, so it is a sensible discussion to have to come to a
 project standard, but the standard should be inclusive and useful, not
 just useful for power users that have everything setup 'just so'. Many
 contributors are using git for the first time when they contribute to
 OpenStack, and getting git setup correctly is itself daunting [for new
 users].

 So I'm very much +1 on having tolerance for the top 5-10 editor
 patterns in our .gitignores, -1 on *ever* having a bug open to change
 this in any repository, and getting on with our actual task here of
 writing fantastic code.

 If folk *really* don't want editor files in .gitignore (and given the
 complete lack of harm I would -really- like an explanation for this
 mindset) then we could solve the problem more permanently: we know
 what files need to be added - *.rst, *.py, *.ini, [!.]* and a few
 others. Everything else is junk and shouldn't be added. By
 whitelisting patterns we will support all editors except those whose
 working file names match names we'd genuinely want to add.

 -Rob

 --
 Robert Collins rbtcoll...@hp.com
 Distinguished Technologist
 HP Converged Cloud

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 If we are wasting time adding and removing patterns, then I think that
 counts as a harm, so it is a sensible discussion to have to come to a
 project standard, but the standard should be inclusive and useful, not
 just useful for power users that have everything setup 'just so'. Many
 contributors are using git for the first time when they contribute to
 OpenStack, and getting git setup correctly is itself daunting [for new
 users].

My point is exactly that this is creating churn and there is some back
and forth (see links to LP items below).  Like I said, I don't have an
objection, I just want to be consistent and move on.  This has come up
in commits in past releases as well.  As I said, I see little harm in
having them present; however, I see significant harm in racking up
commits to take them in and out, as well as the ugliness of having
inconsistent policies in different projects.

https://bugs.launchpad.net/ceilometer/+bug/1256043
https://bugs.launchpad.net/trove/+bug/1257279
https://bugs.launchpad.net/python-cinderclient/+bug/1255876

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] minimum review period for functional changes that break backwards compatibility

2013-12-28 Thread John Griffith
On Sat, Dec 28, 2013 at 8:57 AM, Clint Byrum cl...@fewbar.com wrote:
 Hi Phil. Thanks for the well reasoned and poignant message urging
 caution and forethought in change management. I agree with all of the
 sentiments and think that we can do better in reasoning about the impact
 of changes. I think this just puts further exposure on the fact that
 Nova needs reviewers desperately so that reviewers can slow down.

 However, I think this is primarily an exposure in our gate testing. If
 there are older OS's we want to be able to support, we should be booting
 them in the gate and testing that the ephemeral disk works on them. What
 is a cloud that can't boot workloads?

 While our ability to reason is a quite effective way to stop emergent
 problems, we know these are precious and scarce resources, and thus
 we should use mechanical methods before falling back to reviewers and
 developers.

 So, I'd suggest that we add a test that the ephemeral disk mounts in
 any desired OS's to tempest. If that is infeasible (due to nested KVM
 in the gate being slllo) then I'm afraid I don't have a solution.

 Excerpts from Day, Phil's message of 2013-12-28 07:21:16 -0800:
 Hi Folks,

 I know it may seem odd to be arguing for slowing down a part of the review 
 process, but I'd like to float the idea that there should be a minimum 
 review period for patches that change existing functionality in a way that 
 isn't backwards compatible.

 The specific change that got me thinking about this is 
 https://review.openstack.org/#/c/63209/ which changes the default fs type 
 from ext3 to ext4.  I agree with the comments in the commit message that
 ext4 is a much better filesystem, and it probably does make sense to move to 
 that as the new default at some point, however there are some old OS's that 
 may still be in use that don't support ext4.  By making this change to the 
 default without any significant notification period this change has the 
 potential to break existing images and snapshots.  It was already possible
 to use ext4 via existing configuration values, so there was no urgency to 
 this change (and no urgency implied in the commit messages, which is neither 
 a bug nor a blueprint).

 I'm not trying to pick out the folks involved in this change in particular, 
 it just happened to serve as a good and convenient example of something that 
 I think we need to be more aware of and think about having some specific 
 policy around.  On the plus side the reviewers did say they would wait 24 
 hours to see if anyone objected, and the actual review went over 4 days - 
 but I'd suggest that is still far too quick even in a non-holiday period for 
 something which is low priority (the functionality could already be achieved 
 via existing configuration options) and which is a change in default 
 behaviour.  (In the period around a major holiday there probably needs to be
 an even longer wait). I know there are those that don't want to see 
 blueprints for every minor functional change to the system, but maybe this 
 is a case where a blueprint being proposed and reviewed may have caught the 
 impact of the change.  With a number of people now using a continual
 deployment approach any change in default behaviour needs to be considered
 not just for the benefits it
 brings but what it might break.  The advantage we have as a community is that 
 there are a lot of different perspectives that can be brought to bear on the
 impact of functional changes, but we equally have to make sure there is 
 sufficient time for those perspectives to emerge.

 Somehow it feels that we're getting the priorities on reviews wrong when a 
 low priority change like this can go through in a matter of days,
 when there are bug fixes such as https://review.openstack.org/#/c/57708/ 
 which have been sitting for over a month with a number of +1's which don't 
 seem to be making any progress.

 Cheers,
 Phil

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

I think Sean made some good recommendations in the review (waiting 24
hours as well as suggesting ML etc).  It seems that cases like this
don't necessarily need mandated time requirements for review but just
need good core reviewers to say hey, this is a big deal... we should
probably get some feedback here etc.

One thing I am curious about, however: Gary made a good point about
using the default_ephemeral_format= config setting to make this
pretty easy and straightforward.  I didn't see any other responses to
that, and it looks like the patch still uses a default of none.
From a quick look at the code it seems like this would be a clean way
to go about things; any reason why this wasn't discussed further?
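
(For reference, and assuming the option behaves the way Gary described
-- check the nova release you're running before relying on it -- the
operator-side override would just be a one-liner in nova.conf:

  [DEFAULT]
  default_ephemeral_format = ext3

which would pin the old behaviour for sites with guests that can't
handle ext4.)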

John

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org

Re: [openstack-dev] Devstack Ceph

2013-12-26 Thread John Griffith
On Tue, Dec 24, 2013 at 4:49 AM, Sebastien Han
sebastien@enovance.com wrote:
 Hello everyone,

 I’ve been working on a new feature for Devstack that includes native
 support for Ceph.
 The patch includes the following:

 * Ceph installation (using the ceph.com repo)
 * Glance integration
 * Cinder integration (+ nova virsh secret)
 * Cinder backup integration
 * Partial Nova integration since master is currently broken. Lines are 
 already there, the plan is to un-comment those lines later.
 * Everything works with Cephx (the Ceph authentication system).

 Would anyone be interested in seeing this go into Devstack mainstream?


I'm likely in the minority here, but personally I don't like the idea
of adding every driver/backend combination option directly in
devstack.  The issue I think you're trying to solve here is exactly
what the driver_cert scripts and result submission are intended to
address.

If there's interest in creating an additional gating job that uses RBD
as the backend that's in my mind a different discussion and definitely
worth having.

 Cheers.

 
 Sébastien Han
 Cloud Engineer

 “Always give 100%. Unless you're giving blood.”

 Phone: +33 (0)1 49 70 99 72
 Mail: sebastien@enovance.com
 Address : 10, rue de la Victoire - 75009 Paris
 Web : www.enovance.com - Twitter : @enovance


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Do we have some guidelines for mock, stub, mox when writing unit test?

2013-12-23 Thread John Griffith
On Thu, Dec 5, 2013 at 8:38 PM, Jay Pipes jaypi...@gmail.com wrote:
 On 12/04/2013 12:10 PM, Russell Bryant wrote:

 On 12/04/2013 11:16 AM, Nikola Đipanov wrote:

 Resurrecting this thread because of an interesting review that came up
 yesterday [1].

 It seems that our lack of a firm decision on what to do with the mocking
 framework has left people confused. In hope to help - I'll give my view
 of where things are now and what we should do going forward, and
 hopefully we'll reach some consensus on this.

 Here's the breakdown:

 We should abandon mox:
 * It has not had a release in over 3 years [2], nor an upstream patch for 2
 * There are bugs that are impacting the project with it (see above)
 * It will not be ported to python 3

 Proposed path forward options:
 1) Port nova to mock now:
* Literally unmanageable - huge review overhead and regression risk
 for not so much gain (maybe) [1]

 2) Opportunistically port nova (write new tests using mock, when fixing
 tests, move them to mock):
   * Will take a really long time to move to mock, and is not really a
 solution since we are stuck with mox for an undetermined period of time
 - it's what we are doing now (kind of).

 3) Same as 2) but move current codebase to mox3
   * Buys us py3k compat, and fresher code
   * Mox3 and mox have diverged and we would need to backport mox fixes
 onto the mox3 tree and become de-facto active maintainers (as per Peter
 Feiner's last email - that may not be so easy).

 I think we should follow path 3) if we can, but we need to:

 1) Figure out what is the deal with mox3 and decide if owning it will
 really be less trouble than porting nova. To be honest - I was unable to
 even find the code repo for it, only [3]. If anyone has more info -
 please weigh in. We'll also need volunteers

 2) Make better testing guidelines when using mock, and maybe add some
 testing helpers (like we do already have for mox) that will make porting
 existing tests easier. mreidem already put this on this weeks nova
 meeting agenda - so that might be a good place to discuss all the issues
 mentioned here as well.

 We should really take a stronger stance on this soon IMHO, as this comes
 up with literally every commit.


 I think option 3 makes the most sense here (pending anyone saying we
 should run away screaming from mox3 for some reason).  It's actually
 what I had been assuming since this thread a while back.


 What precisely is the benefit of moving the existing code to mox3 versus
 moving the existing code to mock? Is mox3 so similar to mox that the
 transition would be minimal?


 This means that we don't need to *require* that tests get converted if
 you're changing one.  It just gets you bonus imaginary internet points.

 Requiring mock for new tests seems fine.  We can grant exceptions in
 specific cases if necessary.  In general, we should be using mock for
 new tests.


 My vote would be to use mock for everything new (no brainer), keep old mox
 stuff around and slowly port it to mock. I see little value in bringing in
 another mox3 library, especially if we'd end up having to maintain it.

FWIW this is exactly what the Cinder team agreed upon a while back and
the direction we've been going.  There hasn't really been any
push-back on this, and in most cases the response from people has been
"Wow, using mock was so much easier/more straightforward."
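
For anyone who hasn't made the switch yet, here's a minimal,
self-contained sketch of the mock style (nothing here is from an
actual Cinder test; on py2 it's the standalone mock package, on py3
the same thing lives in unittest.mock):

  import os
  import unittest

  import mock  # `from unittest import mock` on py3


  class ExampleMockTest(unittest.TestCase):
      @mock.patch('os.path.exists')
      def test_exists_is_stubbed(self, mock_exists):
          # Stub the return value up front; no record/replay dance like mox.
          mock_exists.return_value = True
          self.assertTrue(os.path.exists('/definitely/not/there'))
          # Call assertions happen after the fact.
          mock_exists.assert_called_once_with('/definitely/not/there')


  if __name__ == '__main__':
      unittest.main()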


 Best,
 -jay



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [openstack][cinder] Driver certification ideas

2013-12-20 Thread John Griffith
Hey Everyone,

So we merged the super simple driver cert test script into devstack a
while back.  For those that aren't familiar, you can check it out here
[1].  The first iteration of this is simply a do-it-yourself config and
run that goes through the same volume tests that every cinder patch
runs through the gate.

There's definitely room for growth here but this seems like a
reasonable first step.  The remaining question here is how do we want
to use this?  I've made a couple of suggestions that I'd like to review
and get some feedback.  To be clear this can obviously evolve over
time, but I'd like to start somewhat simple, try it out and build off
of it depending on how things go.  So with that here's a couple of
options I've been considering:

1. File a bug in launchpad:
This bug would be for tracking purposes; it would be something like
"no cert results available for driver-X".  This would require that the
driver maintainer download/install devstack, configure their driver
and backend and then run the supplied script.

The next question is what to do with the results; there are some options here:
  a. Take the resultant tgz file and post it into the bug report as an
attachment.  Assuming everything passes the bug can then be marked as
closed/resolved.
  b. Create a repo (or even a directory in the Cinder tree) that
includes results files.  That way the bug is logged and a gerrit
commit referencing the bug id is submitted and reviewed very similar
to how we handle source changes.

Option 'a' is the path of least resistance, however it becomes a very
manual process and it's somewhat ugly.  Option 'b' fits more with how
we operate anyway, provides some automation, and it also leaves a
record of the cert process in the tree that makes visibility and
tracking much easier.



2.  Create a web/wiki page specifically for this information:
This would basically be a matrix of the drivers, and the current status
of the cert results for the current iteration.  It would be something
like a row for every driver in the tree and a column for last cycle
and current cycle.  We'd basically set it up so that the
current-cycle entries are all listed as "not submitted" after the
milestone is cut.  The current entries in that column would roll back
to the last-cycle column.  Then the driver maintainer could
run/update the matrix at any time during that cycle.

The only thing with this is again it's very manual in terms of
tracking, might be a bit confusing (this may make perfect sense to me,
but seems like gibberish to others :)), and we'd want to have a
repository to store the results files for people to reference.

I'm open to ideas/suggestions here keeping in mind that the initial
purpose is to provide publicly viewable information as to the
integration status of the drivers in the Cinder project.  This would
help people building OpenStack clouds to make sure that the backend
devices they may be choosing actually implement all of the base
features and that they actually work.  Vendors can of course choose
not to participate; that just tells consumers "beware, vendor-a
doesn't necessarily care all that much, or doesn't have time to test
this."

Anyway, hopefully this makes sense, if more clarification is needed I
can try and clean up my descriptions a bit.

Thanks,
John

[1]: 
https://github.com/openstack-dev/devstack/blob/master/driver_certs/cinder_driver_cert.sh

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] weekly meeting

2013-12-18 Thread John Griffith
On Wed, Dec 18, 2013 at 8:35 AM, Duncan Thomas duncan.tho...@gmail.com wrote:
 04:00 or 05:00 UTC would basically preclude European participation for
 most people... that's 4 am for Dosaboy and myself for example.

 Alternating meetings on different weeks would probably work, though we
 would need to encourage people to get stuff on the agenda in advance
 rather than an hour before the meeting, so that people can send their
 comments ahead if they can't attend.

 On 17 December 2013 17:03, Walter A. Boring IV walter.bor...@hp.com wrote:
 4 or 5 UTC works better for me.   I can't attend the current meeting
 time, due to taking my kids to school in the morning at 1620UTC

 Walt

 Hi All,

 Prompted by a recent suggestion from Tom Fifield, I thought I'd gauge
 some interest in either changing the weekly Cinder meeting time, or
 proposing a second meeting to accommodate folks in other time-zones.

 A large number of folks are already in time-zones that are not
 friendly to our current meeting time.  I'm wondering if there is
 enough of an interest to move the meeting time from 16:00 UTC on
 Wednesdays, to 04:00 or 05:00 UTC?  Depending on the interest I'd be
 willing to look at either moving the meeting for a trial period or
 holding a second meeting to make sure folks in other TZ's had a chance
 to be heard.

 Let me know your thoughts; if there are folks out there that feel
 unable to attend due to TZ conflicts, we can see what we might be
 able to do.

 Thanks,
 John

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 --
 Duncan Thomas

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Good feedback, thanks everyone for the input.  I have to say I am
beginning to feel a bit like I'm trying to solve a problem that doesn't
exist.  I'll think about this some more and see what we come up with.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [cinder] weekly meeting

2013-12-16 Thread John Griffith
Hi All,

Prompted by a recent suggestion from Tom Fifield, I thought I'd gauge
some interest in either changing the weekly Cinder meeting time, or
proposing a second meeting to accommodate folks in other time-zones.

A large number of folks are already in time-zones that are not
friendly to our current meeting time.  I'm wondering if there is
enough of an interest to move the meeting time from 16:00 UTC on
Wednesdays, to 04:00 or 05:00 UTC?  Depending on the interest I'd be
willing to look at either moving the meeting for a trial period or
holding a second meeting to make sure folks in other TZ's had a chance
to be heard.

Let me know your thoughts; if there are folks out there that feel
unable to attend due to TZ conflicts, we can see what we might be
able to do.

Thanks,
John

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] weekly meeting

2013-12-16 Thread John Griffith
On Mon, Dec 16, 2013 at 8:57 PM, 赵钦 chaoc...@gmail.com wrote:
 Hi John,

 I think the current meeting schedule, UTC 16:00, basically works for China
 TZ (12AM), although it is not perfect. If we need to reschedule, I think UTC
 05:00 is better than UTC 04:00, since UTC 04:00 (China 12PM) is our lunch
 time.


 On Tue, Dec 17, 2013 at 11:04 AM, John Griffith
 john.griff...@solidfire.com wrote:

 Hi All,

 Prompted by a recent suggestion from Tom Fifield, I thought I'd gauge
 some interest in either changing the weekly Cinder meeting time, or
 proposing a second meeting to accommodate folks in other time-zones.

 A large number of folks are already in time-zones that are not
 friendly to our current meeting time.  I'm wondering if there is
 enough of an interest to move the meeting time from 16:00 UTC on
 Wednesdays, to 04:00 or 05:00 UTC?  Depending on the interest I'd be
 willing to look at either moving the meeting for a trial period or
 holding a second meeting to make sure folks in other TZ's had a chance
 to be heard.

 Let me know your thoughts; if there are folks out there that feel
 unable to attend due to TZ conflicts, we can see what we might be
 able to do.

 Thanks,
 John

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Hi Chaochin,

Thanks for the feedback, I think the alternate time would have to be
moved up an hour or two anyway (between the lunch hour in your TZ and
the fact that it just moves the problem of being at midnight to the
folks in US Eastern TZ).  Also, if there is interest, I think a
better solution might be to implement something like the Ceilometer
team does and alternate the time each week.

John

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] suggestion to a new driver

2013-12-14 Thread John Griffith
Hi Ronan,
Best advice I would give is to start with the base driver class
(cinder/volume/driver.py) and the reference LVM driver
(cinder/volume/drivers/lvm.py).  Those will give you a template of the
interfaces, args needed and return values.
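
To make that concrete, here's a bare-bones sketch of roughly what a
new driver starts as (the class name is made up, and the method list
is illustrative -- check the base class for the authoritative set of
interfaces and signatures):

  from cinder.volume import driver


  class ExampleDriver(driver.VolumeDriver):
      """Skeleton driver; each method backs a Cinder API operation."""

      def create_volume(self, volume):
          # 'volume' describes the request; volume['size'] is in GB.
          raise NotImplementedError()

      def delete_volume(self, volume):
          raise NotImplementedError()

      def create_snapshot(self, snapshot):
          raise NotImplementedError()

      def initialize_connection(self, volume, connector):
          # Return the connection info the attach path needs.
          raise NotImplementedError()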

Also jump in IRC at #openstack-cinder and we can chat more there as well.

Thanks,
John
On Dec 10, 2013 2:04 AM, Ronen Angluster ro...@reduxio.com wrote:

 Hello all!

 We're developing a new storage appliance and, per one of our customers,
 would like
 to build a Cinder driver.
 I kept digging into the documentation for the past 2 weeks and could not
 find anything that described the code-level API, i.e. nothing describes
 what each function should
 receive and what it should return.
 Is there a document that describes it that I missed? If not, who can
 provide that missing information?

 Ronen

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Tool for detecting commonly misspelled words

2013-12-03 Thread John Griffith
On Tue, Dec 3, 2013 at 11:38 AM, Russell Bryant rbry...@redhat.com wrote:
 On 12/03/2013 09:22 AM, Joe Gordon wrote:
 Hi all,

 Recently I have seen a few patches fixing a few typos.  I would like to
 point out a really nifty tool to detect commonly misspelled words.  So
 next time you want to fix a typo, instead of just fixing a single one
 you can go ahead and fix a whole bunch.

 https://github.com/lyda/misspell-check

 To install it:
   $ pip install misspellings

 To use it in your favorite openstack repo:
  $ git ls-files | grep -v locale | misspellings -f -


 Sample output:

 http://paste.openstack.org/show/54354

 Are we going to start gating on spellcheck of code and commit messages?  :-)

NO please (please please please).  We have enough grammar reviewers
at this point already, IMO, and I honestly think I might puke if Jenkins
fails my patch because I didn't put a '.' at the end of my comment
line in the code.  I'd much rather see us focus on things like... I
dunno... maybe having the code actually work?


 --
 Russell Bryant

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Tool for detecting commonly misspelled words

2013-12-03 Thread John Griffith
On Tue, Dec 3, 2013 at 11:54 AM, Nachi Ueno na...@ntti3.com wrote:
 2013/12/3 John Griffith john.griff...@solidfire.com:
 On Tue, Dec 3, 2013 at 11:38 AM, Russell Bryant rbry...@redhat.com wrote:
 On 12/03/2013 09:22 AM, Joe Gordon wrote:
 Hi all,

 Recently I have seen a few patches fixing a few typos.  I would like to
 point out a really nifty tool to detect commonly misspelled words.  So
 next time you want to fix a typo, instead of just fixing a single one
 you can go ahead and fix a whole bunch.

 https://github.com/lyda/misspell-check

 To install it:
   $ pip install misspellings

 To use it in your favorite openstack repo:
  $ git ls-files | grep -v locale | misspellings -f -


 Sample output:

 http://paste.openstack.org/show/54354

 Are we going to start gating on spellcheck of code and commit messages?  :-)

 NO please (please please please).  We have enough grammar reviewers
 at this point already IMO and I honestly think I might puke if jenkins
 fails my patch because I didn't put a '.' at the end of my comment
 line in the code.  I'd much rather see us focus on things like... I
 dunno... maybe having the code actually work?

 Yeah, but maybe non-voting reviews by this tool would be helpful.

Fair enough... don't get me wrong, I'm all for supporting non-English
contributors etc.  I just think that the emphasis on grammar and
punctuation in reviews has gotten a bit out of hand as of late.  FWIW,
I've never -1'd a patch (and never would) because somebody used 'its'
rather than 'it's' in a comment, or because they didn't end a comment
(NOT a docstring) with a period.  I think it's the wrong place to spend
effort, quite honestly.

That being said, I realize people will continue to do this sort of thing
(it's very important to get your -1 counts in the review stats) and
admittedly there is some value to spelling and grammar.  I just feel
that there are *real* issues and bugs that people could spend this
time on that would actually have some significant and real benefit.

I'm obviously in the minority on this topic so I should probably just
yield at this point and get on board the grammar train.





 --
 Russell Bryant

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Tool for detecting commonly misspelled words

2013-12-03 Thread John Griffith
On Tue, Dec 3, 2013 at 12:18 PM, Nachi Ueno na...@ntti3.com wrote:
 2013/12/3 John Griffith john.griff...@solidfire.com:
 On Tue, Dec 3, 2013 at 11:54 AM, Nachi Ueno na...@ntti3.com wrote:
 2013/12/3 John Griffith john.griff...@solidfire.com:
 On Tue, Dec 3, 2013 at 11:38 AM, Russell Bryant rbry...@redhat.com wrote:
 On 12/03/2013 09:22 AM, Joe Gordon wrote:
 Hi all,

 Recently I have seen a few patches fixing a few typos.  I would like to
 point out a really nifty tool to detect commonly misspelled words.  So
 next time you want to fix a typo, instead of just fixing a single one
 you can go ahead and fix a whole bunch.

 https://github.com/lyda/misspell-check

 To install it:
   $ pip install misspellings

 To use it in your favorite openstack repo:
  $ git ls-files | grep -v locale | misspellings -f -


 Sample output:

 http://paste.openstack.org/show/54354

 Are we going to start gating on spellcheck of code and commit messages?  
 :-)

 NO please (please please please).  We have enough grammar reviewers
 at this point already IMO and I honestly think I might puke if jenkins
 fails my patch because I didn't put a '.' at the end of my comment
 line in the code.  I'd much rather see us focus on things like... I
 dunno... maybe having the code actually work?

 Yeah, but maybe non-voting reviews by this tool would be helpful.

 Fair enough... don't get me wrong, I'm all for supporting non-English
 contributors etc.  I just think that the emphasis on grammar and
 punctuation in reviews has gotten a bit out of hand as of late.  FWIW
 I've never -1'd a patch (and never would) because somebody used its
 rather than it's in a comment.  Or they didn't end a comment (NOT a
 docstring) with a period.  I think it's the wrong place to spend
 effort quite honestly.

 That being said, I realize people will continue to do this sort of thing
 (it's very important to get your -1 counts in the review stats) and
 admittedly there is some value to spelling and grammar.  I just feel
 that there are *real* issues and bugs that people could spend this
 time that would actually have some significant and real benefit.

 I'm obviously in the minority on this topic so I should probably just
 yield at this point and get on board the grammar train.

 Maybe this is off topic.
 First, I do agree that the importance of such grammar errors is not high.
 We should focus on real issues.

 However, IMO we should -1 even such cases (using 'its').

 I just sent a patch fixing misspellings in neutron.
 https://review.openstack.org/#/c/59809/

 There were 50 misspellings, so it may be a small number of mistakes for
 one patch, but it will keep growing.

Ok, point taken... I'll be quiet on the subject now :)






 --
 Russell Bryant

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Tool for detecting commonly misspelled words

2013-12-03 Thread John Griffith
On Tue, Dec 3, 2013 at 1:05 PM, Russell Bryant rbry...@redhat.com wrote:
 On 12/03/2013 01:46 PM, John Griffith wrote:
 On Tue, Dec 3, 2013 at 11:38 AM, Russell Bryant rbry...@redhat.com wrote:
 On 12/03/2013 09:22 AM, Joe Gordon wrote:
 Hi all,

 Recently I have seen a few patches fixing a few typos.  I would like to
 point out a really nifty tool to detect commonly misspelled words.  So
 next time you want to fix a typo, instead of just fixing a single one
 you can go ahead and fix a whole bunch.

 https://github.com/lyda/misspell-check

 To install it:
   $ pip install misspellings

 To use it in your favorite openstack repo:
  $ git ls-files | grep -v locale | misspellings -f -


 Sample output:

 http://paste.openstack.org/show/54354

 Are we going to start gating on spellcheck of code and commit messages?  :-)

 NO please (please please please).  We have enough grammar reviewers
 at this point already IMO and I honestly think I might puke if jenkins
 fails my patch because I didn't put a '.' at the end of my comment
 line in the code.  I'd much rather see us focus on things like... I
 dunno... maybe having the code actually work?

 Ha.  I asked as a joke and I totally agree with your sentiment here.
 But actually, the way to prevent these types of reviews/patches is to
 prevent the errors from happening in the first place.  If you look at
 what this is doing, it's really not so bad.  It's not touching grammar.
 It's not even trying to be an all-encompassing spell checker.  It's
 just looking for specific commonly misspelled words.  It doesn't sound
 that bad to me.

Yep, sounds great.


 --
 Russell Bryant

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] cinder related gate failures

2013-11-27 Thread John Griffith
Hey Everyone.

There have been discussions about all sorts of things regarding getting
visibility into intermittent issues in the gates, including special
tags for bugs, making them critical, etc.  Regardless of the
outcome of those discussions going forward, I've been going
through and marking Cinder bugs that were reported as a result of gate
failures with the tag 'gate-failure'.

Going forward, if/when you find a bug as a result of a gate test
failure, feel free to add that tag to help me keep an eye
on these and track things a bit better.  This should also help with
weeding out duplicates a bit.

Thanks,
John

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] curious, why wasn't nova commit 52f6981 backported to grizzly?

2013-11-27 Thread John Griffith
On Wed, Nov 27, 2013 at 4:44 PM, Chris Friesen
chris.frie...@windriver.com wrote:
 52f6981

It appears there were some complications doing a straight backport
based on the Launchpad notes [1]; I don't have any insight for you other
than that.

[1]: https://bugs.launchpad.net/nova/+bug/1156269

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack][nova][social-apects] Social aspects shouldn't impact on dev process

2013-11-20 Thread John Griffith
On Wed, Nov 20, 2013 at 5:17 PM, Tom Fifield t...@openstack.org wrote:
 Hi Boris,

 I'm sorry that you've had a frustrating experience :)

 Even I've written purge scripts before - so I know that this is a very
 useful feature :)

 I think that some of it was probably just due to timing issues - I've
 observed that things behave a bit differently than normal around release and
 the summit. Some specific thoughts in-line.


 On 20/11/13 18:06, Boris Pavlovic wrote:

 We started working on a purge engine for the DB (before the HK summit).

 This is very important, because at this moment we don't have any working
 way to purge the DB... so admins have to do it by hand.


 And we made this BP (in October):
 https://blueprints.launchpad.net/nova/+spec/db-purge-engine

 And we made a patch that implements this.
 But solely because our BP wasn't approved, we got a -2 from Joe Gordon
 (https://review.openstack.org/#/c/51523/), and there was a long discussion
 to remove this -2.


 I've had a read of the review discussion, which (specifically related to the
 -2) was over 1 day, with a total of 10 messages.

 It seems to me that Joe's initial -2 was valid - he was just working to
 prevent the patch getting accidentally merged before it was ready.

 I think your update of the commit message and tagging as WIP was a nice
 compromise response, and was clear enough to remove the -2.

 However - my guess is that the specific -2 isn't the underlying issue here.
 Instead, it's about the way having a -2 on the patch changes
 how reviewers see it. Still guessing: my impression is that you might see a
 -2 on a patch as a death knell, where reviewers just stop looking at the
 patch thinking it's a dead end.

 Personally, I don't spend enough time on nova reviews to say whether this is
 the case. Have you observed this? Perhaps someone else can chip in?


 And now, after the summit, David Ripton made a similar BP (probably he
 didn't know):
 https://blueprints.launchpad.net/nova/+spec/db-purge2


 The merging of these two efforts looks like something we can fix, yes?


 Regards,

 Tom



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Just to clarify, based on what I'm following here: the issue is not
really the -2.  The issue is that a -2 was given because the submitted
BP had not been approved.  However, the same reviewer then approved a
duplicate BP that was submitted after the fact.

I'm sure this was an oversight and just a matter of things being busy
and losing track of BPs.  The original BP is well detailed IMO and
seems like it should've been approved so folks could move on with the
patch that's in process.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder][Glance] OSLO update

2013-11-19 Thread John Griffith
On Mon, Nov 18, 2013 at 3:53 PM, Mark McLoughlin mar...@redhat.com wrote:
 On Mon, 2013-11-18 at 17:24 +, Duncan Thomas wrote:
 Random OSLO updates with no list of what changed, what got fixed etc
 are unlikely to get review attention - doing such a review is
 extremely difficult. I was -2ing them and asking for more info, but
 they keep popping up. I'm really not sure what the best way of
 updating from OSLO is, but this isn't it.

 Best practice is to include a list of changes being synced, for example:

   https://review.openstack.org/54660

 Every so often, we throw around ideas for automating the generation of
 this changes list - e.g. cinder would have the oslo-incubator commit ID
 for each module stored in a file in git to tell us when it was last
 synced.

 Mark.


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Been away on vacation so I'm afraid I'm a bit late on this... but:

I think the point Duncan is bringing up here is that there are some
VERY large and significant patches coming from OSLO pulls.  The DB
patch in particular, at over 1K lines of code touching a critical portion
of the code base, is a bit unnerving to review.  I realize
that there's a level of trust that goes with the work that's done in
OSLO and synchronizing those changes across the projects, but I think
a few key concerns here are:

1. Doing huge pulls from OSLO like the DB patch here is nearly
impossible to thoroughly review and test.  Over time we learn a lot
about real usage scenarios and the database and tweak things as we go,
so seeing a patch set like this show up is always a bit unnerving and
frankly nobody is overly excited to review it.

2. Given a certain level of *trust* for the work that folks do on the
OSLO side in submitting these patches and new additions, I think some
of the responsibility on the review of the code falls on the OSLO
team.  That being said there is still the issue of how these changes
will impact projects *other* than Nova which I think is sometimes
neglected.  There have been a number of OSLO synchs pushed to Cinder
that fail gating jobs, some get fixed, some get abandoned, but in
either case it shows that there wasn't any testing done with projects
other than Nova (PLEASE note, I'm not referring to this particular
round of patches or calling any patch set out, just stating a
historical fact).

3. We need better documentation in commit messages explaining why the
changes are necessary and what they do for us.  I'm sorry, but in my
opinion the answer it's the latest in OSLO and Nova already has it
is not enough.
this thread in my opinion met the minimum requirements because they at
least reference the OSLO commit which is great.  In addition I'd like
to see something to address any discovered issues or testing done with
the specific projects these changes are being synced to.

I'm in no way saying I don't want Cinder to play nice with the common
code or to get in line with the way other projects do things but I am
saying that I think we have a ways to go in terms of better
communication here and in terms of OSLO code actually keeping in mind
the entire OpenStack ecosystem as opposed to just changes that were
needed/updated in Nova.  Cinder in particular went through some pretty
massive DB re-factoring and changes during Havana and there was a lot
of really good work there but it didn't come without a cost and the
benefits were examined and weighed pretty heavily.  I also think that
some times the indirection introduced by adding some of the
openstack.common code is unnecessary and in some cases makes things
more difficult than they should be.

I'm just not sure that we always do a very good ROI investigation or
risk assessment on changes, and that opinion applies to ALL changes in
OpenStack projects, not OSLO specific or anything else.

All of that being said, a couple of those syncs on the list are
outdated.  We should start by doing a fresh pull for these and, if
possible, add some better documentation in the commit messages as to
the justification for the patches.  We can take a
closer look at the changes and the history behind them and try to get
some review progress made here.  Mark mentioned some good ideas
regarding capturing commit IDs from synchronization pulls and I'd
like to look into that a bit as well.
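
For concreteness, a rough sketch of that idea (the file layout and
names here are hypothetical): each project records the last-synced
oslo-incubator commit per module, and the change list for a sync then
falls out of plain git:

    # Sketch: list the oslo-incubator commits a sync would pull in.
    import subprocess

    def changes_since(last_synced_sha, module_path,
                      repo='oslo-incubator'):
        out = subprocess.check_output(
            ['git', '-C', repo, 'log', '--oneline',
             '%s..HEAD' % last_synced_sha, '--', module_path])
        return out.decode().splitlines()

    # e.g. changes_since('a1b2c3d', 'openstack/common/db')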

Thanks,
John

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [style] () vs \ continuations

2013-11-14 Thread John Griffith
On Thu, Nov 14, 2013 at 10:03 AM, Joe Gordon joe.gord...@gmail.com wrote:

 On Nov 14, 2013 6:58 AM, Dolph Mathews dolph.math...@gmail.com wrote:


 On Wed, Nov 13, 2013 at 6:46 PM, Robert Collins
 robe...@robertcollins.net wrote:

 Hi so - in http://docs.openstack.org/developer/hacking/

 it has as bullet point 4:
 Long lines should be wrapped in parentheses in preference to using a
 backslash for line continuation.

 I'm seeing in some reviews a request for () over \ even when \ is
 significantly clearer.

 I'd like us to avoid meaningless reviewer churn here: can we either:
  - go with PEP8 which also prefers () but allows \ when it is better
- and reviewers need to exercise judgement when asking for one or
 the other
  - make it a hard requirement that flake8 detects


 +1 for the non-human approach.

 Humans are a bad match for this type of review work; sounds like we will
 have to add this to hacking 0.9.




 My strong recommendation is to go with PEP8 and the exercise of judgement.

 The case that made me raise this is this:
 folder_exists, file_exists, file_size_in_kb, disk_extents = \
     self._path_file_exists(ds_browser, folder_path, file_name)

 Wrapping that in brackets gets this;
 folder_exists, file_exists, file_size_in_kb, disk_extents = (
     self._path_file_exists(ds_browser, folder_path, file_name))


 The root of the problem is that it's a terribly named method with a
 terrible return value... fix the underlying problem.



 Which is IMO harder to read - double brackets, but no function call,
 and no tuple: it's more ambiguous than \.

 from
 https://review.openstack.org/#/c/48544/15/nova/virt/vmwareapi/vmops.py

 Cheers,
 Rob
 --
 Robert Collins rbtcoll...@hp.com
 Distinguished Technologist
 HP Converged Cloud

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --

 -Dolph

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




Personally, I don't see the big deal here; I think there can be some
judgement exercised either way.  BUT it seems to me that this is an
awful waste of time.

Just automate it one way or the other and let reviewers actually focus
on something useful.  Frankly, I couldn't care less about line separation
and am much more concerned about bugs being introduced via patches
that reviewers didn't catch.  That's ok though, at least the line
continuations were correct.

Sorry, I shouldn't be a jerk but we seem to have rather pointless
debates as of late (spelling/grammar in comments etc etc).  IMO we
should all do our best on these things but really the focus here
should be on the technical components of the code.
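
For what it's worth, automating this particular one looks cheap.  A
minimal sketch of a hacking-style physical-line check (the error code
is hypothetical, and a real check would need to skip backslashes
inside strings):

    import re

    BACKSLASH_CONT = re.compile(r'\\\s*$')

    def check_no_backslash_continuation(physical_line):
        # H9xx is a made-up code; the pep8 framework discovers the
        # check by the 'physical_line' argument name.
        if BACKSLASH_CONT.search(physical_line):
            return (len(physical_line) - 1,
                    'H9xx: backslash continuation; prefer parentheses')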

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ALL] Removing generate_uuid() from uuidutils

2013-11-13 Thread John Griffith
On Wed, Nov 13, 2013 at 7:21 AM, Andrew Laski
andrew.la...@rackspace.com wrote:
 On 11/13/13 at 05:48am, Gary Kotton wrote:

 I recall a few cycles ago having str(uuid.uuid4()) replaced by
 generate_uuid(). There was actually a helper function in neutron (back when
 it was called quantum) and it was replaced. So now we are going back…
 I am not in favor of this change.


 I'm also not really in favor of it.  Though it is a trivial method, having it
 in oslo implies that this is what UUIDs should look like across OpenStack
 projects.  And I'm in favor of consistency for UUIDs across the projects
 because the same parsers and checkers can then be used for input validation
 or log parsing.


 From: Zhongyue Luo zhongyue@intel.com
 Reply-To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org

 Date: Wednesday, November 13, 2013 8:07 AM
 To: OpenStack Development Mailing List
 openstack-dev@lists.openstack.org

 Subject: [openstack-dev] [ALL] Removing generate_uuid() from uuidutils

 Hi all,

 We had a discussion of the modules that are incubated in Oslo.


 https://etherpad.openstack.org/p/icehouse-oslo-status


 One of the conclusions we came to was to deprecate/remove uuidutils in
 this cycle.

 The first step into this change should be to remove generate_uuid() from
 uuidutils.

 The reason is that 1) generating the UUID string seems trivial enough to
 not need a function and 2) the string representation of uuid4 is not what we
 want in all projects.

 To address this, a patch is now on gerrit.
 https://review.openstack.org/#/c/56152/


 Each project should directly use the standard uuid module or implement its
 own helper function to generate uuids if this patch gets in.

 Any thoughts on this change? Thanks.

 --
 Intel SSG/STO/DCST/CIT
 880 Zixing Road, Zizhu Science Park, Minhang District, 200241, Shanghai,
 China
 +862161166500


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Trivial or not, people use it and frankly I don't see any value at all
in removing it.  As for some projects wanting a different format
of UUID, that doesn't make a lot of sense to me, but if that's what
somebody wants they should write their own method.  I strongly agree
with others with respect to the comments around code churn.  I see
little value in this.
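
For context, the helper under discussion essentially reduces to a
one-liner that any project could inline directly:

    import uuid

    def generate_uuid():
        # The whole of the helper: the string form of a random UUID.
        return str(uuid.uuid4())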

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ALL] Removing generate_uuid() from uuidutils

2013-11-13 Thread John Griffith
On Wed, Nov 13, 2013 at 9:02 AM, Julien Danjou jul...@danjou.info wrote:
 On Wed, Nov 13 2013, John Griffith wrote:

 Trivial or not, people use it and frankly I don't see any value at all
 in removing it.  As far as the some projects want a different format
 of UUID that doesn't make a lot of sense to me but if that's what
 somebody wants they should write their own method.  I strongly agree
 with others with respect to the comments around code-churn.  I see
 little value in this.

 The thing is that code in oslo-incubator is supposed to be graduated to
 a standalone Python library.

 We see little value in a standalone library providing a helper that does
 str(uuid.uuid4()).

Well, I see your point; it probably should've never been there in the
first place :)  Although I suppose it is good to have some form of
standardization for something, no matter how trivial.  Anyway, my
opinion is it seems like unnecessary churn, but I do see your point.  I
can modify it in Cinder easily enough and won't complain (too much
more), but I'm also wondering how many *other* things might fall into
this category.


 --
 Julien Danjou
 /* Free Software hacker * independent consultant
http://julien.danjou.info */

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Proposal to recognize indirect contributions to our code base

2013-11-13 Thread John Griffith
On Wed, Nov 13, 2013 at 5:14 PM, Jay Pipes jaypi...@gmail.com wrote:
 On 11/11/2013 12:44 PM, Daniel P. Berrange wrote:

 On Mon, Nov 11, 2013 at 03:20:20PM +0100, Nicolas Barcet wrote:

 Dear TC members,

 Our companies are actively encouraging our respective customers to have
 the
 patches they commission us to make be contributed back upstream.  In order
 to
 encourage this behavior from them and others, it would be nice if we
 could gain some visibility as sponsors of the patches in the same way
 we
 get visibility as authors of the patches today.

 The goal here is not to provide yet another way to count affiliations of
 direct contributors, nor is it a way to introduce sales pitches in
 contrib.
   The only acceptable and appropriate use of the proposal we are making
 is
 to signal when a patch is made by a contributor for a company other than the
 one he is currently employed by.

 For example if I work for a company A and write a patch as part of an
 engagement with company B, I would signal that Company B is the sponsor
 of
 my patch this way, not Company A.  Company B would under current
 circumstances not get any credit for their indirect contribution to our
 code base, while I think it is our intent to encourage them to
 contribute,
 even indirectly.

 To enable this, we are proposing that the commit text of a patch may
 include a
 sponsored-by: sponsorname
 line which could be used by various tools to report on these commits.
   Sponsored-by should not be used to report on the name of the company
 the
 contributor is already affiliated to.

 We would appreciate your comments on the subject and, eventually,
 your approval for its use.


 IMHO, let's call this what it is: marketing.

 I'm fine with the idea of a company wanting to have recognition for work
 that they fund. They can achieve this by putting out a press release or
 writing a blog post saying that they funded awesome feature XYZ to bring
 benefits ABC to the project on their own websites, or any number of other
 marketing approaches. Most / many companies and individuals contributing
 to OpenStack in fact already do this very frequently which is fine /
 great.

 I don't think we need to, nor should we, add anything to our code commits,
 review / development workflow / toolchain to support such marketing
 pitches.
 The identities recorded in git commits / gerrit reviewes / blueprints etc
 should exclusively focus on technical authorship, not sponsorship. Leave
 the marketing pitches for elsewhere.


 I agree with Daniel here. There's nothing wrong with marketing, and there's
 nothing wrong with a company promoting the funding that it contributed to
 get some feature written or high profile bug fixed. But, I don't believe
 this marketing belongs in the commit log. In the open source community,
 *individuals* develop and contribute code, not companies. And I'm not
 talking about joint contribution agreements, like the corporate CLA. I'm
 talking about the actual work that is performed by developers, technical
 documentation folks, QA folks, etc. Source control should be the domain of
 the individual, not the company.


Well said

 Best,
 -jay



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Improvement of Cinder API wrt https://bugs.launchpad.net/nova/+bug/1213953

2013-11-12 Thread John Griffith
On Tue, Nov 12, 2013 at 8:46 AM, Solly Ross sr...@redhat.com wrote:
 I'd like to get some sort of consensus on this before I start working on it.  
 Now that people are back from Summit, what would you propose?

 Best Regards,
 Solly Ross

 - Original Message -
 From: Solly Ross sr...@redhat.com
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Sent: Tuesday, November 5, 2013 10:40:48 AM
 Subject: Re: [openstack-dev] Improvement of Cinder API wrt 
 https://bugs.launchpad.net/nova/+bug/1213953

 Also, that's still an overly complicated process for one or two VMs.  The 
 idea behind the Nova command was to minimize the steps in the 
 image-volume-VM process for a single VM.

 - Original Message -
 From: Chris Friesen chris.frie...@windriver.com
 To: openstack-dev@lists.openstack.org
 Sent: Tuesday, November 5, 2013 9:23:39 AM
 Subject: Re: [openstack-dev] Improvement of Cinder API wrt  
 https://bugs.launchpad.net/nova/+bug/1213953

 Wouldn't you still need variable timeouts?  I'm assuming that copying
 multi-gig cinder volumes might take a while, even if it's local.  (Or
 are you assuming copy-on-write?)

 Chris

 On 11/05/2013 01:43 AM, Caitlin Bestler wrote:
 Replication of snapshots is one solution to this.

 You create a Cinder volume once, snapshot it, then replicate it to the
 hosts that need it (this is the piece currently missing). Then you clone
 there.

 I will be giving a conference session on this and other
 uses of snapshots in the last time slot Wednesday.

 On Nov 5, 2013 5:58 AM, Solly Ross sr...@redhat.com wrote:

 So,
 There's currently an outstanding issue with regards to a Nova
 shortcut command that creates a volume from an image and then boots
 from it in one fell swoop.  The gist of the issue is that there is
 currently a set timeout which can time out before the volume
 creation has finished (it's designed to time out in case there is an
 error), in cases where the image download or volume creation takes
 an extended period of time (e.g. under a Gluster backend for Cinder
 with certain network conditions).

 The proposed solution is a modification to the Cinder API to provide
 more detail on what exactly is going on, so that we could
 programmatically tune the timeout.  My initial thought is to create
 a new column in the Volume table called 'status_detail' to provide
 more detailed information about the current status.  For instance,
 for the 'downloading' status, we could have 'status_detail' be the
 completion percentage or JSON containing the total size and the
 current amount copied.  This way, at each interval we could check to
 see if the amount copied had changed, and trigger the timeout if it
 had not, instead of blindly assuming that the operation will
 complete within a given amount of time.

 What do people think?  Would there be a better way to do this?

 Best Regards,
 Solly Ross

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




I think the best solution here is to clean up the setting of
error status for volumes during create/download and skip the timeout
altogether.  Last time I looked even this wasn't in that bad of shape
(with the exception of the phantom VG doesn't exist error that none of
us seem to be able to recreate).  I'm not a fan of complex variable
timeout algorithms, and I'm even less of a fan of adding API
functions to gather timeout info.

I would like to hear whether there's actually a solution offered by
callbacks that the rest of us just aren't seeing here; I don't know
how that solves the problem though.
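
For reference, the 'status_detail' proposal being debated amounts to
roughly the following (field names made up for illustration):

    # Hypothetical contents of the proposed column for a volume in the
    # 'downloading' state:
    status_detail = {
        'total_bytes': 10 * 1024 ** 3,    # total image size
        'copied_bytes': 512 * 1024 ** 2,  # amount copied so far
    }
    # The caller polls and triggers the timeout only if 'copied_bytes'
    # stops advancing between checks.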

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Improvement of Cinder API wrt https://bugs.launchpad.net/nova/+bug/1213953

2013-11-12 Thread John Griffith
On Tue, Nov 12, 2013 at 10:25 AM, Caitlin Bestler
caitlin.best...@nexenta.com wrote:
 On 11/12/2013 8:09 AM, John Griffith wrote:

 On Tue, Nov 12, 2013 at 8:46 AM, Solly Ross sr...@redhat.com wrote:

 I'd like to get some sort of consensus on this before I start working on
 it.  Now that people are back from Summit, what would you propose?

 Best Regards,
 Solly Ross

 - Original Message -
 From: Solly Ross sr...@redhat.com
 To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org
 Sent: Tuesday, November 5, 2013 10:40:48 AM
 Subject: Re: [openstack-dev] Improvement of Cinder API wrt
 https://bugs.launchpad.net/nova/+bug/1213953

 Also, that's still an overly complicated process for one or two VMs.  The
 idea behind the Nova command was to minimize the steps in the
 image-volume-VM process for a single VM.


 Complexity is not an issue. Bandwidth and latency are issues.

 Any solution that achieves the user objectives can be managed by a taskflow,
 and it will be simple for the user to apply. The amount of code
 involved ranks relatively low among the factors to compare. Taking extra time
 and consuming extra bandwidth that were not required are serious issues.

 My assumption is that the cinder backend will be able to employ
 copy-on-write when cloning volumes to at least make a thinly provisioned
 version available almost instantly (even if the full space is allocated and
 then copied asynchronously). Permanently thin clones just require that the
 relationship be tracked. Currently that is up to the volume driver, but we
 could always make these relationships legitimate by recognizing them in
 Cinder proper.

Sorry, but I'm not seeing where you're going with this in relation to
the question being asked.  The question is how to deal with creating a
new bootable volume from the nova boot command and being able to tell
whether it has timed out or errored while waiting for creation.  Not
sure I'm following your solution here; in an ideal scenario, yes, if
the backend has a volume with the image already available it could
utilize things like cloning or snapshot features, but that's a pretty
significant pre-req and I'm not sure how it relates to the general
problem that's being discussed.


 The goal here is not to require new behaviors of backends, but to enable
 solutions that already exist to be deployed to the benefit of end users.
 Requiring synchronous multi-GB copies (locally or, even worse, over the
 network) is not a minor price that we should expect customers to endure
 for the sake of software uniformity.




 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa] Policy on spelling and grammar

2013-11-11 Thread John Griffith
On Tue, Nov 12, 2013 at 2:49 AM, James Slagle james.sla...@gmail.com wrote:
 -1 from me as well.

 When I first started with OpenStack, I probably would have agreed with
 letting small grammar mistakes and typos slide by.

 However, I now feel that getting commit messages right is more
 important.  Also keep in mind that with small grammar mistakes, the
 intent may be obvious to a native English speaker, but to another
 non-native English speaker it may not be.  And just a few small
 grammar mistakes/misspellings/typos can add up until the meaning may
 be harder to figure out for another non-native English speaker.

 Also, I can't speak for everyone, but in general I've found most folks
 open to grammar corrections if English is not their native language
 b/c they want to learn and fix the mistakes.


 --
 -- James Slagle
 --

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Guess I'm in the minority here... some of the nit-picking on commit
messages and comments is a bit extreme.  Sure, there are some cases
where I think offering a correction is great/appropriate, but, for
example, issuing a -1 on somebody's patch because they mixed up their
use of 'there' seems a bit lame.

Seems to me there's a middle ground here, but honestly, if your value
add to the review process is catching grammatical or spelling errors
in comments and commit messages, I'd argue that in most cases it would
be nice to have more substantive feedback to go along with it.  I
happen to be a top offender here in terms of grammar or spelling
errors in comments, so I'm a bit biased on the topic. :)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Proposal to recognize indirect contributions to our code base

2013-11-11 Thread John Griffith
On Mon, Nov 11, 2013 at 10:44 AM, Daniel P. Berrange
berra...@redhat.com wrote:
 On Mon, Nov 11, 2013 at 03:20:20PM +0100, Nicolas Barcet wrote:
 Dear TC members,

 Our companies are actively encouraging our respective customers to have the
 patches they commission us to make be contributed back upstream.  In order to
 encourage this behavior from them and others, it would be nice if we
 could gain some visibility as sponsors of the patches in the same way we
 get visibility as authors of the patches today.

 The goal here is not to provide yet another way to count affiliations of
 direct contributors, nor is it a way to introduce sales pitches in contrib.
  The only acceptable and appropriate use of the proposal we are making is
 to signal when a patch is made by a contributor for a company other than the
 one he is currently employed by.

 For example if I work for a company A and write a patch as part of an
 engagement with company B, I would signal that Company B is the sponsor of
 my patch this way, not Company A.  Company B would under current
 circumstances not get any credit for their indirect contribution to our
 code base, while I think it is our intent to encourage them to contribute,
 even indirectly.

 To enable this, we are proposing that the commit text of a patch may
 include a
sponsored-by: sponsorname
 line which could be used by various tools to report on these commits.
  Sponsored-by should not be used to report on the name of the company the
 contributor is already affiliated to.

 We would appreciate your comments on the subject and, eventually,
 your approval for its use.

 IMHO, let's call this what it is: marketing.

 I'm fine with the idea of a company wanting to have recognition for work
 that they fund. They can achieve this by putting out a press release or
 writing a blog post saying that they funded awesome feature XYZ to bring
 benefits ABC to the project on their own websites, or any number of other
 marketing approaches. Most / many companies and individuals contributing
 to OpenStack in fact already do this very frequently which is fine / great.

 I don't think we need to, nor should we, add anything to our code commits,
 review / development workflow / toolchain to support such marketing pitches.
 The identities recorded in git commits / gerrit reviewes / blueprints etc
 should exclusively focus on technical authorship, not sponsorship. Leave
 the marketing pitches for elsewhere.

+1000


 Regards,
 Daniel
 --
 |: http://berrange.com  -o-  http://www.flickr.com/photos/dberrange/ :|
 |: http://libvirt.org  -o- http://virt-manager.org :|
 |: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
 |: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] Propose Jay Bryant for core

2013-11-11 Thread John Griffith
Deciding vote from Winston!!  Welcome to the Cinder core team, Jay!!!
Congrats and thanks for the hard work and review contributions thus far!
On Nov 11, 2013 7:58 PM, Huang Zhiteng winsto...@gmail.com wrote:

 Hope it's not too late.  +1

 On Wed, Oct 30, 2013 at 11:37 PM, Duncan Thomas duncan.tho...@gmail.com
 wrote:
  +1
 
  On 29 October 2013 20:54, John Griffith john.griff...@solidfire.com
 wrote:
  Hey,
 
  I wanted to propose Jay Bryant (AKA jsbryant, AKA jungleboy, AKA
  :) ) for core membership on the Cinder team.  Jay has been working on
  Cinder for a while now and has really shown some dedication and
  provided much needed help with quality reviews.  In addition to his
  review activity he's also been very active in IRC and in Cinder
  development as well.
 
  I think he'd be a good add to the core team.
 
  Thanks,
  John
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
  --
  Duncan Thomas
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 --
 Regards
 Huang Zhiteng

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] cinder metrics with nova-scheduler

2013-11-08 Thread John Griffith
On Thu, Nov 7, 2013 at 10:51 PM, Abbass MAROUNI
abbass.maro...@virtualscale.fr wrote:
 Hello,

 We want to be able to launch a VM with a number of cinder volumes on the
 same host (the host is a compute and a storage node).
 We're looking for a way to tell nova-scheduler to work with cinder-scheduler
 so that we can filter and weight hosts according to our needs.

 According to the documentation on nova-scheduler filters:

 DiskFilter: Only schedule instances on hosts if there is sufficient disk
 available for ephemeral storage.

 Is there any implementation of a filter to check the persistent storage on
 a host?
 Do you think that this feature is doable?

 Best Regards,

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Hello Abbass,

Currently there is not a way to do this; however, your request is very
timely, as we just discussed this very use case at the summit today and
we do have plans to work on it for the Icehouse release.
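
In the meantime, for anyone who wants to experiment, a custom filter
has a well-known shape.  A rough sketch only; the persistent-storage
field is hypothetical, since nothing publishes it today:

    from nova.scheduler import filters

    class PersistentDiskFilter(filters.BaseHostFilter):
        """Pass hosts that report enough persistent (Cinder-backed) disk."""

        def host_passes(self, host_state, filter_properties):
            spec = filter_properties.get('request_spec', {})
            requested_gb = spec.get('persistent_gb', 0)
            # 'persistent_free_gb' is made up; publishing it is exactly
            # the missing piece discussed in this thread.
            free_gb = getattr(host_state, 'persistent_free_gb', 0)
            return free_gb >= requested_gb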

John

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Improvement of Cinder API wrt https://bugs.launchpad.net/nova/+bug/1213953

2013-11-05 Thread John Griffith
On Nov 5, 2013 3:33 PM, Avishay Traeger avis...@il.ibm.com wrote:

 So while doubling the timeout will fix some cases, there will be cases
 with larger volumes and/or slower systems where the bug will still hit.  Even
 timing out on the download progress can lead to unnecessary timeouts (if
 it's really slow, or the volume is really big, it can stay at 5% for some
 time).

 I think the proper fix is to make sure that Cinder is moving the volume
 into 'error' state in all cases where there is an error.  Nova can then
 poll as long as it's in the 'downloading' state, until it's 'available' or
 'error'.

Agree
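
Something like this simple poll would do it; a sketch only, with
illustrative names rather than the actual Nova code:

    import time

    def wait_for_volume(volume_api, context, volume_id, interval=2):
        while True:
            volume = volume_api.get(context, volume_id)
            status = volume['status']
            if status == 'available':
                return volume
            if status == 'error':
                raise RuntimeError('volume %s went to error' % volume_id)
            # still 'creating'/'downloading'; keep waiting
            time.sleep(interval)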

 Is there a reason why Cinder would legitimately get stuck in
 'downloading'?

 Thanks,
 Avishay



 From:   John Griffith john.griff...@solidfire.com
 To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org,
 Date:   11/05/2013 07:41 AM
 Subject: Re: [openstack-dev] Improvement of Cinder API wrt
 https://bugs.launchpad.net/nova/+bug/1213953



 On Tue, Nov 5, 2013 at 7:27 AM, John Griffith
 john.griff...@solidfire.com wrote:
  On Tue, Nov 5, 2013 at 6:29 AM, Chris Friesen
  chris.frie...@windriver.com wrote:
  On 11/04/2013 03:49 PM, Solly Ross wrote:
 
  So, There's currently an outstanding issue with regards to a Nova
  shortcut command that creates a volume from an image and then boots
  from it in one fell swoop.  The gist of the issue is that there is
  currently a set timeout which can time out before the volume creation
  has finished (it's designed to time out in case there is an error),
  in cases where the image download or volume creation takes an
  extended period of time (e.g. under a Gluster backend for Cinder with
  certain network conditions).
 
  The proposed solution is a modification to the Cinder API to provide
  more detail on what exactly is going on, so that we could
  programmatically tune the timeout.  My initial thought is to create a
  new column in the Volume table called 'status_detail' to provide more
  detailed information about the current status.  For instance, for the
  'downloading' status, we could have 'status_detail' be the completion
  percentage or JSON containing the total size and the current amount
  copied.  This way, at each interval we could check to see if the
  amount copied had changed, and trigger the timeout if it had not,
  instead of blindly assuming that the operation will complete within a
  given amount of time.
 
  What do people think?  Would there be a better way to do this?
 
 
  The only other option I can think of would be some kind of callback
that
  cinder could explicitly call to drive updates and/or notifications of
 faults
  rather than needing to wait for a timeout.  Possibly a combination of
 both
  would be best, that way you could add a --poll option to the create
 volume
  and boot CLI command.
 
  I come from the kernel-hacking world and most things there involve
  event-driven callbacks.  Looking at the openstack code I was kind of
  surprised to see hardcoded timeouts and RPC casts with no callbacks to
  indicate completion.
 
  Chris
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 I believe you're referring to [1], which was closed after a patch was
 added to nova to double the timeout length.  Based on the comments it
 sounds like you're still seeing issues on some Gluster (maybe other) setups?

 Rather than mess with the API in order to debug, why don't you use
 the info in the cinder logs?

 [1] https://bugs.launchpad.net/nova/+bug/1213953

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Improvement of Cinder API wrt https://bugs.launchpad.net/nova/+bug/1213953

2013-11-04 Thread John Griffith
On Tue, Nov 5, 2013 at 7:27 AM, John Griffith
john.griff...@solidfire.com wrote:
 On Tue, Nov 5, 2013 at 6:29 AM, Chris Friesen
 chris.frie...@windriver.com wrote:
 On 11/04/2013 03:49 PM, Solly Ross wrote:

 So, There's currently an outstanding issue with regards to a Nova
 shortcut command that creates a volume from an image and then boots
 from it in one fell swoop.  The gist of the issue is that there is
 currently a set timeout which can time out before the volume creation
 has finished (it's designed to time out in case there is an error),
 in cases where the image download or volume creation takes an
 extended period of time (e.g. under a Gluster backend for Cinder with
 certain network conditions).

 The proposed solution is a modification to the Cinder API to provide
 more detail on what exactly is going on, so that we could
 programmatically tune the timeout.  My initial thought is to create a
 new column in the Volume table called 'status_detail' to provide more
 detailed information about the current status.  For instance, for the
 'downloading' status, we could have 'status_detail' be the completion
 percentage or JSON containing the total size and the current amount
 copied.  This way, at each interval we could check to see if the
 amount copied had changed, and trigger the timeout if it had not,
 instead of blindly assuming that the operation will complete within a
 given amount of time.

 What do people think?  Would there be a better way to do this?


 The only other option I can think of would be some kind of callback that
 cinder could explicitly call to drive updates and/or notifications of faults
 rather than needing to wait for a timeout.  Possibly a combination of both
 would be best, that way you could add a --poll option to the create volume
 and boot CLI command.

 I come from the kernel-hacking world and most things there involve
 event-driven callbacks.  Looking at the openstack code I was kind of
 surprised to see hardcoded timeouts and RPC casts with no callbacks to
 indicate completion.

 Chris


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

I believe you're referring to [1], which was closed after a patch was
added to nova to double the timeout length.  Based on the comments it
sounds like you're still seeing issues on some Gluster (maybe other) setups?

Rather than mess with the API in order to debug, why don't you use
the info in the cinder logs?

[1] https://bugs.launchpad.net/nova/+bug/1213953

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Migrating to newer full projects from what used to be part of nova

2013-10-31 Thread John Griffith
On Thu, Oct 31, 2013 at 10:26 AM, Jesse Pretorius
jesse.pretor...@gmail.com wrote:
 Hi everyone,

 Migrations from Essex to Grizzly/Havana are starting to hit my radar of
 responsible tasks and I'm disappointed that beyond this old wiki note [1]

Is your disappointment that there isn't a path from Essex to
Grizzly/Havana, or are you unhappy with the content?

 and a wealth of questions with very few answers [2], there is very little
 available to support the migrations from what used to be part of nova to the
 newer full projects.

 I really think that as OpenStack grows and the projects split out, one of
 the focal areas really needs to be on ensuring that people using the older
 versions can migrate to the newer versions without needing to do all sorts
 of terrible hacks.

 Issues at hand, for now, are:

 1) Migrating from nova-volume to cinder

So, to be quite honest, we never intended to support version skips like
you describe.  Perhaps that wasn't such a good choice in retrospect.  I'm
willing to take a look at putting some special migrations in, but
honestly, given the changes in Nova alone between Essex and Folsom,
making a full jump from something like Essex to Havana is going to be
challenging, not just for adding Cinder but overall.  I know it's not
ideal, but right now the best/preferred solution may in fact be to do
incremental updates as intended.

Going forward, maybe we can come up with something better.

 2) Migrating from nova-network to neutron

 It'd be great if we could pool efforts to figure out an effective way of
 handling these migrations. Whether they're handled in the 'db sync' process,
 or by a set of companion utilities instead is immaterial to a deployer... as
 long as something suitable and effective is available to cater for the need.

 [1] https://wiki.openstack.org/wiki/MigrateToCinder
 [2] https://www.google.co.za/search?q=migrate+'nova-network'+quantum

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] How to skip certain unit tests?

2013-10-30 Thread John Griffith
On Wed, Oct 30, 2013 at 8:02 PM, Vijay B os.v...@gmail.com wrote:
 Hi,

 How can we skip certain unit tests when running run_tests.sh? I'm looking at
 the Openstack unit test page at
 http://docs.openstack.org/developer/nova/devref/unit_tests.html but I cannot
 find info on how to do this. Any idea if there already is a way to do this?
 If not, does the community think it would be helpful to have such a
 facility? I think it would help to have this if certain tests are broken. Of
 course, tests should never be broken and should be fixed right away, but at
 certain times it may not be possible to wait for the tests to be fixed,
 especially if there is an external dependency, and we may want to be able to
 customize automated builds temporarily until the broken tests are fixed.

 Regards,
 Vijay

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Hi Vijay,

Theoretically there should never be broken tests in master; that's
what the gates are for, and if there are any they should be fixed very
quickly.

Back to your question, I don't know of a way to skip from
run_tests.sh, but there is a skip decorator that can be added to tests
in the code.  You can also specify specific tests to run.  Using
run_tests.sh (you can also do more sophisticated things with testr or
tox directly) you could do something like:
'run_tests.sh cinder.tests.test_volume' or, more granular:
'run_tests.sh cinder.tests.test_volume:VolumeTestCase.test_create_delete_volume'
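
For illustration, the skip decorator approach looks something like this
(a minimal sketch; the test names and bug number here are made up):

    import testtools

    EXTERNAL_DEP_BROKEN = True  # flip this once the dependency is fixed

    class VolumeTestCase(testtools.TestCase):

        @testtools.skip("Broken by external dependency, see bug 123456")
        def test_create_delete_volume(self):
            pass

        @testtools.skipIf(EXTERNAL_DEP_BROKEN, "External dependency broken")
        def test_attach_volume(self):
            pass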

Hope that helps.

John



Re: [openstack-dev] [Cinder] Propose Jay Bryant for core

2013-10-29 Thread John Griffith
On Tue, Oct 29, 2013 at 2:54 PM, John Griffith
john.griff...@solidfire.com wrote:
 Hey,

 I wanted to propose Jay Bryant (AKA jsbryant, AKA jungleboy, AKA
 :) ) for core membership on the Cinder team.  Jay has been working on
 Cinder for a while now and has really shown some dedication and
 provided much needed help with quality reviews.  In addition to his
 review activity he's also been very active in IRC and in Cinder
 development as well.

 I think he'd be a good add to the core team.

 Thanks,
 John
For those that would like to just click rather than type:

http://russellbryant.net/openstack-stats/cinder-reviewers-180.txt



Re: [openstack-dev] [Nova] Preserving ephemeral block device on rebuild?

2013-10-28 Thread John Griffith
On Mon, Oct 28, 2013 at 4:49 AM, Robert Collins
robe...@robertcollins.net wrote:

 On 28 October 2013 23:17, John Garbutt j...@johngarbutt.com wrote:
  Is there a reason why you could not just use a Cinder Volume for your
  data, in this case?

 Because this is at the baremetal layer; we want local disks - e.g.
 some of the stuff we might put in that partition would be cinder lvm
 volumes for serving out to VM guests. Until we have a cinder
 implementation that can reference the disks in the same baremetal node
 an instance is being deployed to we can't use Cinder. We want that
 implementation, and since it involves nontrivial changes as well as
 cross-service interactions we don't want to do it in nova-baremetal -
 doing it in Ironic is the right place. But, we also don't want to
 block all progress on TripleO until we get that done in Ironic

  While at first glance it feels rather wrong, and un-cloudy, I do
  see something useful about refreshing the base disk, and leaving the
  data disks alone. Perhaps it's something that could be described in
  the block device mapping, where you have a local volume that you
  choose to be non-ephemeral, except for server terminate, or something
  like that?

 Yeah. Except I'd like to just use ephemeral for that, since it meets
 all of the criteria already, except that it's detached and recreated
 on rebuild. This isn't a deep opinion though - mainly I don't want to
 invest a bunch of time building something needlessly different to the
 existing facility, which cloud-init and other tools already know how
 to locate and use.

 -Rob


 --
 Robert Collins rbtcoll...@hp.com
 Distinguished Technologist
 HP Converged Cloud


Personally I'd rather go the proper route and try to get what you need
into Cinder; FWIW, local storage provisioning is something that Vish
brought up last summit but nobody picked up.  I'd like to make that
happen early in the I release.  I'm not familiar with the work-around
you're proposing though, so maybe it's not an issue; I just hate to put
in a temporary hack that will likely end up taking on a life of its own
once it lands.


Re: [openstack-dev] Stable backports

2013-10-28 Thread John Griffith
On Mon, Oct 28, 2013 at 7:37 AM, Russell Bryant rbry...@redhat.com wrote:

 On 10/27/2013 07:07 AM, Gary Kotton wrote:
  Hi,
  In the case of stable back ports which have Fixes bug: #XYZ we will
  have to change this to the new format Closes-bug: #XYZ. Any thoughts
  on this?

 It doesn't have to change, right?  The old format is still treated like
 Closes-bug AFAIK.  When doing backports, I would just leave the commit
 message alone.

 --
 Russell Bryant


+1. I don't touch commit messages in a backport and have pushed back on
folks that have.  The backport gets exactly what the original patch to
master had.
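
For example (SHAs and bug number invented here), a backported commit
keeps the original message verbatim, with only the cherry-pick note that
git appends when you use 'git cherry-pick -x':

    Fix volume deletion race

    Fixes bug: #1234567
    Change-Id: I0123456789abcdef0123456789abcdef01234567
    (cherry picked from commit 1a2b3c4d)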


Re: [openstack-dev] [ceilometer] [qa] Ceilometer ERRORS in normal runs

2013-10-23 Thread John Griffith
On Sun, Oct 20, 2013 at 7:38 AM, Sean Dague s...@dague.net wrote:

 Dave Kranz has been building a system so that we can ensure that during a
 Tempest run services don't spew ERRORs in the logs. Eventually, we're going
 to gate on this, because there is nothing that Tempest does to the system
 that should cause any OpenStack service to ERROR or stack trace (Errors
 should actually be exceptional events that something is wrong with the
 system, not regular events).


So I have to disagree with the approach being taken here.  Particularly in
the case of Cinder and the negative tests that are in place.  When I read
this last week I assumed you actually meant that Exceptions were
exceptional and nothing in Tempest should cause Exceptions.  It turns out
you apparently did mean Errors.  I completely disagree here, Errors happen,
some are recovered, some are expected by the tests etc.  Having a policy
and especially a gate that says NO ERROR MESSAGE in logs makes absolutely
no sense to me.

Something like NO TRACE/EXCEPTION MESSAGE in logs I can agree with, but
this makes no sense to me.  By the way, here's a perfect example:
https://bugs.launchpad.net/cinder/+bug/1243485

As long as we have Tempest tests that do things like show a non-existent
volume, you're going to get an Error message, and quite frankly I think
you should.
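
The bug above is exactly this pattern.  A negative test along these lines
is guaranteed to log an error server-side, because the whole point is to
request something that isn't there (a sketch; the base class and client
method names are from memory and may differ from the actual Tempest tree):

    import uuid

    from tempest.api.volume import base
    from tempest import exceptions

    class VolumesNegativeTest(base.BaseVolumeTest):

        def test_get_nonexistent_volume(self):
            # The API must return 404; cinder-api currently logs the
            # failed lookup at ERROR even though it is fully expected.
            self.assertRaises(exceptions.NotFound,
                              self.volumes_client.get_volume,
                              str(uuid.uuid4()))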



 Ceilometer is currently one of the largest offenders in dumping ERRORs in
 the gate -
 http://logs.openstack.org/68/52768/1/check/check-tempest-devstack-vm-full/76f83a4/console.html#_2013-10-19_14_51_51_271
 (that item isn't in our whitelist yet, so you'll see a lot of it at the end
 of every run)

 and
 http://logs.openstack.org/68/52768/1/check/check-tempest-devstack-vm-full/76f83a4/logs/screen-ceilometer-collector.txt.gz?level=TRACE
 for full details

 This seems like something is wrong in the integration, and would be really
 helpful if we could get ceilometer eyes on this one to put ceilo into a
 non-erroring state.

 -Sean

 --
 Sean Dague
 http://dague.net




Re: [openstack-dev] [ceilometer] [qa] Ceilometer ERRORS in normal runs

2013-10-23 Thread John Griffith
On Wed, Oct 23, 2013 at 8:47 AM, Sean Dague s...@dague.net wrote:

 On 10/23/2013 10:40 AM, John Griffith wrote:




  On Sun, Oct 20, 2013 at 7:38 AM, Sean Dague s...@dague.net wrote:

 Dave Kranz has been building a system so that we can ensure that
 during a Tempest run services don't spew ERRORs in the logs.
 Eventually, we're going to gate on this, because there is nothing
 that Tempest does to the system that should cause any OpenStack
 service to ERROR or stack trace (Errors should actually be
 exceptional events that something is wrong with the system, not
 regular events).


 So I have to disagree with the approach being taken here.  Particularly
 in the case of Cinder and the negative tests that are in place.  When I
 read this last week I assumed you actually meant that Exceptions were
 exceptional and nothing in Tempest should cause Exceptions.  It turns
 out you apparently did mean Errors.  I completely disagree here, Errors
 happen, some are recovered, some are expected by the tests etc.  Having
 a policy and especially a gate that says NO ERROR MESSAGE in logs makes
 absolutely no sense to me.

 Something like NO TRACE/EXCEPTION MESSAGE in logs I can agree with, but
 this makes no sense to me.  By the way, here's a perfect example:
  https://bugs.launchpad.net/cinder/+bug/1243485

 As long as we have Tempest tests that do things like show non-existent
 volume you're going to get an Error message and I think that you should
 quite frankly.


 Ok, I guess that's where we probably need to clarify what Not Found is.
 Because Not Found to me seems like it should be a request at INFO level,
 not ERROR.



 ERROR from an admin perspective should really be something that would
 suitable for sending an alert to an administrator for them to come and fix
 the cloud.

 TRACE is actually a lower level of severity in our log systems than ERROR
 is.


Sorry, by Trace I was referring to unhandled stack/exception trace messages
in the logs.
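
To make the distinction concrete, here's roughly how I read the proposal
in oslo-logging terms (a sketch using the oslo-incubator logging module
as vendored in cinder, not actual Cinder code):

    from cinder.openstack.common import log as logging

    LOG = logging.getLogger(__name__)

    def report_lookup_failure(volume_id):
        # routine client error, expected during negative tests:
        LOG.info("volume %s could not be found", volume_id)

    def report_target_failure(volume_id):
        # genuinely broken cloud, worth paging an operator:
        LOG.error("unable to connect to iSCSI target for volume %s",
                  volume_id)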



 -Sean

 --
 Sean Dague
 http://dague.net




Re: [openstack-dev] RFC - Icehouse logging harmonization

2013-10-23 Thread John Griffith
On Wed, Oct 23, 2013 at 1:03 PM, Clark Boylan clark.boy...@gmail.com wrote:

 On Wed, Oct 23, 2013 at 11:20 AM, Sean Dague s...@dague.net wrote:
  One of the efforts that we're working on from the QA team is tooling that
  ensures we aren't stack tracing into our test logs during normal tempest
  runs. Random stack traces are scary to cloud admins consuming OpenStack
  logs, and exceptions in the logs should really be exceptional events (and
  indicative of a failing system), not something that we do by default. Our
  intent is to gate code on clean logs (no stacktraces) eventually (i.e. if
  you try to land a patch that causes stack traces in OpenStack, that
 becomes
  a failing condition), and we've got an incremental white list based
 approach
  that should let us make forward progress on that. But on that thread -
 
 http://lists.openstack.org/pipermail/openstack-dev/2013-October/017012.html
  we exposed another issue... across projects, OpenStack is very
 inconsistent
  with logging.
 
  First... baseline, these are the logging levels that we have in OpenStack
  today (with numeric values, higher = worse):
 
  CRITICAL = 50
  FATAL = CRITICAL
  ERROR = 40
  WARNING = 30
  WARN = WARNING
  AUDIT = 21  # invented for oslo-logging
  INFO = 20
  DEBUG = 10
  NOTSET = 0
 
  We also have TRACE, which isn't a level per se; it happens at another
  level. However TRACE is typically an ERROR in the way we use it.
 
 
  Some examples of oddities in the current system (all from a single
  devstack/tempest run):
 
  Example 1:
  ==
 
  n-conductor log in tempest/devstack -
 
 http://logs.openstack.org/70/52870/3/check/check-tempest-devstack-vm-full/f46b756/logs/screen-n-cond.txt.gz
 
  Total log lines: 84076
  Total non DEBUG lines: 61
 
   Question: do we need more than 1 level of DEBUG? 3 orders of magnitude
   information change between INFO and DEBUG seems too steep a cliff.
 
  Example 2:
  ==
 
  ceilometer-collector -
 
 http://logs.openstack.org/70/52870/3/check/check-tempest-devstack-vm-full/f46b756/logs/screen-ceilometer-collector.txt.gz
 
  AUDIT log level being used as DEBUG level (even though it's higher
 than
  INFO).
 
  2013-10-23 12:24:20.093 26234 AUDIT ceilometer.pipeline [-] Flush
 pipeline
  meter_pipeline
  2013-10-23 12:24:20.093 26234 AUDIT ceilometer.pipeline [-] Flush
 pipeline
  cpu_pipeline
  2013-10-23 12:24:20.094 26234 AUDIT ceilometer.pipeline [-] Flush
 pipeline
  meter_pipeline
  2013-10-23 12:24:20.094 26234 AUDIT ceilometer.pipeline [-] Flush
 pipeline
  cpu_pipeline
 
  (this is every second, for most seconds, for the entire run)
 
  Example 3:
  ===
 
  cinder-api -
 
 http://logs.openstack.org/70/52870/3/check/check-tempest-devstack-vm-full/f46b756/logs/screen-c-api.txt.gz?level=ERROR
  ERROR level being used for 404s of volumes
 
  Example 4:
  ===
  glance-api -
 
 http://logs.openstack.org/70/52870/3/check/check-tempest-devstack-vm-full/f46b756/logs/screen-g-api.txt.gz
 
  2013-10-23 12:23:27.436 23731 ERROR glance.store.sheepdog [-] Error in
  store configuration: Unexpected error while running command.
  Command: collie  Exit code: 127  Stdout: ''  Stderr: '/bin/sh: 1:
  collie: not found\n'
  2013-10-23 12:23:27.436 23731 WARNING glance.store.base [-] Failed to
  configure store correctly: Store sheepdog could not be configured
  correctly. Reason: Error in store configuration: Unexpected error while
  running command. Command: collie  Exit code: 127  Stdout: ''  Stderr:
  '/bin/sh: 1: collie: not found\n'  Disabling add method.
 
  part of every single Tempest / Devstack run, even though we aren't
 trying to
  configure sheepdog in the gate
 
 
  I think we can, and should do better, and started trying to brain dump
 into
  this etherpad -
  https://etherpad.openstack.org/p/icehouse-logging-harmonization
  (examples included).
 
  This is one of those topics that I think our current 6 track summit model
  doesn't make it easy to address, as we really need general consensus
  across any project that's using oslo-logging, so I believe the mailing
  list is the better option, at least for now.
 
 
  Goals - Short Term
  ===
  As much feedback as possible from both core projects and OpenStack
 deployers
  about the kinds of things that they believe we should be logging, and the
  kinds of levels they think those things should land at.
 
  Determining how crazy it is to try to harmonize this across services.
 
  Figure out who else wants to help. Where help means:
   * helping figure out what's already consensus in services
   * helping figure out things that are really aberrant from that consensus
   * helping build consensus with various core teams on a common
   * helping with contributions to projects that are interested in
  contributions to move them closer to the consensus
 
  Determining if everyone just hates the idea, and I should give up now. :)
  (That is a valid response to this RFC, feel free to 

Re: [openstack-dev] Announce of Rally - benchmarking system for OpenStack

2013-10-17 Thread John Griffith
On Thu, Oct 17, 2013 at 1:44 PM, Jay Pipes jaypi...@gmail.com wrote:

 On 10/17/2013 03:32 PM, Boris Pavlovic wrote:

 Jay,


 Or, alternately, just have Rally as part of Tempest.


  Actually, tempest is used only to verify that the cloud works properly.
  And verification is only a small part of Rally.

 At this moment we are using fuel-ostf-tests, but we are going to use
 tempest to verify cloud.


 OK, cool... was just a suggestion :) Tempest has a set of stress tests [1]
 which are kind of related, which is the only reason I brought it up.

 Best,
 -jay

 [1] 
 https://github.com/openstack/tempest/tree/master/tempest/stress




Actually that seems like a pretty good suggestion IMO, at least something
worth some investigation and consideration before quickly discounting it.
Rather than that's not what Tempest is, maybe it's something Tempest
could do.  I don't know, I'm not saying one way or the other, just
wondering if it's worth some investigation or thought.

By the way, VERY COOL!!


Re: [openstack-dev] [cinder] dd performance for wipe in cinder

2013-10-11 Thread John Griffith
On Fri, Oct 11, 2013 at 8:41 AM, Matt Riedemann mrie...@us.ibm.com wrote:

 Have you looked at the volume_clear and volume_clear_size options in
 cinder.conf?

 https://github.com/openstack/cinder/blob/2013.2.rc1/etc/cinder/cinder.conf.sample#L1073

 The default is to zero out the volume.  You could try 'none' to see if
 that helps with performance.



 Thanks,

 MATT RIEDEMANN
 Advisory Software Engineer
 Cloud Solutions and OpenStack Development
 --
 Phone: 1-507-253-7622 | Mobile: 1-507-990-1889
 E-mail: mrie...@us.ibm.com

 3605 Hwy 52 N
 Rochester, MN 55901-1407
 United States





 From: cosmos cosmos cosmos0...@gmail.com
 To: openstack-dev@lists.openstack.org
 Date: 10/11/2013 04:26 AM
 Subject: [openstack-dev] dd performance for wipe in cinder



 Hello.
 My name is Rucia, from Samsung SDS.

 I am having trouble with Cinder volume deletion.
 I am working on supporting big data storage on LVM.

 Deleting a Cinder LVM volume takes too much time because of dd.
 The Cinder volume is 200GB, holding Hadoop master data.
 When I delete a Cinder volume using 'dd if=/dev/zero of=$cinder-volume
 count=100 bs=1M' it takes about 30 minutes.

 Is there a better and quicker way to delete?

 Cheers.
 Rucia.





As Matt pointed out there's an option to turn off secure delete
altogether.  The reason for the volume_clear setting (aka secure delete)
is that since we're allocating volumes via LVM from a shared VG, there is
the possibility that a user had a volume with sensitive data and then
deleted/removed the logical volume they were using.  If there was no
encryption, or if no secure delete operation were performed, it is
possible that another tenant creating a new volume from the Volume Group
could be allocated some of the blocks that the previous volume utilized,
and could potentially inspect/read those blocks and obtain some of the
other user's data.

To be honest the options provided won't likely make this operation as
fast as you'd like, especially when dealing with 200GB volumes.
Depending on your environment you may want to consider using encryption,
or possibly, if acceptable, using volume_clear=none.
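
For reference, the relevant cinder.conf settings would look something
like this (a sketch based on the sample config linked above; verify the
exact option names against your release):

    [DEFAULT]
    # How to wipe deleted volumes: 'zero', 'shred' or 'none'
    volume_clear = zero
    # Only wipe the first N MiB of each deleted volume (0 = wipe it all)
    volume_clear_size = 100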

John


Re: [openstack-dev] [Hyper-V] Havana status

2013-10-11 Thread John Griffith
On Fri, Oct 11, 2013 at 9:12 AM, Bob Ball bob.b...@citrix.com wrote:

  -Original Message-
  From: Russell Bryant [mailto:rbry...@redhat.com]
  Sent: 11 October 2013 15:18
  To: openstack-dev@lists.openstack.org
  Subject: Re: [openstack-dev] [Hyper-V] Havana status
 
   As a practical example for Nova: in our case that would simply include
 the
  following subtrees: nova/virt/hyperv and nova/tests/virt/hyperv.
 
  If maintainers of a particular driver would prefer this sort of
  autonomy, I'd rather look at creating new repositories.  I'm completely
  open to going that route on a per-driver basis.  Thoughts?

 I think that all drivers that are officially supported must be treated in
 the same way.

 If we are going to split out drivers into a separate but still official
 repository then we should do so for all drivers.  This would allow Nova
 core developers to focus on the architectural side rather than how each
 individual driver implements the API that is presented.

 Of course, with the current system it is much easier for a Nova core to
 identify and request a refactor or generalisation of code written in one or
 multiple drivers so they work for all of the drivers - we've had a few of
 those with XenAPI where code we have written has been pushed up into Nova
 core rather than the XenAPI tree.

 Perhaps one approach would be to re-use the incubation approach we have;
 if drivers want to have the fast-development cycles uncoupled from core
 reviewers then they can be moved into an incubation project.  When there is
 a suitable level of integration (and automated testing to maintain it of
 course) then they can graduate.  I imagine at that point there will be more
 development of new features which affect Nova in general (to expose each
 hypervisor's strengths), so there would be fewer cases of them being
 restricted just to the virt/* tree.

 Bob



I've thought about this in the past, but always come back to a couple of
things.

Being a community driven project, if a vendor doesn't want to participate
in the project then why even pretend (i.e. having their own project/repo,
reviewers etc.)?  Just post your code up on your own GitHub and let people
that want to use it pull it down.  If it's a vendor project, then that's
fine; have it be a vendor project.

In my opinion pulling out and leaving things up to the vendors as is being
described has significant negative impacts.  Not the least of which is
consistency in behaviors.  On the Cinder side, the core team spends the
bulk of their review time looking at things like consistent behaviors,
missing features or paradigms that are introduced that break other
drivers.  For example looking at things like, are all the base features
implemented, do they work the same way, are we all using the same
vocabulary, will it work in an multi-backend environment.  In addition,
it's rare that a vendor implements a new feature in their driver that
doesn't impact/touch the core code somewhere.

Having drivers be a part of the core project is very valuable in my
opinion.  It's also very important in my view that the core team for Nova
actually has some idea and notion of what's being done by the drivers that
it's supporting.  Moving everybody further and further into additional
private silos seems like a very bad direction to me, it makes things like
knowledge transfer, documentation and worst of all bug triaging extremely
difficult.

I could go on and on here, but nobody likes to hear anybody go on a rant.
 I would just like to see if there are other alternatives to improving the
situation than fragmenting the projects.

John


Re: [openstack-dev] [cinder] dd performance for wipe in cinder

2013-10-11 Thread John Griffith
On Fri, Oct 11, 2013 at 11:05 AM, Jeremy Stanley fu...@yuggoth.org wrote:

 On 2013-10-11 10:50:33 -0600 (-0600), Chris Friesen wrote:
  Sounds like we could use some kind of layer that will zero out
  blocks on read if they haven't been written by that user.
 [...]

 You've mostly just described thin provisioning... reads to
 previously unused blocks are returned empty/all-zero and don't get
 allocated actual addresses on the underlying storage medium until
 written.


+1, which by the way was the number one driving factor for adding the thin
provisioning LVM option in Grizzly.
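
For anyone wanting to try it: in Grizzly this shipped as a separate thin
LVM driver, and on newer code it's the lvm_type option.  A sketch (check
the sample cinder.conf for your release before relying on it):

    [DEFAULT]
    volume_driver = cinder.volume.drivers.lvm.LVMISCSIDriver
    # Thin LVs allocate extents lazily; unwritten extents read back as
    # zeros, so no secure wipe is needed on delete.
    lvm_type = thin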

 --
 Jeremy Stanley




Re: [openstack-dev] [Hyper-V] Havana status

2013-10-11 Thread John Griffith
On Fri, Oct 11, 2013 at 12:43 PM, David Kranz dkr...@redhat.com wrote:

  On 10/11/2013 02:03 PM, Alessandro Pilotti wrote:





  On Oct 11, 2013, at 19:29 , Russell Bryant rbry...@redhat.com
  wrote:

 On 10/11/2013 12:04 PM, John Griffith wrote:


Umm... just to clarify, the section below is NOT from my message.  :)


 [... snip ...]


  Talking about new community involvement, newcomers are getting very
 frustrated having to wait for weeks to get a meaningful review, and I
 cannot blame them if they don't want to get involved anymore after the
 first patch!
 This makes public bureaucracy here in eastern Europe appear a lightweight
 process in comparison! :-)

  Let me add another practical reason about why a separate OpenStack
 project would be a good idea:

  Anytime that we commit a driver specific patch, a lot of Tempest tests
 are executed on Libvirt and XenServer (for Icehouse those will be joined by
 another pack of CIs, including Hyper-V).
 On the Jenkins side, we have to wait for regression tests that have
 nothing to do with the code that we are pushing. During the H3 push, this
 meant waiting for hours and hoping not to have to issue the 100th recheck
 / reverify bug xxx.

  A separate project would obviously include only the required tests and
 be definitely more lightweight, offloading quite some work from the
 SmokeStack / Jenkins job for everybody's happiness.


  I'm glad you brought this up. There are two issues here, both discussed
 by the qe/infra groups and others at the Havana summit and after.

 How do you/we know which regression tests have nothing to do with the code
 changed in a particular patch? Or that the answer won't change tomorrow?
 The only way to do that is to assert dependencies and non-dependencies
 between components that will be used to decide which tests should be run
 for each patch. There was a lively discussion (with me taking your side
 initially) at the summit and it was decided that a generic wasting
 resources argument was not sufficient to introduce that fragility and so
 we would run the whole test suite as a gate on all projects. That decision
 was to be revisited if resources became a problem.

 As for the 100th recheck, that is a result of the recent introduction of
 parallel tempest runs before the Havana rush. It was decided that the
 benefit in throughput from drastically reduced gate job times outweighed
 the pain of potentially doing a lot of rechecks. For the most part the bugs
 being surfaced were real OpenStack bugs that were showing up due to the new
 stress of parallel test execution. This was a good thing, though
 certainly painful to all. With hindsight I'm not sure if that was the right
 decision or not.

 This is just an explanation of what has happened and why. There are
 obviously costs and benefits of being tightly bound to the project.

  -David





[openstack-dev] [OpenStack-Dev] TC candidacy

2013-10-07 Thread John Griffith
Hi,

I'd like to propose my candidacy for a seat on the OpenStack Technical
Committee.

I've been an ATC working full time on OpenStack for about a year and a half
now.  I was recently re-elected as PTL for the Cinder project, which I
started back in the Folsom release.  I've also had the privilege of serving
on the TC as a result of my role as PTL.  My goal over the past year and a
half has been focused on building the Cinder project and getting it on its
way to being a healthy, diverse and active community driven project.
During that time I've taken an active interest in all things OpenStack,
and over the next year I'd like to continue growing that interest and
participating more in OpenStack and its future as a whole.

As far as my background, I'm not associated with a specific OpenStack
Distribution or a Service Provider, but I am employed by a storage startup
(SolidFire Inc) specifically to contribute to OpenStack as a whole.  I
believe that I have a slightly different (and valuable) perspective on
OpenStack.  Coming from a device vendor, and a company that implements an
OpenStack private cloud in house, I have a strong interest in the
user-experience, whether that user be the dev-ops or sys-admins deploying
OpenStack or the end-user actually consuming the resources made available.
 My emphasis is on compatibility, regardless of distribution, hardware
devices deployed, virtualization technologies used etc. I spend a lot of my
time talking and more importantly, listening to a variety of folks about
OpenStack, including vendors and most of all folks that are implementing
OpenStack.  I like to hear their feedback regarding what's working, what's
not and how we can do better.  I'd like the opportunity to take that
feedback and help drive towards an ever improving OpenStack.

I believe that the TC (as well as the role of PTL) actually serves an
important function in the community.  In both cases these roles in my
opinion should take into account acting as an advocate for the overall
well-being of OpenStack and its technical direction.  It has nothing to do
with titles or special benefits; it's just a lot of extra hard work that
needs to be done, and not everybody is willing to do it, as well as
providing a point of contact for folks that are looking for technical
answers or explanations.

To me, this means much more than just voting on proposed new projects.  New
projects and growth are important to OpenStack however I don't think that
uncontrolled and disjointed growth in the form of new projects is a good
thing, in fact I think it's detrimental to OpenStack as a whole.  I
personally would like to see the TC have more involvement in terms of
recommending/investigating new projects before they're proposed or started
by others.  By the same token, I'd also like to see the TC take a more
active role in the projects we currently have and how they all tie
together.  I personally believe that having 10 or so individual projects
operating in their own silos is not the right direction.  My opinion here
does NOT equate to more control, but instead should equate to being more
helpful.  With the continued growth of OpenStack I believe it's critical to
have some sort of vision and some resources that have a deep understanding
of the entire eco-system.

If you have any questions about my views, opinions or anything feel free to
drop me an email or hit me up on irc.

Thanks,
John

OpenStack code contributions:
https://review.openstack.org/#/q/status:merged+owner:%2522John+Griffith%2522,n,z
OpenStack code reviews:
https://review.openstack.org/#/q/reviewer:%2522John+Griffith%2522,n,z


Re: [openstack-dev] PTL Voting is now open

2013-09-27 Thread John Griffith
On Fri, Sep 27, 2013 at 11:12 AM, Anita Kuno ante...@anteaya.info wrote:

 Elections are underway and will remain open for you to cast your vote
 until at least 11:59 UTC October 3, 2013.

 We are having elections for Cinder, Heat and Horizon.

 If you are a Foundation individual member and had a commit in one of the
 program projects over the Grizzly-Havana timeframe (from 2012-09-27 to
 2013-09-26, 23:59 PST) then you are eligible to vote. You should find an
 email with a link to the Condorcet page to cast your vote in the inbox of
 the email address gerrit knows about.

 What to do if you don't see the email and have a commit in at least one of
 Cinder, Heat or Horizon:
 * check the trash of your email, in case it went in there
 * wait a bit and check again, in case your email server is a bit slow
 * find the sha of at least one commit from one of the program project
 repos and email me at the above email address. If I can confirm that you
 are entitled to vote, I will add you to the voters list for this election.

 Our democratic process is important to the health of OpenStack, please
 exercise your right to vote.

 Candidate statements/platforms can be found linked to Candidate names on
 this page: 
 https://wiki.openstack.org/**wiki/PTL_Elections_Fall_2013https://wiki.openstack.org/wiki/PTL_Elections_Fall_2013

 Happy voting,
 Anita.



Hi Anita,

Any info on *when* the voting email is expected to go out?  Or are you
saying we should have already received it?

John


Re: [openstack-dev] Current list of confirmed PTL Candidates

2013-09-25 Thread John Griffith
On Wed, Sep 25, 2013 at 12:15 PM, Joshua Harlow harlo...@yahoo-inc.com wrote:

 I agree with all that you guys are saying, and I think that the current
 PTLs have done a great job.  I know that there is a lot to take under
 consideration when submitting a potential PTL candidacy and that it's all
 about delegating, integrating, publicizing.

 I don't think any of that is in question.

 I am just more concerned about the 'diversity' issue, which looking at
 https://wiki.openstack.org/wiki/PTL_Elections_Fall_2013#Candidates is imho
 lacking (1 person elections aren't really elections). Now of course this
 may not be an immediate problem, but it does not seem to be the ideal
 situation a community would be in; I just imagine a community that has a
 multi-person elections (those multi-people don't need to be at each others
 throats, or even competitors, or any of that) and which thrives off the
 diversity of those different people.

 It just seems like something we can work on as a community, to ensure that
 there is diversity.

 -Josh

 On 9/25/13 4:31 AM, Flavio Percoco fla...@redhat.com wrote:

 On 25/09/13 11:29 +0200, Thierry Carrez wrote:
 Joshua Harlow wrote:
  +2
 
  I think we need to as a community figure out why this is the case and
  figure out ways to make it not the case.
 
  Is it education around what a PTL is? Is it lack of time? Is it
 something
  else?
 
 In my view the PTL handles three roles: final decider on
 program-specific issues, release management liaison (for programs
 containing an integrated project) and program ambassador (natural point
 of contact). Note that the last two roles can be delegated.
 
 If you don't delegate anything then it's a lot of work, especially for
 programs with large integrated projects -- so if the current PTL does a
 great job and runs for election again, I suspect everyone else doesn't
 feel the urge to run against him.
 
 FWIW I don't think established PTLs mind being challenged at all. If
 anything, in the past this served to identify people interested in
 project management that could help in the PTL role and serve in a
 succession strategy. So you shouldn't fear to piss off the established
 PTL by challenging them :)
 
 
 I agree with Thierry here.
 
 The PTL role takes time and dedication which is the first thing people
 must be aware of before submitting their candidacy. I'm very happy
 with the job current PTLs have done, although I certainly don't have a
 360 view. This should also be taken under consideration, before
 submitting a PTL candidacy, I expect people to ask themselves - and
 then share with others - what their plan is for the next development
 cycle, how they can improve the project they want to run for, etc.
 
 IMHO, the fact that there haven't been many candidacies means that
 folks are happy with the work current PTLs have done and would love to
 have them around for another release cycle. However, this doesn't mean
 that folks that have submitted their candidacy are not happy with the
 current PTL and I'm very happy to see other folks willing to run for
 the PTL position.
 
 I also think that PTLs have integrated the community at large in their
 PTL role and this has definitely helped folks to participate in the
 decision process. I've never thought about PTLs as final deciders but
 as the ones responsible for leading the team towards a decision that
 reflects the best interest of the project.
 
 That being said, I wouldn't worry that much for not seeing so many
 candidacies. I think this fits into the Lazy Consensus concept.
 
 Cheers,
 FF
 
 --
 @flaper87
 Flavio Percoco
 




I've put a request to everyone in the Cinder team meeting this morning
that if they have any interest/desire they should freely submit their
candidacy today (I've even advised some folks that I felt they would make
good candidates).  Other than openly encouraging others to run for the
position, I'm not quite sure what folks would like to propose with respect
to this thread and the concerns that they have raised.  I've also had
conversations in IRC with multiple cinder-core team members to the same
effect.

The fact is you can't force people to run for the position, however you
can make it clear that it's an open process and encourage folks that have
interest.  I think we've always done that, and I think now even more than
before we've made it explicit.

Thanks,
John


[openstack-dev] [OpenStack] Cinder PTL Candidacy

2013-09-21 Thread John Griffith
Hello Everybody,

I would like to run for election as Cinder PTL for the upcoming I release.

For those who don't know me, I am the current Cinder PTL and have been
working on OpenStack for almost two years now.  Most of that time has been
spent leading the Cinder efforts starting with the separation of the
Nova-Volume code.  The upcoming Havana release will mark our third release
with Cinder as a released project in OpenStack and I think we have
continued to grow and get better with each release, Havana being no
exception.

Some of you may know that there's a lot that goes into being a PTL.  I see
it as a combination of Project/Team management, Technical Leadership and
Evangelism.  I love the job, every aspect of it as well as the challenges
that come with it.  This includes talking about Cinder, brainstorming new
ideas and *recruiting* new vendors and contributors.

Given Cinder's unique plugin model (we now have over 17 supported back-end
device drivers), one of the biggest challenges is continuing to provide and
maintain a feature equivalent and robust reference implementation as well
as maintaining consistent behaviors across all of these drivers.  At the
same time we've provided ways for different vendors to expose their own
unique features via optimizations or the use of types and extra-specs.  The
mantra here has been, if vendor A wants to implement a new feature there
has to be a way to implement it across the board.  It may not be the most
elegant or efficient for some back-ends, but the idea is you will have
consistent capabilities and behaviors.  One of the things that I think
makes the above described philosophy so powerful is that it drives interest
and contributions to the project as a whole.  The result is that Cinder has
multiple contributors from competing storage vendors all driving to advance
the overall project for EVERYBODY.  I think we've done a fantastic job in
this respect and if elected I plan to continue to drive this as one of the
main ideologies for the project.

There are a number of things that I see as needing particular focus during
the upcoming release, and if elected I will try and drive these items:

1. Functional test qualification for back end devices
Many of you have heard about this via the mailing list, but I think it's
critical that we have some way to share publicly that the drivers that are
shipped with cinder actually work.  This is something that I've started and
plan to implement for the I release, it will start simply by requiring that
all of the tests we currently run in the CI gates are run against each
driver/back-end and the results of those runs are submitted publicly.

2. More involvement in Horizon
This has been something that I've thought of and mentioned in the past but
it hasn't really happened.  For the upcoming release I would like to drive
folks that implement new features in Cinder to help contribute and get
those same features exposed in Horizon.  In my opinion OpenStack is growing
so quickly and becoming so large, that there needs to be more contribution
from all projects to keep Horizon updated.  I'm not saying something as
extreme as to have a rule that a BP is not closed/implemented until the
Horizon work is done, but I'd love to see folks have that sort of view
(un-enforced norm in terms of behaviors) with features they implement in
Cinder as we go forward.

3. Organization and strategy around common libraries for Block Storage
This is something that we talked about during the last summit and we made
some progress on.  The problem is that it started to grow into its own
beast and I think we lost our focus.  I'd like to really emphasize early in
the Icehouse release what our common goals and objectives are here and make
this a reality early on in the release cycle (preferably the first
milestone).

4. Continued implementation of task-flows and states
We've made a good initial start here by adopting task-flows for our
volume-create process.  I think there's a lot more that can be done to
improve upon what we have here, and also to spread that out to the other
Cinder tasks.

5. More interaction with the other projects
The worst thing for any project in OpenStack in the coming release IMO is
going to be working in isolation.  There are a large number of new
projects/programs being introduced, many of which that utilize bits and
pieces of all OpenStack projects.  I think it's critical that we come up
with some ways to collaborate across projects and programs, not only for
better implementations in consumer projects, but also to make sure we
provide valuable features that might be needed.  This particularly includes
projects like Trove, Ironic and TripleO.

6. Continue to grow the contributing community
This is key IMO for any Open Source project, we need to continue to grow
the interest and contributions.  Whether that be via new drivers/vendor
participation or new core features, I believe involvement and particularly
community growth is a 

Re: [openstack-dev] FFE Request: Make RBD Usable for Ephemeral Storage

2013-09-18 Thread John Griffith
On Wed, Sep 18, 2013 at 8:14 AM, Thierry Carrez thie...@openstack.org wrote:

 Mike Perez wrote:
  Currently in Havana development, RBD as ephemeral storage has serious
  stability and performance issues that make the Ceph cluster a bottleneck
  for using an image as a source.
  [...]

 This comes up a bit late, and the current RC bugs curves[1] really do
 not encourage me to add more distraction for core reviewers.

 The only way I could be fine with this would be for the performance
 issue to actually be considered a bug (being so slow you can't really
 use it without the fix), *and* the review being very advanced and
 consensual that the distraction is minimal.

 Could you quantify the performance issue, and address Zhi Yan Liu's
 comments ?

 --
 Thierry Carrez (ttx)



I have to say that this seems EXTREMELY late to be raising as an issue
now.  I also have to say that I don't see that this would be that critical,
as it's something that's never been raised up until this point.  If it were
a reported issue that we just never got around to addressing, that might be
different.

As Thierry pointed out the bug trajectory is not quite what we want yet
anyway, so reworking a feature that works, just not as efficiently as it
could/should, doesn't seem like it meets the requirements for an FFE at
all.  All of these things, combined with how late the request is, make it
seem pretty difficult to consider this for Havana.

Thanks,
John


Re: [openstack-dev] [DevStack] Generalize config file settings

2013-09-12 Thread John Griffith
On Thu, Sep 12, 2013 at 9:36 AM, Dean Troyer dtro...@gmail.com wrote:

 DevStack has long had a config setting in localrc called EXTRA_OPTS that
 allowed arbitrary settings to be added to /etc/nova/nova.conf [DEFAULT]
 section.  Additional files and sections have recently been implemented with
 a similar scheme.  I don't think this scales well as at a minimum every
 config file and section needs to be individually handled.

 I'd like to get some feedback on the following two proposals, or hear
 other ideas on how to generalize solving the problem of setting arbitrary
 configuration values.


 a) Create conf.d/*.conf files as needed and process each file present into
 a corresponding config file.  These files would not be supplied by DevStack
 but created and maintained locally.

 Example: conf.d/etc/nova/nova.conf:
 [DEFAULT]
 use_syslog = True

 [osapi_v3]
 enabled = False


 b) Create a single service.local.conf file for each project (Nova, Cinder,
 etc) that contains a list of settings to be applied to the config files for
 that service.

 Example: nova.local.conf:
 # conf file names are parsed out of the section name below between '[' and
 the first ':'
 [/etc/nova/nova.conf:DEFAULT]
 use_syslog = True

 [/etc/nova/nova.conf:osapi_v3]
 enabled = False


 Both cases need to be able to specify the destination config file and
 section in addition to the attribute name and value.

 Thoughts?
 dt

 [Prompted by review https://review.openstack.org/44266]

 --

 Dean Troyer
 dtro...@gmail.com


 +1 for not dumping in lib/xxx

Option 'a' seems a bit easier to manage in terms of number of files
etc., but I wouldn't have a strong preference between the two options
presented.
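
As a rough illustration of how option 'b' could be parsed (a purely
hypothetical helper using the Python 2 stdlib, not DevStack code):

    import ConfigParser

    def read_local_conf(path):
        """Yield (target_file, section, option, value) tuples."""
        parser = ConfigParser.RawConfigParser()
        parser.read(path)
        for section in parser.sections():
            # section names look like '/etc/nova/nova.conf:DEFAULT'
            target, _, target_section = section.partition(':')
            for option, value in parser.items(section):
                yield target, target_section, option, value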


[openstack-dev] Flash storage article

2013-09-12 Thread John Griffith
http://tinyurl.com/ljexdyk


Re: [openstack-dev] [DevStack] Generalize config file settings

2013-09-12 Thread John Griffith
On Thu, Sep 12, 2013 at 9:44 PM, Monty Taylor mord...@inaugust.com wrote:

 os-apply-config


Doesn't that just convert JSON metadata to a file with the syntax Dean was
describing?  Maybe it's changed, but that's what I *thought* it did.


Re: [openstack-dev] Cookiecutter repo for ease in making new projects

2013-09-12 Thread John Griffith
On Thu, Sep 12, 2013 at 11:08 PM, Monty Taylor mord...@inaugust.com wrote:

 Hey everybody!

 You know how, when you want to make a new project, you basically take an
 existing one, like nova, copy files, and then start deleting? Nobody
 likes that.

 Recently, cookiecutter came to my attention, so we put together a
 cookiecutter repo for openstack projects to make creating a new one easier:

 https://git.openstack.org/cgit/openstack-dev/cookiecutter

 It's pretty easy to use. First, install cookiecutter:

 sudo pip install cookiecutter

 Next, tell cookiecutter you'd like to create a new project based on the
 openstack template:

 cookiecutter git://git.openstack.org/openstack-dev/cookiecutter.git

 Cookiecutter will then ask you three questions:

 a) What repo groups should it go in? (eg. openstack, openstack-infra,
 stackforge)
 b) What is the name of the repo? (eg. mynewproject)
 c) What is the project's short description? (eg. OpenStack Wordpress as
 a Service)

 And boom, you'll have a directory all set up with your new project ready
 and waiting for a git init ; git add . ; git commit

 Hope this helps folks out - and we'll try to keep it up to date with
 things that become best practices - patches welcome on that front.

 Monty


Nice!!  Just took it for a spin, worked great!


Re: [openstack-dev] [Nova] FFE Request: Encrypt Cinder volumes

2013-09-09 Thread John Griffith
On Mon, Sep 9, 2013 at 1:20 PM, Jarret Raim jarret.r...@rackspace.com wrote:



 On 9/9/13 9:25 AM, Russell Bryant rbry...@redhat.com wrote:

 On 09/09/2013 04:57 AM, Thierry Carrez wrote:
  Russell Bryant wrote:
  I would be good with the exception for this, assuming that:
 
  1) Those from nova-core that have reviewed the code are still happy
 with
  it and would do a final review to get it merged.
 
  2) There is general consensus that the simple config based key manager
  (single key) does provide some amount of useful security.  I believe it
  does, just want to make sure we're in agreement on it.  Obviously we
  want to improve this in the future.
 
  +1
 
  I think this is sufficiently self-contained that the regression risk is
  extremely limited. It's also nice to have a significant hardening
  improvement in the Havana featurelist. I would just prefer if it landed
  ASAP since I would like as much usage around it as we can get, to make
  sure the previous audits didn't miss an obvious bug/security hole in it.
 
 
 The response seems positive from everyone so far.  I think we should
 approve this and try to get it merged ASAP (absolutely this week, and
 hopefully in the first half of the week).
 
 ACK on the FFE from me.


 Me as well for what it's worth. While I understand the concerns around key
 management, Barbican will have our 1.0 release for Havana and it should be
 relatively easy to integrate the proposed patches with Barbican at that
 time. Even so, the current version does offer some security and gives us
 the ability to have the code tested before we introduce another moving
 part.


 Thanks,
 Jarret Raim




Fine on the Cinder side for the related components there.


Re: [openstack-dev] [brick] Status and plans for the brick shared volume code

2013-09-05 Thread John Griffith
On Thu, Sep 5, 2013 at 2:04 AM, Thierry Carrez thie...@openstack.org wrote:

 John Griffith wrote:
  The code currently is and will be maintained in Cinder, and the Cinder
  team will sync changes across to Nova.  The first order of business for
  Icehouse will be to get the library built up and usable, then convert
  over to using that so as to avoid the syncing issues.

 This may have been discussed before, but is there any reason to avoid
 the Oslo incubator for such a library ?

Not really, no; in fact that's always been a consideration (
https://blueprints.launchpad.net/oslo/+spec/shared-block-storage-library)

 --
 Thierry Carrez (ttx)




Re: [openstack-dev] [nova] key management and Cinder volume encryption

2013-09-03 Thread John Griffith
On Tue, Sep 3, 2013 at 7:27 PM, Bryan D. Payne bdpa...@acm.org wrote:


How can someone use your code without a key manager?

 Some key management mechanism is required although it could be
 simplistic. For example, we’ve tested our code internally with an
 implementation of the key manager interface that returns a single, constant
 key.

 That works for testing but doesn't address: the current dearth of key
 management within OpenStack does not preclude the use of our existing work
 within a production environment


 My understanding here is that users are free to use any key management
 mechanism that they see fit.  This can be a simple 'return a static key'
 option.  Or it could be using something more feature rich like Barbican.
  Or it could be something completely home grown that is suited to a
 particular OpenStack deployment.

 I don't understand why we are getting hung up on having a key manager as
 part of OpenStack in order to accept this work.  Clearly there are other
 pieces of OpenStack that have external dependencies (message queues, to
 name one).

 I, for one, am looking forward to using this feature and would be very
 disappointed to see it pushed back for yet another release.



  Is a feature complete if no one can use it?

 I am happy with a less than secure but fully functional key manager.  But
 with no key manager that can be used in a real deployment, what is the
 value of including this code?


 Of course people can use it.  They just need to integrate with some
 solution of the deployment's choosing that provides key management
 capabilities.  And, of course, if you choose to not use the volume
 encryption then you don't need to worry about it at all.

 I've watched this feature go through many, many iterations throughout both
 the Grizzly and Havana release cycles.  The authors have been working hard
 to address everyone's concerns.  In fact, they have navigated quite a
 gauntlet to get this far.  And what they have now is an excellent, working
 solution.  Let's accept this nice security enhancement and move forward.

 Cheers,
 -bryan



 Do you have any docs or guides describing a reference implementation that
would be able to use this in the manner you describe?
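
For context, the single-constant-key manager described earlier in the
thread would be something along these lines (a sketch from my reading of
the proposed interface; method names may not match the final code):

    class SingleKeyManager(object):
        """Returns the same fixed key for every request.

        Fine for testing the encryption path end to end, but obviously
        not a real key management solution.
        """

        def __init__(self, hex_key='00' * 32):
            self._key = hex_key
            self._key_id = 'fixed-key'

        def create_key(self, ctxt, **kwargs):
            return self._key_id

        def get_key(self, ctxt, key_id, **kwargs):
            assert key_id == self._key_id
            return self._key

        def delete_key(self, ctxt, key_id, **kwargs):
            pass  # nothing to delete for a constant key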


Re: [openstack-dev] Proposal for Raksha, a Data Protection As a Service project

2013-08-30 Thread John Griffith
On Fri, Aug 30, 2013 at 8:02 AM, Murali Balcha murali.bal...@triliodata.com
 wrote:

  Hi John,
 Thanks for your comments. I am planning to attend summit we can have a
 wider discussion there.

 Thanks,
 Murali Balcha


 On Aug 30, 2013, at 12:05 AM, John Griffith john.griff...@solidfire.com
 wrote:




 On Thu, Aug 29, 2013 at 6:36 PM, Murali Balcha 
 murali.bal...@triliodata.com wrote:


   My question is, would it make sense to add to the current mechanisms
   in Nova and Cinder rather than add the complexity of a new project?
 
  I think the answer is yes  :)


  I meant there is a clear need for Raksha project. :)

 Thanks,
 Murali Balcha

 On Aug 29, 2013, at 7:45 PM, Murali Balcha 
 murali.bal...@triliodata.com wrote:

 
  
  From: Ronen Kat ronen...@il.ibm.com
  Sent: Thursday, August 29, 2013 2:55 PM
  To: openstack-dev@lists.openstack.org;
 openstack-...@lists.launchpad.net
  Subject: Re: [openstack-dev] Proposal for Raksha, a Data Protection
 As a Service project
 
  Hi Murali,
 
   I think the idea to provide enhanced data protection in OpenStack is a
   great idea, and I have been thinking about backup in OpenStack for a
   while now.
   I'm just not sure a new project is the only way to do it.
 
  (as disclosure, I contributed code to enable IBM TSM as a Cinder
 backup
  driver)
 
  Hi Kat,
  Consider the following use cases that Raksha will address. I will
 discuss them from simple to complex and then address your specific
 questions with inline comments.
  1. VM1 created on the local file system with a Cinder volume attached
  2. VM2 booted from a Cinder volume with a couple of Cinder volumes attached
  3. VM1 and VM2 both booted from Cinder volumes with a couple of volumes
 attached; they also share a private network for internal communication.
  In all these cases Raksha will take a consistent snapshot of the VMs, walk
 through each VM's resources, and back up those resources to the Swift
 endpoint.
  In case 1, that means backing up the VM image and the Cinder volume image
 to Swift.
  Case 2 is an extension of case 1.
  In case 3, Raksha not only backs up VM1 and VM2 and their associated
 resources, it also backs up the network configuration.
 
  Now let's consider the restore case. The restore operation walks through
 the backed-up resources and calls into the respective OpenStack services to
 restore those objects. In case 1, it first calls the Nova API to restore the
 VM, then calls into Cinder to restore the volume and attach it to the newly
 restored VM instance. In case 3, it also calls into the Neutron API to
 restore the networking. Hence my argument is that no single OpenStack
 project has a global view of a VM and all its resources with which to
 implement effective backup and restore services.
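
 A rough sketch of that restore flow using the python novaclient and
 cinderclient libraries (the backup-manifest layout here is an assumption
 for illustration, and the Neutron step is only noted in a comment):

    from cinderclient.v1 import client as cinder_client
    from novaclient.v1_1 import client as nova_client

    # Illustrative credentials; any real deployment would differ.
    nova = nova_client.Client('user', 'password', 'tenant',
                              'http://keystone:5000/v2.0')
    cinder = cinder_client.Client('user', 'password', 'tenant',
                                  'http://keystone:5000/v2.0')

    def restore_vm(manifest):
        # 1. Recreate the instance from the backed-up image.
        server = nova.servers.create(manifest['name'],
                                     manifest['image_id'],
                                     manifest['flavor_id'])
        # 2. Restore each volume from its backup, then attach it to the
        #    newly restored instance at its original device name.
        for vol in manifest['volumes']:
            restored = cinder.restores.restore(backup_id=vol['backup_id'])
            nova.volumes.create_server_volume(server.id,
                                              restored.volume_id,
                                              vol['device'])
        # 3. Case 3 would additionally call into Neutron here to restore
        #    the shared private network before booting the instances.
        return server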
 
 
  I wonder what is the added-value of a project approach versus
 enhancements
  to the current Nova and Cinder implementations of backup. Let me
 elaborate.
 
  Nova has a nova backup feature that performs a backup of a VM to Glance;
  the backup is managed by tenants in the same way that you propose.
  While today it provides only point-in-time full backup, it seems reasonable
  that it can be extended to support incremental and consistent backup as
  well, as the actual work is done either by the storage or the hypervisor
  in any case.
 
  Though Nova has an API to upload a snapshot of the VM to Glance, it does
 not snapshot any volumes associated with the VM. When a snapshot is
 uploaded to Glance, Nova creates an image by collapsing the qemu image with
 its delta file and uploads the larger file to Glance. If we were to perform
 periodic backups of VMs, this would be a very inefficient way to do backups.
 Also, having to manage two endpoints, one for Nova and one for Cinder, is
 inefficient. These are the gaps I called out in the Raksha wiki page.
 
 
  Cinder has a cinder backup command that performs a volume backup to Swift,
  Ceph, or TSM. The Ceph implementation also supports incremental backup
  (Ceph to Ceph).
  I envision that Cinder could be expanded to support incremental
 backup (for
  persistent storage) by adding drivers/plug-ins that will leverage
  incremental backup features of either the storage or Hypervisors.
  Independently, in Havana the ability to do consistent volume
 snapshots was
  added to GlusterFS. I assume that this consistency support could be
  generalized to support other volume drivers, and be utilized as part
 of a
  backup code.
 
  I think we are talking about specific implementations here. Yes, I am aware
 of the Ceph blueprint to support incremental backup, but the Cinder backup
 APIs are volume specific. That means if a VM has multiple volumes mapped, as
 in case 2 that I discussed, the tenant needs to call the backup API three
 times. Also, if you look at Cinder's Swift layout, it is very difficult to
 tie the Swift images back to a particular VM. Imagine a tenant were to
 restore a VM and all its resources from a backup copy that was performed a
 week ago. The restore operation is not straightforward.

Re: [openstack-dev] Proposal for Raksha, a Data Protection As a Service project

2013-08-29 Thread John Griffith
On Thu, Aug 29, 2013 at 6:36 PM, Murali Balcha murali.bal...@triliodata.com
 wrote:


  My question is, would it make sense to add to the current mechanisms in
  Nova and Cinder rather than add the complexity of a new project?
 
  I think the answer is yes  :)


 I meant there is a clear need for the Raksha project. :)

 Thanks,
 Murali Balcha

 On Aug 29, 2013, at 7:45 PM, Murali Balcha murali.bal...@triliodata.com
 wrote:

 
  
  From: Ronen Kat ronen...@il.ibm.com
  Sent: Thursday, August 29, 2013 2:55 PM
  To: openstack-dev@lists.openstack.org;
 openstack-...@lists.launchpad.net
  Subject: Re: [openstack-dev] Proposal for Raksha, a Data Protection As
 a Service project
 
  Hi Murali,
 
  I think providing enhanced data protection in OpenStack is a great idea,
  and I have been thinking about backup in OpenStack for a while now.
  I'm just not sure a new project is the only way to do it.
 
  (as disclosure, I contributed code to enable IBM TSM as a Cinder backup
  driver)
 
  Hi Kat,
  Consider the following use cases that Raksha will address. I will
 discuss them from simple to complex and then address your specific
 questions with inline comments.
  1. VM1 created on the local file system with a Cinder volume attached
  2. VM2 booted from a Cinder volume with a couple of Cinder volumes attached
  3. VM1 and VM2 both booted from Cinder volumes with a couple of volumes
 attached; they also share a private network for internal communication.
  In all these cases Raksha will take a consistent snapshot of the VMs, walk
 through each VM's resources, and back up those resources to the Swift
 endpoint.
  In case 1, that means backing up the VM image and the Cinder volume image
 to Swift.
  Case 2 is an extension of case 1.
  In case 3, Raksha not only backs up VM1 and VM2 and their associated
 resources, it also backs up the network configuration.
 
  Now let's consider the restore case. The restore operation walks through
 the backed-up resources and calls into the respective OpenStack services to
 restore those objects. In case 1, it first calls the Nova API to restore the
 VM, then calls into Cinder to restore the volume and attach it to the newly
 restored VM instance. In case 3, it also calls into the Neutron API to
 restore the networking. Hence my argument is that no single OpenStack
 project has a global view of a VM and all its resources with which to
 implement effective backup and restore services.
 
 
  I wonder what is the added-value of a project approach versus
 enhancements
  to the current Nova and Cinder implementations of backup. Let me
 elaborate.
 
  Nova has a nova backup feature that performs a backup of a VM to Glance;
  the backup is managed by tenants in the same way that you propose.
  While today it provides only point-in-time full backup, it seems reasonable
  that it can be extended to support incremental and consistent backup as
  well, as the actual work is done either by the storage or the hypervisor
  in any case.
 
  Though Nova has an API to upload a snapshot of the VM to Glance, it does
 not snapshot any volumes associated with the VM. When a snapshot is
 uploaded to Glance, Nova creates an image by collapsing the qemu image with
 its delta file and uploads the larger file to Glance. If we were to perform
 periodic backups of VMs, this would be a very inefficient way to do backups.
 Also, having to manage two endpoints, one for Nova and one for Cinder, is
 inefficient. These are the gaps I called out in the Raksha wiki page.
 
 
  Cinder has a cinder backup command that performs a volume backup to Swift,
  Ceph, or TSM. The Ceph implementation also supports incremental backup
  (Ceph to Ceph).
  I envision that Cinder could be expanded to support incremental backup
 (for
  persistent storage) by adding drivers/plug-ins that will leverage
  incremental backup features of either the storage or Hypervisors.
  Independently, in Havana the ability to do consistent volume snapshots
 was
  added to GlusterFS. I assume that this consistency support could be
  generalized to support other volume drivers, and be utilized as part
 of a
  backup code.
 
  I think we are talking about specific implementations here. Yes, I am aware
 of the Ceph blueprint to support incremental backup, but the Cinder backup
 APIs are volume specific. That means if a VM has multiple volumes mapped, as
 in case 2 that I discussed, the tenant needs to call the backup API three
 times. Also, if you look at Cinder's Swift layout, it is very difficult to
 tie the Swift images back to a particular VM. Imagine a tenant were to
 restore a VM and all its resources from a backup copy that was performed a
 week ago. The restore operation is not straightforward.
  It is my understanding that consistency should be maintained at the VM
 level, not at the individual volume level. It is very difficult to make
 assumptions about how the application data inside the VM is laid out.
 
  Looking at the key features in Raksha, it seems that the main features
  (2,3,4,7) could be addressed 

[openstack-dev] [OpenStack-dev] Rechecks and Reverifies

2013-08-27 Thread John Griffith
This message has gone out a number of times, but I want to stress
(particularly to those submitting to Cinder) the importance of logging
accurate recheck information.  Please take the time to view the logs on a
Jenkins failure before blindly entering "recheck no bug".  This is happening
fairly frequently, and quite frankly it does us no good if we don't look at
the failure and capture things that might be going wrong in the tests.

It's not hard; the CI team has put forth a good deal of effort to actually
make it pretty easy.  There's even a "how to proceed" link provided upon
failure to walk you through the steps.  The main thing is that you have to
look at the console output from your failed job.  Also, just FYI, pep8 and
py26/27 failures are very rarely "no bug"; they are usually a real problem
in your patch.  It would be good to pay particular attention to these
before hitting "recheck no bug".
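
For reference, when the failure does match a known issue, the review
comment looks like:

    recheck bug 1234567

where the bug number (a placeholder above) is the Launchpad bug that
matches the failure you actually saw in the logs.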

Thanks,
John
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-dev] Rechecks and Reverifies

2013-08-27 Thread John Griffith
On Tue, Aug 27, 2013 at 11:47 AM, Clark Boylan clark.boy...@gmail.com wrote:

 On Tue, Aug 27, 2013 at 10:15 AM, Clint Byrum cl...@fewbar.com wrote:
  Excerpts from John Griffith's message of 2013-08-27 09:42:37 -0700:
  On Tue, Aug 27, 2013 at 10:26 AM, Alex Gaynor alex.gay...@gmail.com
 wrote:
 
   I wonder if there's any sort of automation we can apply to this, for
   example having known rechecks carry signatures so that if a failure
   matches a signature the recheck is applied automatically.
  
 
  I think we kinda already have that: the recheck list and the bug ID
  assigned to it, no?  Automatically scanning said list and applying the
  recheck automatically seems like overkill in my opinion.  At some point
  human thought/interaction is required, and I don't think it's too much to
  ask a technical contributor to simply LOOK at the output from the test
  runs against their patches and help out a bit.  At the very least, if you
  didn't test your patch yourself and waited for Jenkins to tell you it's
  broken, I would hope that as a submitter you would be motivated to fix
  the issue you introduced.
 
 
  It is worth thinking about though, because asking a technical contributor
  to simply LOOK is a lot more expensive than letting a script confirm the
  failure and tack it onto the list for rechecks.
 
  Ubuntu has something like this going for all of their users and it is
  pretty impressive.
 
  Apport and/or whoopsie see crashes and look at the
  backtraces/coredumps/etc and then (with user permission) submit a
  signature to the backend. It is then analyzed and the result is this:
 
  http://errors.ubuntu.com/
 
  Known false positives are shipped along side packages so that they do
  not produce noise, and known points of pain for debugging are eased by
  including logs and other things in bug reports when users are running
  the dev release. This results in a much better metric for what bugs to
  address first. IIRC update-manager also checks in with a URL that is
  informed partially by this data about whether or not to update packages,
  so if there is a high fail rate early on, the server side will basically
  signal update-manager don't update right now.
 
  I'd love to see our CI system enhanced to do all of the pattern
  matching to group failures by common patterns, and then when a technical
  contributor looks at these groups they have tons of data points to _fix_
  the problem rather than just spending their precious time identifying it.
 
  The point of the recheck system, IMHO, isn't to make running rechecks
  easier, it is to find and fix bugs.
 
 This is definitely worth thinking about, and we had a session on
 dealing with CI logs to do interesting things like update bugs and
 handle rechecks automatically at the Havana summit [0]. Since then we
 have built a logstash + elasticsearch system [1] that filters many of
 our test logs and indexes a subset of what was filtered (typically
 anything with a log level greater than DEBUG). Building this system is
 step one in being able to detect anomalous logs, update bugs, and
 potentially perform automatic rechecks with the appropriate bug.
 Progress has been somewhat slow, but the current setup should be
 mostly stable. If anyone is interested in poking at these tools to do
 interesting automation with them feel free to bug the Infra team.
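
 To illustrate the kind of automation being discussed, a purely
 hypothetical sketch of signature-based rechecks; the signature list and
 the Gerrit-comment helper are assumptions, not existing openstack-infra
 tools:

    import re

    # Hypothetical signatures: a pattern seen in a failed console log
    # mapped to the Launchpad bug believed to cause it.
    KNOWN_SIGNATURES = [
        (re.compile(r'Connection to \S+ timed out'), 1234567),
        (re.compile(r'FixedIpLimitExceeded'), 7654321),
    ]

    def classify_failure(console_log):
        for pattern, bug in KNOWN_SIGNATURES:
            if pattern.search(console_log):
                return bug      # a known transient failure
        return None             # unknown: a human should look at it

    def maybe_recheck(gerrit, change_id, console_log):
        # 'gerrit.comment' is an assumed helper for posting a review
        # comment; only recheck when the failure matched a known bug.
        bug = classify_failure(console_log)
        if bug is not None:
            gerrit.comment(change_id, 'recheck bug %d' % bug)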

 That said, we won't have something super automagic like that before
 the end of Havana making John's point an important one. If previous
 release feature freezes are any indication we will continue to put
 more pressure on the CI system as we near Havana's feature freeze. Any
 unneeded rechecks or reverifies can potentially slow the whole process
 down for everyone. We should be running as many tests as possible
 locally before pushing to Gerrit (this is as simple as running `tox`)
 and making a best effort to identify the bugs that cause failures when
 performing rechecks or reverifies.

 [0] https://etherpad.openstack.org/havana-ci-logging
 [1] http://ci.openstack.org/logstash.html

 Thank you,
 Clark

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


The automation ideas are great, no argument there; I didn't mean to imply
they weren't or to discount them.  I just don't want the intent of the
message to get lost in all the things we could do going forward.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Frustrations with review wait times

2013-08-27 Thread John Griffith
On Tue, Aug 27, 2013 at 12:14 PM, Russell Bryant rbry...@redhat.com wrote:

 On 08/27/2013 01:30 PM, Matt Dietz wrote:
  Good idea!
 
  Only thing I would point out is there are a fair number of changes,
  especially lately, where code is just moving from one portion of the
  project to another, so there may be cases where someone ends up being
  authoritative over code they don't totally understand.

 Right.  While some automation can provide some insight, it certainly
 cannot make any decisions in this area, IMO.

 --
 Russell Bryant

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


All great ideas, but really, isn't the core of the issue that the rate of
new patches > the rate of available reviewers?

Seems to me that with the growth of the projects and more people
contributing, the number of people actively involved in reviews is not
keeping pace.  Then throw in all of the new projects, which take at least
a portion of someone who used to do all Nova all the time and spread that
workload across three or four projects; it seems the only solution is more
reviewers.

Prioritizing and assigning maintainers is a great idea, and I think we've
all kinda fallen into that unofficially anyway, but there is a need for
more quality reviewers, and to be quite honest, with all of the new
projects coming into play I think that problem is going to continue into
the next release as well.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Continuous deployment - significant process change

2013-08-18 Thread John Griffith
On Sun, Aug 18, 2013 at 9:10 PM, Christopher Yeoh cbky...@gmail.com wrote:


 On Mon, Aug 19, 2013 at 5:51 AM, Robert Collins robe...@robertcollins.net
 wrote:

  - Stable branch maintenance becoming harder.

  The set of proposals being made to tackle this are:
  - Set a much harder upper bound on commit size - we were saying 500
 lines, but the recent research paper suggests setting 200 lines as the
 target, with a rubber band permitting up to 400 lines before we push
 back really hard.


 +1

 Though I think we could probably do with some better tools or tool
 improvements so we handle reviews of long series of dependent changesets
 better.  At least in my experience, patches in a dependent series tend to
 get reviewed a bit randomly, and review effort is effectively lost on the
 later changesets when the inevitable rebase is required.

 Chris

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


This was pretty well discussed back in April and May IMO.

Suffice it to say, I'm very much against the idea of 'disabled features'
landing in trunk, and I'm also not a fan of the idea of an arbitrary max
lines of code per patch set.  A number of folks have pointed out that we're
getting better at things like feature-rush at the end of a cycle and our
own community best practices enforcement on patch size.  I think that
model works well in an Open Source environment, particularly one the size
of OpenStack with the varied interest and participation.

IMO, intentionally placing non-working (and thereby, as far as I'm
concerned, useless) code in the project with no testing, no documentation,
and worst of all no guarantee that anybody is ever going to work on said
code again is a bad idea.  The explosive growth of what OpenStack is, and
of all its projects, is already pretty difficult for folks to get wrapped
around, let alone if we start having this unbelievable matrix of flags,
parallel features, etc.

Anyway, a number of postings are no longer tracked in this thread it seems,
but there have been statements from Russell B, Thierry and Michael Still
that I strongly agree with here.

By the way for those that want to go back and read the entire thread again
see the archive from April [1]

[1]
http://lists.openstack.org/pipermail/openstack-dev/2013-April/008235.html

Thanks,
John
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Proposal to support new Cinder driver for CloudByte's Elastistor

2013-08-13 Thread John Griffith
Hi Amit,

I think part of what Thierry was alluding to was the fact that feature
freeze for Havana is next week.  Also, in the past we've been trying to
make sure that folks did not introduce BPs for new drivers in the last
release milestone.  There are other folks that are in this position;
however, they've also proposed the BPs for their drivers and sent updates
to the Cinder team since H1.

That being said, if you already have working code that you think is ready
and can be submitted, we can see what the rest of the Cinder team thinks.
No promises, though, that your code will make it in; there are a number of
things already in process that will take priority in terms of review time,
etc.

Thanks,
John


On Tue, Aug 13, 2013 at 8:42 AM, Amit Das amit@cloudbyte.com wrote:

 Thanks a lot... This should give us a head start.

 Regards,
 Amit
 *CloudByte Inc.* http://www.cloudbyte.com/


 On Tue, Aug 13, 2013 at 5:14 PM, Thierry Carrez thie...@openstack.org wrote:

 Amit Das wrote:
  We have implemented a Cinder driver for our QoS-aware storage solution
  (CloudByte Elastistor).
 
  We would like to integrate this driver code with the next version of
  OpenStack (Havana).
 
  Please let us know the approval processes to be followed for this new
  driver support.

 See https://wiki.openstack.org/wiki/Release_Cycle and
 https://wiki.openstack.org/wiki/Blueprints for the beginning of an
 answer.

 Note that we are pretty late in the Havana cycle, with lots of features
 that were proposed a long time ago still waiting for reviews and
 merging... so it's a bit unlikely that a new feature would be added now
 to that already-overloaded backlog.

 --
 Thierry Carrez (ttx)

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Proposal to support new Cinder driver for CloudByte's Elastistor

2013-08-13 Thread John Griffith
On Tue, Aug 13, 2013 at 9:01 AM, John Griffith
john.griff...@solidfire.comwrote:

 Hi Amit,

 I think part of what Thierry was alluding to was the fact that feature
 freeze for Havana is next week.  Also, in the past we've been trying to
 make sure that folks did not introduce BPs for new drivers in the last
 release milestone.  There are other folks that are in this position;
 however, they've also proposed the BPs for their drivers and sent updates
 to the Cinder team since H1.

 That being said, if you already have working code that you think is ready
 and can be submitted, we can see what the rest of the Cinder team thinks.
  No promises, though, that your code will make it in; there are a number of
 things already in process that will take priority in terms of review time,
 etc.

 Thanks,
 John


 On Tue, Aug 13, 2013 at 8:42 AM, Amit Das amit@cloudbyte.com wrote:

 Thanks a lot... This should give us a head start.

 Regards,
 Amit
 *CloudByte Inc.* http://www.cloudbyte.com/


  On Tue, Aug 13, 2013 at 5:14 PM, Thierry Carrez thie...@openstack.org wrote:

 Amit Das wrote:
  We have implemented a Cinder driver for our QoS-aware storage solution
  (CloudByte Elastistor).
 
  We would like to integrate this driver code with the next version of
  OpenStack (Havana).
 
  Please let us know the approval processes to be followed for this new
  driver support.

 See https://wiki.openstack.org/wiki/Release_Cycle and
 https://wiki.openstack.org/wiki/Blueprints for the beginning of an
 answer.

 Note that we are pretty late in the Havana cycle, with lots of features
 that were proposed a long time ago still waiting for reviews and
 merging... so it's a bit unlikely that a new feature would be added now
 to that already-overloaded backlog.

 --
 Thierry Carrez (ttx)

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



I should clarify my posting: next week (August 21st) is the FeatureProposal
freeze for the Cinder project.  Further explanation here: [1]

[1] https://wiki.openstack.org/wiki/FeatureProposalFreeze
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [OpenStack][Dev] Block Storage libraries and shared code

2013-08-12 Thread John Griffith
Hey,

There have been a couple of block storage related patches in Nova lately
and I wanted to get some discussion going and also maybe increase some
awareness on some efforts that were discussed at the last summit.  To catch
up a bit here's the etherpad from the summit session [1].

First off, there was a patch to move Nova's LVM code into Oslo (here [2]).
 This one is probably my fault for not having enough awareness out there
regarding our plans/goals with brick.  I'd like to hear from folks if the
brick approach is not sufficient or if there's some other reason that it's
not desirable (hopefully it's just that folks didn't know about it).

For reference/review the latest version of the brick/local_dev/lvm code is
here: [4].

One question we haven't answered yet is where this code should ultimately
live.  Should it be in Oslo, or should it be a separate library that's part
of Cinder and can be imported by other projects?  I'm mixed on this for a
number of reasons, but I think either approach is fine.
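
To make the library option concrete, the consuming side might look roughly
like the sketch below; the constructor and method names are assumptions
based on the code in [4], not a settled API:

    # Hypothetical consumer of the shared brick LVM code.
    from cinder.brick.local_dev import lvm

    vg = lvm.LVM('cinder-volumes')           # wrap an existing volume group
    vg.create_volume('volume-1234', '1G')    # carve out a 1G logical volume
    print(vg.get_volume('volume-1234'))      # inspect what was created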

The next item around this topic that came up was a patch to add support for
using RBD for local volumes in Nova (here [3]).  You'll notice a number of
folks mentioned brick on this, and I think that's the correct answer.  At
the same time, while I think that's the right answer long term, I would
also hate to see this feature NOT go into H just because folks weren't
aware of what was going on in brick.  It's a bit late in the cycle, so my
thought on this is that I'd like to see this resubmitted using the
brick/common approach.  If that can't be done between now and the feature
freeze for H3, I'd rather see the patch go in as-is than have the feature
not be present at all for another release.  We can then address this when
we have a better story in place for brick.


[1] https://etherpad.openstack.org/havana-cinder-local-storage-library
[2] https://review.openstack.org/#/c/40795/
[3] https://review.openstack.org/#/c/36042/15
[4] https://review.openstack.org/#/c/38172/11/cinder/brick/local_dev/lvm.py
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Extension to volume creation (filesystem and label)

2013-08-12 Thread John Griffith
On Mon, Aug 12, 2013 at 9:15 AM, Vishvananda Ishaya
vishvana...@gmail.com wrote:

 This would need to happen on the Cinder side at creation. I don't think it
 is safe for Nova to be modifying the contents of the volume on attach. That
 said, Nova does currently set the serial number on attach (for libvirt at
 least), so the volume will show up as:

 /dev/disk/by-id/virtio-<uuid>

 Although the uuid gets truncated.

 Vish

 On Aug 10, 2013, at 10:11 PM, Greg Poirier greg.poir...@opower.com
 wrote:

  Since we can't guarantee that a volume, when attached, will become a
 specified device name, we would like to be able to create a filesystem and
 label it (so that we can programmatically interact with it when
 provisioning systems, services, etc).
 
  What we are trying to decide is whether this should be the
 responsibility of Nova or Cinder. Since Cinder essentially has all of the
 information about the volume and is already responsible for creating the
 volume (against the configured backend), why not also give it the ability
 to mount the volume (assuming support for it on the backend exists), run
 mkfs.<filesystem_type>, and then use tune2fs to label the volume with (for
 example) the volume's UUID?
 
  This way we can programmatically do:
 
  mount /dev/disk/by-label/<UUID> /mnt/point
 
  This is more or less a functional requirement for our provisioning
 service, and I'm wondering also:
 
  - Is anyone else doing this already?
  - Has this been considered before?
 
  We will gladly implement this and submit a patch against Cinder or Nova.
 We'd just like to make sure we're heading in the right direction and making
 the change in the appropriate part of OpenStack.
 
  Thanks,
 
  Greg Poirier
  Opower - Systems Engineering
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


The virtio-<uuid> method Vish described has worked pretty well for me, so
hopefully that will work for you.  I also don't like the idea of doing a
partition/format on attach in compute; it seems like an easy path to
inadvertently losing your data.
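
For anyone scripting against that, a sketch of resolving the device node
from the volume UUID, assuming the common virtio behaviour of truncating
the serial to the first 20 characters of the UUID:

    import os

    def device_for_volume(volume_id):
        # Nova sets the volume UUID as the disk serial; udev exposes it
        # under /dev/disk/by-id, truncated for virtio devices.
        path = '/dev/disk/by-id/virtio-' + volume_id[:20]
        return os.path.realpath(path)   # e.g. /dev/vdb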

If you still want to look at adding the partition/format functionality to
Cinder, it's an interesting idea, but to be honest I've discounted it in
the past because it just seemed safer and more flexible to leave it to the
instance rather than trying to cover all of the possible partition schemes,
FS types, etc.
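
For what it's worth, the format-and-label step itself is small; a minimal
sketch, assuming ext4 and shelling out to the standard tools, with no
error handling:

    import subprocess

    def make_labeled_fs(device, label):
        # mkfs.ext4 sets the label at creation time with -L (tune2fs -L
        # would relabel an existing filesystem).  Note that ext4 labels
        # are capped at 16 characters, so a full 36-character volume
        # UUID will not fit unmodified.
        subprocess.check_call(['mkfs.ext4', '-L', label, device])

    make_labeled_fs('/dev/vdb', 'vol-1234')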

Thanks,
John
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Extension to volume creation (filesystem and label)

2013-08-12 Thread John Griffith
On Mon, Aug 12, 2013 at 10:52 AM, Fox, Kevin M kevin@pnnl.gov wrote:

 It may make the dependency tree a bit weird, but Cinder could use Nova to
 do the actual work: make a bare-minimum image that Cinder fires up under
 Nova, attaches the volumes to, and then does the partitioning/formatting.
 Once set up, the VM can be terminated. This has the benefit of reusing a
 lot of code in Cinder and Nova. It also would provide a lot of protection
 by keeping dangerous code like formatting from being able to see disks not
 intended to be formatted. The API would live under Cinder, as the Nova
 stuff would simply be an implementation detail the user need not know
 about.

 Thanks,
 Kevin


There have been a number of things folks have talked about implementing
worker instances in Cinder for.  What you're describing would be one of
them.  To be honest though I've never been crazy about the idea of
introducing a Nova dependency in Cinder like that.  Just doesn't seem to me
that in most cases the extra complexity has that great of a return but I
could be wrong.



 
 From: Greg Poirier [greg.poir...@opower.com]
 Sent: Monday, August 12, 2013 9:37 AM
 To: OpenStack Development Mailing List
 Subject: Re: [openstack-dev] Extension to volume creation (filesystem and
   label)

 On Mon, Aug 12, 2013 at 9:18 AM, John Griffith
 john.griff...@solidfire.com wrote:

 On Mon, Aug 12, 2013 at 9:15 AM, Vishvananda Ishaya
 vishvana...@gmail.com wrote:
 This would need to happen on the Cinder side at creation. I don't think it
 is safe for Nova to be modifying the contents of the volume on attach. That
 said, Nova does currently set the serial number on attach (for libvirt at
 least), so the volume will show up as:

 /dev/disk/by-id/virtio-<uuid>

 Although the uuid gets truncated.

 Vish

 I missed this on my first pass through. Thanks for pointing that out.

 We still like the idea of creating the filesystem (to make block storage
 truly self-service for developers), but we might be able to work around
 that. It seems that my initial feeling that this would be dealt with in
 Cinder was correct, though.

 The virtio-<uuid> method Vish described has worked pretty well for me, so
 hopefully that will work for you.  I also don't like the idea of doing a
 partition/format on attach in compute; it seems like an easy path to
 inadvertently losing your data.

 We could track the state of the filesystem somewhere in the Cinder model.
 Only try to initialize it once.

 If you still want to look at adding the partition/format functionality to
 Cinder, it's an interesting idea, but to be honest I've discounted it in
 the past because it just seemed safer and more flexible to leave it to the
 instance rather than trying to cover all of the possible partition schemes,
 FS types, etc.

 Oh, we don't want to get super fancy with it. We would probably only
 support one filesystem type and not partitions. E.g., you request a 120GB
 volume and you get a 120GB ext4 FS mountable by label.

 It may potentially not be worth the effort, ultimately. We'll have to
 continue our discussions internally... particularly since now I know where
 a useful identifier for the volume is under the dev fs.

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack][Dev] Block Storage libraries and shared code

2013-08-12 Thread John Griffith
On Mon, Aug 12, 2013 at 1:06 PM, Russell Bryant rbry...@redhat.com wrote:

 On 08/12/2013 02:56 PM, Vishvananda Ishaya wrote:
 
  On Aug 12, 2013, at 8:55 AM, John Griffith john.griff...@solidfire.com
  wrote:
 
  Hey,
 
  There have been a couple of block storage related patches in Nova
  lately and I wanted to get some discussion going and also maybe
  increase some awareness on some efforts that were discussed at the
  last summit.  To catch up a bit here's the etherpad from the summit
  session [1].
 
  First off, there was a patch to move Nova's LVM code into Oslo (here
  [2]).  This one is probably my fault for not having enough awareness
  out there regarding our plans/goals with brick.  I'd like to hear from
  folks if the brick approach is not sufficient or if there's some other
  reason that it's not desirable (hopefully it's just that folks didn't
  know about it).
 
  For reference/review the latest version of the brick/local_dev/lvm
  code is here: [4].
 
  One question we haven't answered yet is where this code should
  ultimately live.  Should it be in Oslo, or should it be a separate
  library that's part of Cinder and can be imported by other projects?
   I'm mixed on this for a number of reasons, but I think either approach
  is fine.
 
  The next item around this topic that came up was a patch to add
  support for using RBD for local volumes in Nova (here [3]).  You'll
  notice a number of folks mentioned brick on this, and I think that's
  the correct answer.  At the same time while I think that's the right
  answer long term I also would hate to see this feature NOT go in to H
  just because folks weren't aware of what was going on in Brick.  It's
  a bit late in the cycle so my thought on this is that I'd like to see
  this resubmitted using the brick/common approach.  If that can't be
  done between now and the feature freeze for H3 I'd rather see the
  patch go in as is than have the feature not be present at all for
  another release.  We can then address this when we get a better story
  in place for brick.
 
  It seems like the key question is whether or not the nova code is going
  to be replaced by brick by Havana. If not, then this should go in as-is.

 +1.  I was still expecting that it was.  If not, I'm happy to go with this.

 What's the status on this work?

 https://blueprints.launchpad.net/nova/+spec/refactor-iscsi-fc-brick

 --
 Russell Bryant

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


It's still planned to go in (hopefully in the next day or two [at least the
nova submission should be up]).  There are a couple of fixes under review
on the Cinder side right now and a Nova patch is ready to go once those
merge.  I'll see if we can't get the Nova version uploaded today at least
as a WIP pending the fixes in progress on the Cinder side.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


<    1   2   3   4   5   >