[openstack-dev] [trove] next two weekly meetings

2013-12-18 Thread Michael Basnight
We are canceling our next two weekly meetings. They occur on Dec 25 and Jan
1. See you all on the 8th for our next regularly scheduled trove meeting.


Re: [openstack-dev] [trove] Delivering datastore logs to customers

2013-12-18 Thread Michael Basnight
I think this is a good idea and I support it. In today's meeting [1] there
were some questions, and I encourage folks to bring them up here. My only
question is in regard to the tail of a file we discussed in irc. After
talking about it w/ other trovesters, I think it doesn't make sense to tail
the log for most datastores. I can't imagine finding anything useful in,
say, a Java application's last 100 lines (especially if a stack trace was
present). But I don't want to derail, so let's try to focus on the deliver
to swift option first.

[1]
http://eavesdrop.openstack.org/meetings/trove/2013/trove.2013-12-18-18.13.log.txt

On Wed, Dec 18, 2013 at 5:24 AM, Denis Makogon dmako...@mirantis.com wrote:

 Greetings, OpenStack DBaaS community.

 I'd like to start a discussion around a new feature in Trove. The
 feature I would like to propose covers manipulating database log files.


 Main idea: give the user an ability to retrieve database log files for any
 purpose.

 Goals to achieve: suppose we have an application (a binary application,
 without source code) which requires a DB connection to perform data
 manipulations, and a user would like to perform development and debugging of
 that application; logs would also be useful for the audit process. Trove itself
 provides access only for CRUD operations inside the database, so the user
 cannot access the instance directly and analyze its log files. Therefore,
 Trove should be able to provide ways to allow a user to download the
 database log for analysis.


 Log manipulations are designed to let the user perform log investigations.
 Since Trove is a PaaS-level project, its user cannot interact with the
 compute instance directly, only with the database through the provided API
 (database operations).

 I would like to propose the following API operations:

    1. Create DBLog entries.
    2. Delete DBLog entries.
    3. List DBLog entries.

 Possible API, models, server, and guest configurations are described at
 the wiki page [1].

 [1] https://wiki.openstack.org/wiki/TroveDBInstanceLogOperation
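
(For illustration only: a rough sketch of how the three proposed operations
could look over the API, written as Python/requests calls. The endpoint
paths, payload fields, and auth handling here are hypothetical; the real
proposal lives on the wiki page [1].)

    import requests

    BASE = "https://trove.example.com/v1.0/tenant-id/instances/instance-id"
    HDRS = {"X-Auth-Token": "keystone-token"}  # placeholder token

    # 1. Create a DBLog entry (e.g. snapshot the current log to Swift).
    requests.post(BASE + "/dblogs", headers=HDRS,
                  json={"dblog": {"log_type": "general"}})

    # 2. List DBLog entries for the instance.
    requests.get(BASE + "/dblogs", headers=HDRS)

    # 3. Delete a DBLog entry.
    requests.delete(BASE + "/dblogs/dblog-id", headers=HDRS)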





-- 
Michael Basnight


[openstack-dev] [trove] Proposal to add Auston McReynolds to trove-core

2013-12-27 Thread Michael Basnight
Howdy,

I'm proposing Auston McReynolds (amcrn) to trove-core.

Auston has been working with trove for a while now. He is a great reviewer.
He is incredibly thorough, has caught more than one critical error in
reviews, and helps connect large features that may overlap (config edits +
multi datastores comes to mind). The code he submits is top notch, and we
frequently ask for his opinion on architecture / feature / design.

https://review.openstack.org/#/dashboard/8214
https://review.openstack.org/#/q/owner:8214,n,z
https://review.openstack.org/#/q/reviewer:8214,n,z

Please respond with +1/-1, or any further comments.


Re: [openstack-dev] [trove] Proposal to add Auston McReynolds to trove-core

2014-01-02 Thread Michael Basnight
Hi Ilya,

I greatly appreciate what Denis is providing to the Trove community. Thank
you for letting him devote his time to trove. Denis and I have spoken
privately at length about core (most recently a few weeks ago), and I
believe he has a good idea of how to grow into a core member. Please have a
conversation with him and ask what I and the other core team members have
said to him privately.

I do not think a forum like this is a good place to discuss why people who
weren't yet nominated _aren't_ being nominated yet. If you have concerns,
please bring them up with the TC and myself.


On Thu, Jan 2, 2014 at 4:30 AM, Ilya Sviridov isviri...@mirantis.com wrote:

 Hello Trove team
 Hello Michael

 I believe that Auston does a great job, and I personally think that his
 reviews are always thorough and reasonable.
 But it is surprising not to see Denis Makogon (dmakogon,
 denis_makogon) as a candidate for core.

 He is a well-known, active community player, driving Heat integration and
 Cassandra support in Trove. He takes part in all technical discussions
 and infrastructure work as well.
 Denis also actively helps newcomers and is always responsive in IRC chat.

 Just looking at his code review statistics [1][2][3] and trove weekly
 meeting participation [4], I am astonished that his name is not in the
 first email.

 [1] http://www.russellbryant.net/openstack-stats/trove-reviewers-180.txt
 [2] http://www.russellbryant.net/openstack-stats/trove-reviewers-90.txt
 [3] http://www.russellbryant.net/openstack-stats/trove-reviewers-30.txt
 [4] http://eavesdrop.openstack.org/meetings/trove/2013/

 With best regards,
 Ilya Sviridov



 On Tue, Dec 31, 2013 at 3:34 PM, Paul Marshall
 paul.marsh...@rackspace.com wrote:
  +1
 
  On Dec 30, 2013, at 1:13 PM, Vipul Sabhaya vip...@gmail.com wrote:
 
  +1
 
  Sent from my iPhone
 
  On Dec 30, 2013, at 10:50 AM, Craig Vyvial cp16...@gmail.com wrote:
 
  +1
 
 
  On Mon, Dec 30, 2013 at 12:00 PM, Greg Hill greg.h...@rackspace.com
 wrote:
  +1
 
  On Dec 27, 2013, at 4:48 PM, Michael Basnight mbasni...@gmail.com
 wrote:
 
  Howdy,
 
  I'm proposing Auston McReynolds (amcrn) to trove-core.
 
  Auston has been working with trove for a while now. He is a great
 reviewer. He is incredibly thorough, has caught more than one critical
 error in reviews, and helps connect large features that may overlap
 (config edits + multi datastores comes to mind). The code he submits is top
 notch, and we frequently ask for his opinion on architecture / feature /
 design.
 
  https://review.openstack.org/#/dashboard/8214
  https://review.openstack.org/#/q/owner:8214,n,z
  https://review.openstack.org/#/q/reviewer:8214,n,z
 
  Please respond with +1/-1, or any further comments.




-- 
Michael Basnight


[openstack-dev] [trove] mid cycle meetup

2014-01-03 Thread Michael Basnight
Howdy,

In the spirit of getting together to get stuff done, Trove is having a mid
cycle meetup in Austin, TX. We are having it Wednesday, February 19, 2014
to Friday, February 21, 2014, in downtown Austin at the Capital Factory,
which is a hip startup incubator. See [1] for more information.

The agenda is still somewhat up in the air, but we will be covering
clustering, trove tempest testing, and heat integration at present. To me,
this will be focused on part design + part implementation of specific
features, so if you have an agenda item please bring it up! I'll be
constructing a loosey-goosey agenda in the coming weeks.

[1] https://wiki.openstack.org/wiki/Trove/IcehouseCycleMeetup


Re: [openstack-dev] [trove] Proposal to add Auston McReynolds to trove-core

2014-01-06 Thread Michael Basnight
 On Dec 27, 2013, at 2:48 PM, Michael Basnight mbasni...@gmail.com wrote:
 
 Howdy,
 
 I'm proposing Auston McReynolds (amcrn) to trove-core.

The people have spoken. Welcome to trove core, Auston. 



Re: [openstack-dev] [Trove] how to list available configuration parameters for datastores

2014-01-22 Thread Michael Basnight
On Jan 22, 2014, at 10:19 AM, Kaleb Pomeroy wrote:

 My thoughts so far: 
 
 /datastores/datastore/configuration/parameters (Option Three)
 + configuration set without an associated datastore is meaningless
 + a configuration set must be associated to exactly one datastore
 + each datastore must have 0-1 configuration set
 + All above relationships are immediately apparent 
 - Listing all configuration sets becomes more difficult (which I don't think 
 is a valid concern)

+1 to option 3, given what kaleb and craig have outlined so far. I don't see the 
above minus as a valid concern either, kaleb.
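
For concreteness, option three would shape the lookup roughly like this (a
sketch only; the path and field names are not settled):

    # Hypothetical: GET /v1.0/{tenant}/datastores/{datastore}/configuration/parameters
    parameters_response = {
        "configuration-parameters": [
            {"name": "max_connections", "type": "integer",
             "min": 1, "max": 100000, "restart_required": False},
            {"name": "innodb_buffer_pool_size", "type": "integer",
             "min": 0, "max": 68719476736, "restart_required": True},
        ]
    }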





Re: [openstack-dev] [All] Code proposal deadline for Icehouse

2014-01-23 Thread Michael Basnight

On Jan 23, 2014, at 5:10 PM, Mark McClain wrote:

 
 On Jan 23, 2014, at 5:02 PM, Russell Bryant rbry...@redhat.com wrote:
 
 Greetings,
 
  Last cycle we had a feature proposal deadline across some projects.
 This was the date that code associated with blueprints had to be posted
 for review to make the release.  This was in advance of the official
 feature freeze (merge deadline).
 
 Last time this deadline was used by 5 projects across 3 different dates [1].
 
 I would like to add a deadline for this again for Nova.  I'm thinking 2
 weeks ahead of the feature freeze right now, which would be February 18th.
 
 I'm wondering if it's worth coordinating on this so the schedule is less
 confusing.  Thoughts on picking a single date?  How's Feb 18?
 
 I like the idea of selecting a single date. Feb 18th fits with the timeline 
 the Neutron team has used in the past.

So, Feb 19~21 is the trove mid cycle sprint, which means we might push last 
minute finishing touches on things during those 3 days. I'd prefer the next week 
of Feb if at all possible. Otherwise I'm ok w/ FFEs and such if I'm in the 
minority, because I do think a single date would be best for everyone. 

So, +0 from trove. :D




Re: [openstack-dev] [Trove] Backup/Restore encryption/decryption issue

2014-02-11 Thread Michael Basnight
Denis Makogon dmako...@mirantis.com writes:

 Good day, OpenStack DBaaS community.


 I'd like to start a conversation about a guestagent security issue related
 to the backup/restore process. The Trove guestagent service uses AES with a 256-bit
 key (in CBC mode) [1] to encrypt backups, which are stored in a predefined
 Swift container.

 As you can see, the password is defined in the config file [2]. And here comes the
 problem: this password is used for all tenants/projects that use Trove, which
 is a security issue. I would like to suggest a key derivation function (KDF) [3]
 based on static attributes specific to each tenant/project (tenant_id).
 The KDF would be based upon a Python implementation of PBKDF2 [4]. An implementation
 can be seen here [5].
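
(For concreteness, a minimal sketch of the per-tenant derivation being
proposed, using the PBKDF2 routine in Python's stdlib hashlib; the salt and
iteration choices here are illustrative, not the blueprint's:)

    import hashlib

    def derive_backup_key(tenant_id, password=None, iterations=10000):
        # Fall back to a static tenant attribute when the user supplies
        # no password, as proposed above.
        secret = (password or tenant_id).encode("utf-8")
        salt = tenant_id.encode("utf-8")  # illustrative; a real KDF salt
                                          # should not double as the secret
        # 32-byte key, sized for the AES-256-CBC backup strategy in [1].
        return hashlib.pbkdf2_hmac("sha256", secret, salt, iterations,
                                   dklen=32)

    key = derive_backup_key("3f8c-example-tenant-id")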

I do not want to see us writing our own crypto code in Trove. I'd much
rather we use barbican for this, assuming it fits the bill. Let's do some
research on barbican before we go write this all.


 Also, I'm looking forward to giving the user an ability to pass a password for
 the KDF that would deliver the key for backup/restore encryption/decryption; if
 the ingress password (from the user) is empty, the guest will use static
 attributes of the tenant (tenant_id).

 To allow backward compatibility, python-troveclient should be able to pass the
 old password [1] to the guestagent as one of the parameters on the restore call.

 A blueprint has already been registered in the Trove launchpad space [6].

 I also foresee porting this feature to oslo-crypt, as part of the security
 framework (oslo.crypto) extensions.

Again, I'd rather see us use barbican for this instead of creating oslo-crypt.


 Thoughts ?

 [1]
 https://github.com/openstack/trove/blob/master/trove/guestagent/strategies/backup/base.py#L113-L116
 [2]
 https://github.com/openstack/trove/blob/master/etc/trove/trove-guestagent.conf.sample#L69
 [3] http://en.wikipedia.org/wiki/Key_derivation_function
 [4] http://en.wikipedia.org/wiki/PBKDF2
 [5] https://gist.github.com/denismakogon/8823279
 [6] https://blueprints.launchpad.net/trove/+spec/backup-encryption

 Best regards,
 Denis Makogon
 Mirantis, Inc.
 Kharkov, Ukraine
 www.mirantis.com
 www.mirantis.ru
 dmako...@mirantis.com




Re: [openstack-dev] [Trove] Replication Contract Verbiage

2014-02-11 Thread Michael Basnight
Daniel Salinas imsplit...@gmail.com writes:

 https://wiki.openstack.org/wiki/Trove-Replication-And-Clustering-API#REPLICATION

 I have updated the wiki page to reflect the current proposal for
 replication verbiage, with some explanation of the choices.  I would like to
 open discussion here regarding that verbiage.  Without completely
 duplicating everything I just wrote in the wiki, here are the proposed words
 that could be used to describe replication between two datastore instances
 of the same type.  Please take a moment to consider them and let me know
 what you think.  I welcome all feedback.

 replicates_from:  This term will be used in an instance that is a slave of
 another instance. It is a clear indicator that it is a slave of another
 instance.

 replicates_to: This term will be used in an instance that has slaves of
 itself. It is a clear indicator that it is a master of one or more
 instances.

Nice work daniel. I think these are quite sane. They are pretty agnostic
to the datastore type. The only thing i remember Stewart Smith saying
was that these may not _both_ _always_ apply to all datastores. So I'm
assuming we'll need a builtin way to say that a given datastore/replication
type may not support both of these (or may not have a need to expose it
like this).

 writable: This term will be used in an instance to indicate whether it is
 intended to be used for writes. As replication is used commonly to scale
 read operations it is very common to have a read-only slave in many
 datastore types. It is beneficial to the user to be able to see this
 information when viewing the instance details via the api.

Sounds reasonable. But how do we view a multi-tier slave? aka, a slave
to a slave. Is it both read only and writable, so to speak, depending on
where you are in the cluster hierarchy?
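
(For what it's worth, the proposed fields do seem to compose for the
multi-tier case; a sketch of hypothetical instance-details payloads, shape
illustrative only:)

    master = {"id": "m1", "replicates_to": [{"id": "s1"}], "writable": True}
    s1 = {
        # mid-tier: a slave that itself has a slave, so it carries both fields
        "id": "s1",
        "replicates_from": {"id": "m1"},
        "replicates_to": [{"id": "s2"}],
        "writable": False,  # read-only to clients, yet a source for s2
    }
    s2 = {"id": "s2", "replicates_from": {"id": "s1"}, "writable": False}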

 The intention here is to:
 1.  have a clearly defined replication contract between instances.
 2.  allow users to create a topology map simply by querying the api for
 details of instances linked in the replication contracts
 3.  allow the greatest level of flexibility for users when replicating
 their data so that Trove doesn't prescribe how they should make use of
 replication.

 I also think there is value in documenting common replication topologies
 per datastore type, with example replication contracts and/or steps to
 recreate them, in our api documentation.  There are currently no examples of
 this yet.

++

 e.g. To create multi-master replication in mysql...

 As previously stated I welcome all feedback and would love input.

 Regards,

 Daniel Salinas




Re: [openstack-dev] [Openstack] [TROVE] Manual Installation: problems with trove-manage

2014-02-13 Thread Michael Basnight
Giuseppe Galeota giuseppegale...@gmail.com writes:

 Hi Michael,
 I'm using only this guide:
 http://docs.openstack.org/developer/trove/dev/manual_install.html.

That developer guide uses virtualenv, but it's by no means necessary.

 Can you help me with a more useful guide that makes trove working?

I'd love to help you out. #openstack-trove has a bunch of people who have
installed trove for dev and prod use, so let's make the document better!




Re: [openstack-dev] [TROVE] Trove capabilities

2014-03-07 Thread Michael Basnight
Denis Makogon dmako...@mirantis.com writes:

 Let me elaborate a bit.

 [...]

 Without storing a static API contract, capabilities are going to be 100%
 corrupted, because there's no API description inside Trove.

I'm not sure I understand what you mean here.

 About reloading: the file should not be reloaded; the file is a static unmodifiable
 template. As the response, the user sees the combination of the described contract
 (from the file) and all capabilities inside the database table (as I described in
 my first email). To simplify file usage we could deserialize it into a dict
 object and make it a singleton to use less memory.

I'm not a fan, at all, of a file-based config for this. It will be
exposed to users, and can be modeled in the database just fine.

 About Kaleb's design and the whole approach: without blocking/allowing certain
 capabilities for datastores, the whole approach is nothing more than a help
 document, and this is not acceptable.

Incorrect. It's a great first approach. We can use this, and the entries
in the db, to conditionally expose things in the extensions.

 The approach of capabilities is: at runtime, block/allow tasks to be executed
 over certain objects. That's why I stand for re-reviewing the whole design from
 scratch with all contributors.

We can do this _using_ Kaleb's work in the future, when we refactor
extensions. We've had this conversation already, a few months ago when
the redis guys first came aboard. A re-review is not needed. Kaleb's stuff
is completely valid, and anything you are talking about can be used by
the user facing capabilities.
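
To make "conditionally expose" concrete, here is a rough sketch of what
gating extensions on DB-backed capability rows could look like (everything
here is hypothetical, not Kaleb's actual change):

    # Illustrative capability rows, keyed by (datastore, extension name).
    CAPABILITIES = {("mysql", "users"): True,
                    ("redis", "users"): False}

    def enabled_extensions(datastore, all_extensions=("users", "databases")):
        # Expose only the extensions whose capability row is enabled.
        return [ext for ext in all_extensions
                if CAPABILITIES.get((datastore, ext), False)]

    print(enabled_extensions("redis"))  # -> []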

ON A SEPARATE NOTE

This is one of the hardest message threads I've ever read.  If you guys
(everyone on the thread, I'm talking to you) don't use bottom posting, and
quote the sections you are answering, then it's nearly impossible to
follow. Plz follow the conventions the rest of the OpenStack community
uses. So if there is more to discuss, let's continue, using the above as the
proper way to do it.




[openstack-dev] PTL Candidacy

2014-03-31 Thread Michael Basnight
Howdy Trovesters and co,

I would like to announce that I will _not_ be running for the Trove PTL
this cycle. We have some smart people who can step up and keep the
momentum going.

It's been a wild ride going into integration, and I feel like it's someone
else's turn to have the fun that I have had over the last 6mo (and before,
when Trove was a wee little RedDwarf). Thanks to the other PTLs for helping
me shape my PTL role into something tangible. And of course a special shout
out to ttx for helping to keep me focused :) I will still be working on
Trove full time.

-- 
Michael Basnight


Re: [openstack-dev] [Trove] Testing of new service types support

2013-10-21 Thread Michael Basnight
Top posting…

I'd like to see these in the tempest tests. I'm just getting started integrating 
trove into tempest for testing, and there are some prerequisites that I'm 
working thru with the infra team. Progress is being made though. I'd rather not 
see them go into 2 different test suites if we can just get them into the 
tempest tests. Let's hope the stars line up so that you can start testing in 
tempest. :)

On Oct 21, 2013, at 9:25 AM, Illia Khudoshyn wrote:

 Hi Tim,
 
 Thanks for a quick reply. I'll go with updating run_tests.py for now. Hope, 
 Andrey Shestakov's changes arrive soon.
 
 Best wishes.
 
 
 
 On Mon, Oct 21, 2013 at 7:01 PM, Tim Simpson tim.simp...@rackspace.com 
 wrote:
 Hi Illia,
 
 You're correct; until the work on establishing datastore types and versions 
 as a first class Trove concept is finished, which will hopefully be soon (see 
 Andrey Shestakov's pull request), testing non-MySQL datastore types will be 
 problematic.
 
 A short term, fake-mode only solution could be accomplished fairly quickly as 
 follows: run the fake mode tests a third time in Tox with a new configuration 
 which allows for MongoDB. 
 
 If you look at tox.ini, you'll see that the integration tests run in fake 
 mode twice already:
 
  {envpython} run_tests.py
  {envpython} run_tests.py --test-config=etc/tests/xml.localhost.test.conf
 
 The second invocation causes the trove-client to be used in XML mode, 
 effectively testing the XML client. 
 
 (Tangent: currently running the tests twice takes some time, even in fake 
 mode - however it will cost far less time once the following pull request is 
 merged: https://review.openstack.org/#/c/52490/)
 
 If you look at run_tests.py, you'll see that on line 104 it accepts a trove 
 config file. If the run_tests.py script is updated to allow this value to be 
 specified optionally via the command line, you could create a variation on 
 etc/trove/trove.conf.test which specifies MongoDB. You'd then invoke 
 run_tests.py with a --group= argument to run some subset of the tests 
 supported by the current Mongo DB code in fake mode.
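
 The run_tests.py change described here could be as small as the following
 (a sketch with a made-up option name, not the actual script):

    # Hypothetical sketch: make the trove config path a CLI option so a
    # MongoDB variant of trove.conf.test can be selected per run.
    import argparse

    parser = argparse.ArgumentParser()
    parser.add_argument("--trove-conf",
                        default="etc/trove/trove.conf.test",
                        help="trove config; point at a MongoDB variant to "
                             "fake-test another datastore")
    parser.add_argument("--group", help="subset of tests to run")
    args, _ = parser.parse_known_args()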
 
 Of course, this will do nothing to test the guest agent changes or confirm 
 that the end to end system actually works, but it could help test a lot of 
 incidental API and infrastructure database code.
 
 As for real mode tests, I think we should wait until the datastore type / 
 version code is finished, at which point I know we'll all be eager to add 
 additional tests for these new datastores. Of course in the short term it 
 should be possible for you to change the code locally to build a Mongo DB 
 image as well as a Trove config file to support this and then just run some 
 subset of tests that works with Mongo.
 
 Thanks,
 
 Tim 
 
 
 From: Illia Khudoshyn [ikhudos...@mirantis.com]
 Sent: Monday, October 21, 2013 9:42 AM
 To: OpenStack Development Mailing List
 Subject: [openstack-dev] [Trove] Testing of new service types support
 
 Hi all,
 
 I'm done with implementing the very first bits of MongoDB support in Trove 
 along with unit tests, and I've faced an issue with properly testing it. 
 
 It is well known that right now only one service type per installation is 
 supported by Trove (it is set in the config). All testing infrastructure, 
 including the Trove-integration codebase and jenkins jobs, seems to rely on that 
 service type as well. So it seems to be impossible to run all existing tests 
 AND some additional tests for the MongoDB service type in one pass, at least 
 until the Trove client allows passing a service type (I know that there is 
 ongoing work in this area).
 
 Please note that all of the above is about functional and integration 
 testing -- there are no issues with unit tests.
 
 So the question is, should I first submit the code to Trove and then proceed 
 with updating Trove-integration, or just put aside all that MongoDB stuff 
 until the client (and -integration) are ready?
 
 PS AFAIK, there is some work on adding Cassandra and Riak (or Redis?) support 
 to Trove. These guys will likely face this issue as well.
 
 -- 
 Best regards,
 Illia Khudoshyn,
 Software Engineer, Mirantis, Inc.
  
 38, Lenina ave. Kharkov, Ukraine
 www.mirantis.com
 www.mirantis.ru
  
 Skype: gluke_work
 ikhudos...@mirantis.com
 
 
 
 
 
 -- 
 Best regards,
 Illia Khudoshyn,
 Software Engineer, Mirantis, Inc.
  
 38, Lenina ave. Kharkov, Ukraine
 www.mirantis.com
 www.mirantis.ru
  
 Skype: gluke_work
 ikhudos...@mirantis.com




Re: [openstack-dev] [Trove] Testing of new service types support

2013-10-21 Thread Michael Basnight

On Oct 21, 2013, at 10:02 AM, Tim Simpson wrote:

 Can't we say that about nearly any feature though? In theory we could put a 
 hold on any tests for feature work, saying they 
 will need to be redone when Tempest integration is finished.
 
 Keep in mind what I'm suggesting here is a fairly trivial change to get some 
 validation via the existing fake mode / integration tests at a fairly small 
 cost.

Of course we can do the old tests. And for this it might be the best thing. The 
problem I see is that we can't do real integration tests w/o this work, and I 
don't want to integrate a bunch of different service_types w/o tests that 
actually spin them up and run the guest, which is where 80% of the new code 
lives for a new service_type. Otherwise we are running fake-guest stuff that is 
not a good representation. 

For the api stuff, sure that's fine. I just think the overall coverage of the 
review will be quite low if we are only testing the API via fake code.




Re: [openstack-dev] [Trove] How users should specify a datastore type when creating an instance

2013-10-21 Thread Michael Basnight

On Oct 18, 2013, at 12:30 PM, Tim Simpson wrote:

 1. I think since we have two fields in the instance object we should make a 
 new object for datastore and avoid the name prefixing, like this:

I agree with this.

 2. I also think a datastore_version alone should be sufficient since the 
 associated datastore type will be implied:

When I brought this up it was generally discussed as being confusing. I'd like 
to use type and rely on having a default (or active) version behind the scenes.

 3. Additionally, while a datastore_type should have an ID in the Trove 
 infastructure database, it should also be possible to pass just the name of 
 the datastore type to the instance call, such as mysql or mongo. Maybe we 
 could allow this in addition to the ID? I think this form should actually use 
 the argument type, and the id should then be passed as type_id instead.

I'd prefer this, honestly.

 4. Additionally, in the current pull request to implement this it is possible 
 to avoid passing a version, but only if no more than one version of the 
 datastore_type exists in the database. 
 
 I think instead the datastore_type row in the database should also have a 
 default_version_id property, that an operator could update to the most 
 recent version or whatever other criteria they wish to use, meaning the call 
 could become this simple:

Since we have determined from this email thread that we have an active status, 
and that > 1 version can be active, we have to think about the precedence of 
active vs default. My question would be, if we have a default_version_id and an 
active version, what do we choose on behalf of the user? If there is > 1 active 
version and a user does not specify the version, the api will error out, unless 
a default is defined. We also need a default_type in the config so the existing 
APIs can maintain compatibility. We can re-discuss this for v2 of the API.
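
For reference, the create call under discussion would carry the datastore
selection roughly like this (a hypothetical request body; the real field
names are in the pull request):

    # POST /v1.0/{tenant}/instances -- omitting "version" below is what
    # triggers the default/active resolution being debated above.
    create_body = {
        "instance": {
            "name": "my-db",
            "flavorRef": "7",
            "volume": {"size": 2},
            "datastore": {"type": "mysql", "version": "5.5"},
        }
    }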




Re: [openstack-dev] [Trove] How users should specify a datastore type when creating an instance

2013-10-21 Thread Michael Basnight

On Oct 21, 2013, at 1:40 PM, Tim Simpson wrote:

 2. I also think a datastore_version alone should be sufficient since the 
 associated datastore type will be implied:
 
 When i brought this up it was generally discussed as being confusing. Id 
 like to use type and rely on having a default (or active) version behind the 
 scenes.
 
 Can't we do both? If a user wants a specific version, most likely they had to 
 enumerate all datastore_versions, spot it in a list, and grab the guid. Why 
 force them to also specify the datastore_type when we can easily determine 
 what that is?

Fair enough.

 
 4. Additionally, in the current pull request to implement this it is 
 possible to avoid passing a version, but only if no more than one version 
 of the datastore_type exists in the database.
 
 I think instead the datastore_type row in the database should also have a 
 default_version_id property, that an operator could update to the most 
 recent version or whatever other criteria they wish to use, meaning the 
 call could become this simple:
 
 Since we have determined from this email thread that we have an active 
 status, and that > 1 version can be active, we have to think about the 
 precedence of active vs default. My question would be, if we have a 
 default_version_id and an active version, what do we choose on behalf of the 
 user? If there is > 1 active version and a user does not specify the 
 version, the api will error out, unless a default is defined. We also need a 
 default_type in the config so the existing APIs can maintain compatibility. 
 We can re-discuss this for v2 of the API.
 
 Imagine that an operator sets up Trove and only has one active version. They 
 then somehow fumble setting up the default_version, but think they succeeded 
 as the API works for users the way they expect anyway. Then they go to add 
 another active version and suddenly their users get error messages.
 
 Using only the default_version field of the datastore_type to define a 
 default would honor the principle of least surprise.

Are you saying you must have a default version defined to have > 1 active 
versions?




Re: [openstack-dev] [Trove] How users should specify a datastore type when creating an instance

2013-10-21 Thread Michael Basnight

On Oct 21, 2013, at 1:57 PM, Nikhil Manchanda wrote:

 
 The image approach works fine if Trove only supports deploying a single
 datastore type (mysql in your case). As soon as we support
 deploying more than 1 datastore type, Trove needs to have some knowledge
 of which guestagent manager classes to load. Hence the need
 for having a datastore type API.
 
 The argument for needing to keep track of the version is
 similar. Potentially a version increment -- especially of the major
 version -- may require for a different guestagent manager. And Trove
 needs to have this information.

It is also true that we don't want to define the _need_ to have custom images 
for the datastores. You can, quite easily, deploy mysql or redis on a vanilla 
image.
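
A sketch of the kind of knowledge Nikhil means, i.e. a datastore-to-manager
mapping the guest could consult (the class paths and loading mechanism here
are made up for illustration):

    import importlib

    DATASTORE_MANAGERS = {
        "mysql": "trove.guestagent.manager.mysql.Manager",
        "redis": "trove.guestagent.manager.redis.Manager",
        "mongodb": "trove.guestagent.manager.mongodb.Manager",
    }

    def load_manager(datastore_type):
        # Resolve the dotted path to a class and instantiate it.
        path = DATASTORE_MANAGERS[datastore_type]
        module, cls = path.rsplit(".", 1)
        return getattr(importlib.import_module(module), cls)()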




Re: [openstack-dev] Call for a clear COPYRIGHT-HOLDERS file in all OpenStack projects (and [trove] python-troveclient_0.1.4-1_amd64.changes REJECTED)

2013-10-21 Thread Michael Basnight

On Oct 21, 2013, at 1:55 PM, Mark McLoughlin wrote:

 On Tue, 2013-10-22 at 01:45 +0800, Thomas Goirand wrote:
 On 10/20/2013 09:00 PM, Jeremy Stanley wrote:
 On 2013-10-20 22:20:25 +1300 (+1300), Robert Collins wrote:
 [...]
 OTOH registering one's nominated copyright holder on the first
 patch to a repository is probably a sustainable overhead. And it's
 probably amenable to automation - a commit hook could do it locally
 and a check job can assert that it's done.
 
 I know the Foundation's got work underway to improve the affiliate
 map from the member database, so it might be possible to have some
 sort of automated job which proposes changes to a copyright holders
 list in each project by running a query with the author and date of
 each commit looking for new affiliations. That seems like it would
 be hacky, fragile and inaccurate, but probably still more reliable
 than expecting thousands of contributors to keep that information up
 to date when submitting patches?
 
 My request wasn't to go *THAT* far. The main problem I was facing was
 that troveclient has a few files stating that HP was the sole copyright
 holder, when it clearly was not (since I have discussed a bit with some of
 the dev team in Portland, IIRC some of them are from Rackspace...).
 
 Talk to the Trove developers and politely ask them whether the copyright
 notices in their code reflects what they see as the reality.
 
 I'm sure it would help them if you pointed out to them some significant
 chunks of code from the commit history which don't appear to have been
 written by a HP employee.
 
 Simply adding a Rackspace copyright notice to a file or two which has
 had a significant contribution by someone from Rackspace would be enough
 to resolve your concerns completely.
 
 i.e. if you spot an inaccuracy in the copyright headers, just make it
 easy for these people to fix it and I'm sure they will.

++ to this. I'd like to do what is best for OpenStack, but I don't want to make 
it impossible for the debian ftp masters to approve trove :) so if this is 
sufficient, I'll fix the copyright headers.





Re: [openstack-dev] Call for a clear COPYRIGHT-HOLDERS file in all OpenStack projects (and [trove] python-troveclient_0.1.4-1_amd64.changes REJECTED)

2013-10-21 Thread Michael Basnight

On Oct 21, 2013, at 5:09 PM, Monty Taylor wrote:
 On 10/21/2013 10:44 PM, Clint Byrum wrote:
 Excerpts from Mark McLoughlin's message of 2013-10-21 13:45:21 -0700:
 On Mon, 2013-10-21 at 10:28 -0700, Clint Byrum wrote:
 Excerpts from Robert Collins's message of 2013-10-20 02:25:43 -0700:
 On 20 October 2013 02:35, Monty Taylor mord...@inaugust.com wrote:
 
 However, even as a strong supporter of accurate license headers, I would
 like to know more about the FTP masters issue. I dialog with them, as
 folks who deal with this issue and its repercutions WAY more than any of
 us might be really nice.
 
  Debian takes its responsibilities under copyright law very seriously.
  The integrity of the debian/copyright metadata is checked on the first
  upload for a package (and basically not thereafter, which is either
  convenient or pragmatic or a massive hole in rigour, depending on your
  point of view). The goal is to ensure that a) the package is in the
 right repository in Debian (main vs nonfree) and b) that Debian can
 redistribute it and c) that downstreams of Debian who decide to use
 the package can confidently do so. Files with differing redistribution
 licenses that aren't captured in debian/copyright are an issue for c);
 files with different authors and the same redistribution licence
 aren't a problem for a/b/c *but* the rules the FTP masters enforce
 don't make that discrimination: the debian/copyright file needs to be
 a concordance of both copyright holders and copyright license.
 
 Personally, I think it should really only be a concordance of
 copyright licenses, and the holders shouldn't be mentioned, but thats
 not the current project view.
 
 
 The benefit to this is that by at least hunting down project leadership
 and getting an assertion and information about the copyright holder
 situation, a maintainer tends to improve clarity upstream.
 
 By improve clarity, you mean compile an accurate list of all
 copyright holders? Why is this useful information?
 
 Sure, we could also improve clarity by compiling a list of all the
 cities in the world where some OpenStack code has been authored ... but
 *why*?
 
 
 If you don't know who the copyright holders are, you cannot know that
 the license being granted is actually enforceable. What if the Trove
 developers just found some repo lying out in the world and slapped an
 Apache license on it? We aren't going to do an exhaustive investigation,
 but we want to know _who_ granted said license.
 
 You know I think you're great, but this argument doesn't hold up.
 
 If the trove developers found some repo in the world and slapped an
 apache license AND said:
 
 Copyright 2012 Michael Basnight
 
 in the header, and Thomas put that in debian/copyright, the Debian FTP
 masters would very happily accept it.

I endorse this message. 

But seriously, the Trove team will take some time tomorrow and add copyrights 
to the files appropriately. Then I'll be sure to ping zigo.





Re: [openstack-dev] [Trove] How users should specify a datastore type when creating an instance

2013-10-22 Thread Michael Basnight

On Oct 22, 2013, at 9:34 AM, Tim Simpson wrote:

  It's not intuitive to the User, if they are specifying a version alone.
  You don't boot a 'version' of something without specifying what that
  something is.  I would rather they only specified the datastore_type alone, and
  not have them specify a version at all.
 
 I agree for most users just selecting the datastore_type would be most 
 intutive. 
 
 However, when they specify a version it's going to be a GUID, which they could 
 only possibly know if they have recently enumerated all versions and thus 
 *know* the version is for the given type they want. In that case I don't 
 think most users would appreciate having to also pass the type - it would just 
 be redundant. So in that case why not make it optional?

I'm ok w/ making either optional if the criteria for selecting the _other_ are 
not ambiguous. 
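
To pin down "not ambiguous": the resolution could read something like the
following sketch, mirroring the precedence in this thread (explicit version,
then operator default, then a sole active version; names are hypothetical):

    def resolve_version(datastore, version_id=None):
        if version_id:                        # user passed a version GUID;
            return version_id                 # the type is implied by it
        if datastore.get("default_version"):  # operator-defined default
            return datastore["default_version"]
        active = datastore.get("active_versions", [])
        if len(active) == 1:                  # the only unambiguous fallback
            return active[0]
        raise ValueError("version required: >1 active version, no default")

    mysql = {"active_versions": ["5.1-guid", "5.5-guid"],
             "default_version": None}
    # resolve_version(mysql) -> raises: the user must specify a version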




Re: [openstack-dev] Call for a clear COPYRIGHT-HOLDERS file in all OpenStack projects (and [trove] python-troveclient_0.1.4-1_amd64.changes REJECTED)

2013-10-22 Thread Michael Basnight
Top posting cuz im a baller. We will get this fixed today. PS clint i like the 
way you think ;)

https://review.openstack.org/#/c/53176/

On Oct 22, 2013, at 9:21 AM, Clint Byrum wrote:

 Excerpts from Monty Taylor's message of 2013-10-21 17:09:41 -0700:
 
 On 10/21/2013 10:44 PM, Clint Byrum wrote:
 Excerpts from Mark McLoughlin's message of 2013-10-21 13:45:21 -0700:
 
 If you don't know who the copyright holders are, you cannot know that
 the license being granted is actually enforceable. What if the Trove
 developers just found some repo lying out in the world and slapped an
 Apache license on it? We aren't going to do an exhaustive investigation,
 but we want to know _who_ granted said license.
 
 You know I think you're great, but this argument doesn't hold up.
 
 If the trove developers found some repo in the world and slapped an
 apache license AND said:
 
 Copyright 2012 Michael Basnight
 
 in the header, and Thomas put that in debian/copyright, the Debian FTP
 masters would very happily accept it.
 
 
 The copyright header is a data point. Now somebody looking to vet the
 license situation can go and contact Michael Basnight, and look at the
 history of the code itself. They can validate that Michael Basnight was
 an early author, made announcements, isn't a habitual code stealer, etc.
 
 Is this correct? No, but it gives someone looking to do due diligence
 confirmation that Michael had the right to license the code.
 
 No headers, and no information anywhere just makes an investigation that
 much harder.
 
 So it is just a data point for auditing. The problem, which Robert
 Collins alluded to, is that nobody is actually auditing things this way.
 
 This is something to bring up in Debian. I think I'll work off list with
 Thomas to draft something for Debian which proposes a clarification
 or relaxation of the copyright holder interpretation of Debian policy
 currently adopted by the FTP masters.
 
 I think that authors should attribute their work, because I think that
 they should care. However, if they don't, that's fine. There is SOME
 attribution in the file, and that attribution itself is correct. HP did
 write some of the file. Rackspace also did but did not bother to claim
 having done so.
 
 debian/copyright should reflect what's in the files - it's what the
 project is stating through the mechanisms that we have available to us.
 I appreciate Thomas trying to be more precise here, but I think it's
 actually too far. If you think that there is a bug in the copyright
 header, you need to contact the project, via email, bug or patch, and
 fix it. At THAT point, you can fix the debian/copyright file.
 
 Until then, you need to declare to Debian what we are declaring to you.
 
 
 Indeed, this hasn't come up, presumably, because the other
 debian/copyright files have done just that. That is definitely the path of
 least resistance, and the one I have taken. This is not trivial either.
 As somebody who made a feeble attempt at documenting the copyright
 holders for MySQL (all of you reading this have no idea how hard Monty
 is cackling right now), I can say that it is basically pointless to do
 anything except automatically generate from existing sources and spot
 check.
 
 
 I'm not sure that was me, but I would object to conflating it, yes. They
 are not the same thing, but they are related. Only a copyright holder
 can grant a copyright license.
 
 Listing the holders in debian/copyright does not prove that the asserted
 holder is a valid holder. It only asserts that _someone_ has asserted
 that copyright.
 
 It means that, should someone sue you for copyright infringement, there
 is someone you can go to for clarification.
 
 
 That sounds pretty valuable to me. Imagine Debian has some big
 corporation sending them cease and desist letters and threats of
 copyright infringement lawsuits. It would be useful to be able to
 deflect that efficiently given their limited resources.
 
 Our CLA process for new contributors is documented here:
 
  
 https://wiki.openstack.org/wiki/How_To_Contribute#Contributors_License_Agreement
 
 The key thing for Debian to understand is that all OpenStack
 contributors agree to license their code under the terms of the Apache
 License. I don't see why a list of copyright holders would clarify the
 licensing situation any further.
 
 
 So Debian has a rule that statements like these need to be delivered to
 their users along with the end-user binaries (it relates to the social
 contract and the guidelines attached to the contract).
 
 https://review.openstack.org/static/cla.html
 
 Article 2 is probably sufficient to say that it only really matters that
 all of the copyrighted material came from people who signed the CLA,
 and that the Project Manager (OpenStack Foundation) grants the license
 on the code. I assume the other CLA's have the same basic type of
 license being granted to the OpenStack Foundation.
 
 So my recommendation stands, that we can clarify

Re: [openstack-dev] Call for a clear COPYRIGHT-HOLDERS file in all OpenStack projects (and [trove] python-troveclient_0.1.4-1_amd64.changes REJECTED)

2013-10-22 Thread Michael Basnight

On Oct 22, 2013, at 10:35 AM, Michael Basnight wrote:

 Top posting cuz im a baller. We will get this fixed today. PS clint i like 
 the way you think ;)
 
 https://review.openstack.org/#/c/53176/
 

Now that this is merged, and there is no stable/havana for clients, I've got a 
question. What do the package maintainers use for clients? The largest 
versioned tag? If so, I can push a new version of the client for packaging.




Re: [openstack-dev] [Trove] How users should specify a datastore type when creating an instance

2013-10-23 Thread Michael Basnight

On Oct 23, 2013, at 10:54 AM, Ilya Sviridov wrote:

 Besides the strategy of selecting the default behavior.
 
 Let me share with you my ideas of configuration management in Trove and how 
 the datastore concept can help with that.
 
 Initially there was only one database, and all configuration was in one config 
 file. 
 With the addition of new databases and the heat provisioning mechanism, we are 
 introducing more options. 
 
 Not only assigning a specific image_id, but custom packages, heat templates, 
 probably specific strategies for working with security groups.
 Such needs already exist because we have a lot of optional things in the config, 
 and any new feature is implemented with an eye toward already existing legacy 
 installations of Trove.
 
 What actually is datastore_type + datastore_version?
 
 It is the model which glues all the bricks together, so let us use it for all 
 the variable parts of the *service type* configuration.
 
 from current config file
 
 # Trove DNS
 trove_dns_support = False
 
 # Trove Security Groups for Instances
 trove_security_groups_support = True
 trove_security_groups_rules_support = False
 trove_security_group_rule_protocol = tcp
 trove_security_group_rule_port = 3306
 trove_security_group_rule_cidr = 0.0.0.0/0
 
 #guest_config = $pybasedir/etc/trove/trove-guestagent.conf.sample
 #cloudinit_location = /etc/trove/cloudinit
 
 block_device_mapping = vdb
 device_path = /dev/vdb
 mount_point = /var/lib/mysql
 
 All of those configurations can be moved to the datastore (some defined in heat 
 templates) and be manageable by the operator in case any default behavior 
 should be changed.
 
 The trove config then becomes specific to core functionality only.

It's fine for it to be in the config or the heat templates… I'm not sure it 
matters. What I would like to see is that things specific to each service be in 
their own config group in the configuration.

[mysql]
mount_point=/var/lib/mysql
…
[redis]
volume_support=False
…..

and so on.




Re: [openstack-dev] [Trove] About single entry point in trove-guestagent

2013-10-24 Thread Michael Basnight

On Oct 23, 2013, at 7:03 AM, Illia Khudoshyn wrote:

 Hi Denis, Michael, Vipul and all,
 
 I noticed a discussion in irc about adding a single entry point (sort of 
 'SuperManager') to the guestagent. Let me add my 5 cents.
 
 I agree that we should ultimately avoid code duplication. But from my 
 experience, only a very small part of the GA Manager can be considered really 
 duplicated code, namely Manager#prepare(). The 'backup' part may be another 
 candidate, but I'm not sure yet. It may still be rather service-type specific. All 
 the rest of the code is just delegating.

Yes, currently that is the case :)

 
 If we add a 'SuperManager', all we'll have is just more delegation:
 
 1. There is no use for dynamic loading of a corresponding Manager 
 implementation, because there will never be more than one service type 
 supported on a concrete guest. So the current implementation with a configurable 
 dictionary service_type -> ManagerImpl looks good to me.
 
 2. Neither does the 'SuperManager' provide a common interface for Manager -- due to 
 the dynamic nature of Python. As has been said, trove.guestagent.api.API 
 provides the list of methods with parameters we need to implement. What I'd like 
 to have is a description of types for those params as well as return types. 
 (Man, I miss static typing). All we can do for that is make sure we have 
 proper unittests with REAL values for params and returns.
 
 As for the common part of the Manager's code, I'd go for extracting that into 
 a mixin.

When we started talking about it, I mentioned to one of the rackspace trove 
developers privately that we might be able to solve this effectively w/ a mixin 
instead of more parent classes :) I would like to see an example of both of them. At 
the end of the day, all I care about is not having more copy pasta between 
manager impls as we grow the common stuff, even if that is just a method call 
in each guest for each bit of common code.
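
A tiny sketch of the mixin shape being discussed (method names hypothetical;
nothing here is the actual guestagent code):

    # Common guestagent bits live in a mixin rather than another parent class.
    class BackupMixin(object):
        def create_backup(self, context, backup_info):
            # common swift-upload plumbing would live here
            print("backing up %s" % backup_info["id"])

    class MySqlManager(BackupMixin, object):
        def prepare(self, context, packages):
            # datastore-specific setup stays in the concrete manager
            print("installing %s" % packages)

    MySqlManager().create_backup({}, {"id": "backup-1"})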

 
 Thanks for your attention.
 
 -- 
 Best regards,
 Illia Khudoshyn,
 Software Engineer, Mirantis, Inc.
  
 38, Lenina ave. Kharkov, Ukraine
 www.mirantis.com
 www.mirantis.ru
  
 Skype: gluke_work
 ikhudos...@mirantis.com





Re: [openstack-dev] Bad review patterns

2013-11-11 Thread Michael Basnight
 On Nov 11, 2013, at 11:27 AM, Joe Gordon joe.gord...@gmail.com wrote:
 
 
 On Sun, Nov 10, 2013 at 3:50 PM, Sean Dague s...@dague.net wrote:
 Not that I know of. I've considered writing my own gerrit front end
 mail service to do just that, because I agree, the current mail volume
 and granularity is not very good. If I manage to carve time on it,
 I'll do it on stackforge. Joe Gordon took a different approach and
 wrote a front end client to mark review threads read that are past.
 
 https://github.com/jogo/gerrit-gmail
  

Joe described this to me, and it sounded hawt. Thanks for sending this out. 

From what I gather it auto-marks email as read based on merges and abandons. 
Joe, are there any dependencies, workflows, or insights you can share wrt this?


[openstack-dev] [trove] weekly meeting time change

2013-11-13 Thread Michael Basnight
To accommodate a wider audience, Trove is changing our meeting time from 
Wednesday, 2000 UTC to Wednesday, 1800 UTC. As always, we are in 
#openstack-meeting-alt, and everyone is welcome to join!

How do I edit the iCal feed / google calendar on the main Meeting page?




Re: [openstack-dev] [Trove] Proposal to add Craig Vyvial to trove-core

2014-05-06 Thread Michael Basnight

 On May 6, 2014, at 2:31 AM, Nikhil Manchanda nik...@manchanda.me wrote:
 
 
 Hello folks:
 
 I'm proposing to add Craig Vyvial (cp16net) to trove-core.
 
 Craig has been working with Trove for a while now. He has been a
 consistently active reviewer, and has provided insightful comments on
 numerous reviews. He has submitted quality code to multiple features in
 Trove, and most recently drove the implementation of configuration
 groups in Icehouse.
 
 https://review.openstack.org/#/q/reviewer:%22Craig+Vyvial%22,n,z
 https://review.openstack.org/#/q/owner:%22Craig+Vyvial%22,n,z
 
 Please respond with +1/-1, or any further comments.
 
 Thanks,
 Nikhil

Yes plz. +1. 



Re: [openstack-dev] trove and heat integration status

2013-07-02 Thread Michael Basnight
On Jul 2, 2013, at 8:17 PM, Clint Byrum cl...@fewbar.com wrote:

 Excerpts from Michael Basnight's message of 2013-07-02 19:04:01 -0700:
 On Jul 2, 2013, at 3:52 PM, Clint Byrum wrote:
 
 Excerpts from Michael Basnight's message of 2013-07-02 15:17:09 -0700:
 Howdy,
 
  one of the TC requests for integration of trove was to integrate heat. 
  While this is a small task for single instance installations, when we get 
  into clustering it seems a bit more painful. I'd like to submit the 
  following as a place to start the discussion for why we would/wouldn't 
  integrate heat (now). This is, in NO WAY, to say we will not integrate 
  heat. It's just a matter of timing and requirements for our 'soon to be' 
  cluster api. I am, however, targeting getting trove to work in an rpm 
  environment, as it is tied to apt currently.
 
 Hi Michael. I do think that it is very cool that Trove will be making
 use of Heat for cluster configuration.
 
 I know it really fits the bill!
 
 
 
 1) Companies who are looking at trove are not yet looking at heat, and a 
 hard dependency might stifle growth of the product initially
   • CERN
 
 I'm sure these users don't explicitly want MySQL (or whatever DB
 you use) and RabbitMQ (or whatever RPC you use) either, but they
 are plumbing, and thus things that need to be deployed in the larger
 architecture.
 
  Well sure, but I also don't want to stop trove adoption because a company 
  has not investigated heat. Rabbit and the DB are shared resources between 
  all OpenStack services. Heat and Trove are not.
 
 I do understand that. Heat has some growing up to do before it is in the
 same category as those other pieces. Please keep us in the loop where
 you need features and/or bug fixes for Heat.
 
 
 2) homogeneous LaunchConfiguration
   • a database cluster is heterogeneous
    • Our cluster configuration will need to specify different sized slaves, 
  and allow a customer to upgrade a single slave's configuration
   • heat said if this is something that has a good use case, they could 
 potentially make it happen (not sure of timeframe)
 
 There's no requirement that you use AWS::EC2::AutoScalingGroup or
 OS::Heat::InstanceGroup. In fact I find them rather cumbersome and
 limited. Since all Heat templates are just data structures (expressed
 as yaml or json) you can just maintain an array of instances of the size
 that you want.
 
 Oh good!
 
 
 3) have to modify template to scale out
    • This is doable but will require hacking a template in code and pushing 
 that template
   • I assume removing a slave will require the same finagling of the 
 template
   • I understand that a better version of this is coming (not sure of 
 timeframe)
 
 The word template makes it sound like it is a text only thing. It is
 a data structure, and as such, it is quite easy to modify and maintain
 in code.
 ...
 I hope all of that makes some sense. Eventually yes, resizable arrays
 of servers will be in the new format, HOT, but for now, the CFN method
 is still useful as you get signals and dependency graph management.
 
  It does, with one caveat. Can I say slave1 has a flavor of 512m and 
  slave2 has a flavor of 2048m? I didn't see that in the example. It's really 
  useful for a reporting slave to be smaller than a master, and for a 
  particular slave to be larger due to any sort of requirement that I can't 
  necessarily dictate!
 
 Of course flavor can differ per-server. That is kind of my point, the cfn
 template format is fairly low level, making Heat into sort of a really
 smart client library for all of OpenStack. So you can really maintain
 the list of slaves however you want. You could have ReportingSlave0001
 and QuerySlave0002 or just use UUID's for them and give them names
 in Metadata.
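
A minimal sketch of maintaining such a heterogeneous cluster as a template data
structure (CFN-style resource names; the image and flavor values here are
illustrative):

    # Build the template as a plain Python data structure; each slave
    # carries its own flavor, so the cluster need not be homogeneous.
    template = {
        "AWSTemplateFormatVersion": "2010-09-09",
        "Resources": {},
    }

    slaves = {
        "ReportingSlave0001": "m1.small",   # smaller reporting slave
        "QuerySlave0002": "m1.large",       # larger query slave
    }

    for name, flavor in slaves.items():
        template["Resources"][name] = {
            "Type": "AWS::EC2::Instance",
            "Properties": {"ImageId": "trove-guest", "InstanceType": flavor},
        }

    # Scaling out is just appending another entry and re-pushing the template.
    template["Resources"]["QuerySlave0003"] = {
        "Type": "AWS::EC2::Instance",
        "Properties": {"ImageId": "trove-guest", "InstanceType": "m1.large"},
    }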
 

Great! <3. Thx again for shedding some light!!
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Trove] weekly meeting today

2013-07-03 Thread Michael Basnight
Same bat time, same bat channel. 2000 UTC in #openstack-meeting-alt

https://wiki.openstack.org/wiki/Meetings/TroveMeeting

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] common codes

2013-07-15 Thread Michael Basnight
On Jul 15, 2013, at 7:22 PM, Gareth academicgar...@gmail.com wrote:

 Hi, all
 
 There is some common code in most projects, such as openstack/common, db, and 
 some others (?). I know a good way is to use 'import oslo' instead of copying 
 that code here and there. Now we already have the oslo and trove projects, but 
 how and when do we handle the old code? Remove it in the next major 
 release?

From the trove perspective we are trying to keep our Oslo code updated as often 
as possible. Once the code leaves incubator status (the code copy you mention), 
we will adopt the individual libraries. I believe oslo.messaging is the next on 
our list. 

As for timeline, we try to stay current with one caveat. We stop pulling large 
updates in as milestone deadlines approach. So pull in updates early in the 
milestone, so that they are there for the milestone, and eventually the 
release. We have a review inflight waiting for the h2 cutoff so we can merge it 
[1] that has the latest oslo. This approach may vary somewhat from other 
projects, so ill let the PTLs chime in :)

Is there specific code you are referring to? 

[1] https://review.openstack.org/#/c/36140/
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Program Proposal: Trove

2013-07-16 Thread Michael Basnight
Official Title: OpenStack Database as a Service
Initial PTL: Michael Basnight mbasni...@gmail.com

Mission Statement: To provide scalable and reliable Cloud Database as a Service 
functionality for both relational and non-relational database engines, and to 
continue to improve its fully-featured and extensible open source framework.

GitHub: https://github.com/openstack/trove
LaunchPad: https://launchpad.net/Trove
Program Wiki: https://wiki.openstack.org/wiki/Trove
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Cheetah vs Jinja

2013-07-16 Thread Michael Basnight
Also, jinja2 is in requirements. We have no specific requirements on a 
particular version so feel free to pin it to a specific. We (trove) use it to 
generate config templates.

https://github.com/openstack/requirements/commit/96f38365ce94d2135f7744c93bae0ce92a747195
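
As a minimal sketch of that usage (the template text and values here are
illustrative, not trove's actual config files):

    from jinja2 import Template

    tmpl = Template(
        "[mysqld]\n"
        "max_connections = {{ max_connections }}\n"
        "innodb_buffer_pool_size = {{ buffer_pool_size }}\n")

    # Render a per-flavor config file.
    print(tmpl.render(max_connections=200, buffer_pool_size="512M"))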

On Jul 16, 2013, at 1:10 PM, Nachi Ueno wrote:

 Hi folks
 
 Jinja2 looks have +3.
 This is the winner?
 
 # My code can be done by Jinja2 also.
 
 so if we choose Jinja2, what's version range is needed?
 
 Thanks
 Nachi
 
 
 
 2013/7/16 Matt Dietz matt.di...@rackspace.com:
 I'll second the jinja2 recommendation. I also use it with Pyramid, and
 find it non-obtrusive to write and easy to understand.
 
 -Original Message-
 From: Sandy Walsh sandy.wa...@rackspace.com
 Reply-To: OpenStack Development Mailing List
 openstack-dev@lists.openstack.org
 Date: Tuesday, July 16, 2013 11:34 AM
 To: OpenStack Development Mailing List openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] Cheetah vs Jinja
 
 I've used jinja2 on many projects ... it's always been solid.
 
 -S
 
 
 From: Solly Ross [sr...@redhat.com]
 Sent: Tuesday, July 16, 2013 10:41 AM
 To: OpenStack Development Mailing List
 Subject: [openstack-dev] [Change I30b127d6] Cheetah vs Jinja
 
 (This email is with regards to https://review.openstack.org/#/c/36316/)
 
 Hello All,
 
 I have been implementing the Guru Meditation Report blueprint
 (https://blueprints.launchpad.net/oslo/+spec/guru-meditation-report), and
 the question of a templating engine was raised.  Currently, my version of
 the code includes the Jinja2 templating engine (http://jinja.pocoo.org/),
 which is modeled after the Django templating engine (it was designed to
 be an implementation of the Django templating engine without requiring
 the use of Django), which is used in Horizon.  Apparently, the Cheetah
 templating engine (http://www.cheetahtemplate.org/) is used in a couple
 places in Nova.
 
 IMO, the Jinja template language produces much more readable templates,
 and I think is the better choice for inclusion in the Report framework.
 It also shares a common format with Django (making it slightly easier to
 write for people coming from that area), and is also similar to template
 engines for other languages. What does everyone else think?
 
 Best Regards,
 Solly Ross
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Program Proposal: Trove

2013-07-16 Thread Michael Basnight

On Jul 16, 2013, at 7:45 PM, Haomai Wang wrote:
 
 On 2013-7-17, at 4:42 AM, Michael Basnight mbasni...@gmail.com wrote:
 
 On Jul 16, 2013, at 12:49 PM, Mark McLoughlin wrote:
 
 Hey,
 
 On Tue, 2013-07-16 at 10:37 -0700, Michael Basnight wrote:
 Official Title: OpenStack Database as a Service
 Initial PTL: Michael Basnight mbasni...@gmail.com
 
 Mission Statement: To provide scalable and reliable Cloud Database as
 a Service functionality for both relational and non-relational
 database engines, and to continue to improve its fully-featured and
 extensible open source framework.
 
 Seems fine to me, but I'd see adding non-relational support as an
 expansion of Trove's scope as approved by the TC.
 
 I know we discussed whether it should be in scope from the beginning,
 but I thought we didn't want to rule out the possibility of an entirely new
 team of folks coming up with a NoSQL as a Service project.
 
 Cant disagree with this, because of the initial TC ruling. FWIW we are 
 working on some NoSQL stuff _in_ trove at present. Maybe i should bring it 
 up at the next TC meeting? Ive done a redis POC in the past and can show 
 the code for that. It was before the rename and has a small amount of bitrot 
 but its something i can definitely show to the group.
 +1, I'm interested in Trove and I want to do some work for it. NoSQL is 
 easier to control than a SQL database and I want to join in to implement 
 LevelDB.

Great! Feel free to join in. The API is extensible enough at present to allow 
you to implement LevelDB, or any other NoSQL data store. Its _almost_ as simple 
as adding a custom guest manager [1], and making sure you edit the api config 
to specify service_type=name_of_service_type. There are a few more small 
gotchas, but thats a good 80% of the implementation. You can also create 
strategies to define how you backup/restore 'service_type', but i wont get into 
that on list. We are also working on a cluster api to allow cluster 
instrumentation, which i will be sending out in the next few weeks for the 
community to scrutinize.

Find me on irc, #openstack-trove, user hub_cap for more information. 

[1] https://github.com/openstack/trove/tree/master/trove/guestagent/manager
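
For a rough idea of the shape of such a guest manager, here is a minimal
sketch; the module name is hypothetical and the method signatures are modeled
on the mysql manager in [1]:

    # e.g. trove/guestagent/manager/leveldb.py (hypothetical)
    from trove.openstack.common import periodic_task


    class Manager(periodic_task.PeriodicTasks):
        """Guest-side handler for a hypothetical LevelDB datastore."""

        def prepare(self, context, databases, memory_mb, users,
                    device_path=None, mount_point=None):
            # Install and configure the datastore on first boot, then
            # report the instance ACTIVE back to the trove api.
            pass

        def restart(self, context):
            # Bounce the datastore service.
            pass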
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Havana-2 milestone candidates available

2013-07-17 Thread Michael Basnight
On Jul 17, 2013, at 7:40 AM, Thierry Carrez wrote:

 Milestone-proposed branches were created for Keystone, Glance, Nova,
 Horizon, Neutron, Cinder, Ceilometer and Heat in preparation for the
 havana-2 milestone publication Thursday.
 
 You can find candidate tarballs at:
 http://tarballs.openstack.org/keystone/keystone-milestone-proposed.tar.gz
 http://tarballs.openstack.org/glance/glance-milestone-proposed.tar.gz
 http://tarballs.openstack.org/nova/nova-milestone-proposed.tar.gz
 http://tarballs.openstack.org/horizon/horizon-milestone-proposed.tar.gz
 http://tarballs.openstack.org/neutron/neutron-milestone-proposed.tar.gz
 http://tarballs.openstack.org/cinder/cinder-milestone-proposed.tar.gz
 http://tarballs.openstack.org/ceilometer/ceilometer-milestone-proposed.tar.gz
 http://tarballs.openstack.org/heat/heat-milestone-proposed.tar.gz
 
 You can also access the milestone-proposed branches directly at:
 https://github.com/openstack/keystone/tree/milestone-proposed
 https://github.com/openstack/glance/tree/milestone-proposed
 https://github.com/openstack/nova/tree/milestone-proposed
 https://github.com/openstack/horizon/tree/milestone-proposed
 https://github.com/openstack/neutron/tree/milestone-proposed
 https://github.com/openstack/cinder/tree/milestone-proposed
 https://github.com/openstack/ceilometer/tree/milestone-proposed
 https://github.com/openstack/heat/tree/milestone-proposed

Id also like to announce our first trove database milestone-proposed target. We 
will be following the same cycle as the other projects for h2.

You can find a candidate tarball at:
http://tarballs.openstack.org/trove/trove-milestone-proposed.tar.gz

You can also access the milestone-proposed branch directly at:
https://github.com/openstack/trove/tree/milestone-proposed

Thanks thierry for making it happen!!


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Trove] Weekly meeting today

2013-07-17 Thread Michael Basnight
Same bat time, same bat channel. Wed, 2000UTC in #openstack-meeting-alt. We 
will be discussing clustering api if anyone is interested in following along.

https://wiki.openstack.org/wiki/Meetings/TroveMeeting
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] amending Trove incubation to support NRDB

2013-07-29 Thread Michael Basnight
Rackspace is interested in creating a redis implementation for Trove, and 
Haomai Wang is looking to leverage Trove for leveldb integration. Ive done a 
proof of concept for redis and it was ~200 lines of guest impl code and I had 
to make one small change to the core create API, but one that will be the new 
default for creating instances. It was adding a return of a root password. the 
other differences were the /users and /databases extensions for mysql were not 
working (for obvious reasons). The reason NRDB was not originally part of Trove 
was a decision that there was nothing to show that it has a valid impl without 
substantial differences to the API/core system [1]. See around 20:35:42. 
Originally we had petitioned for a RDDB / NRDB system [2].

The path for it is to basically add a redis.py to the guest impl and to 
instruct the api that redis is the default (config file). Then the 
api/functionality behaves the same. Feel free to ask questions on list before 
the tc meeting!

[1] http://eavesdrop.openstack.org/meetings/tc/2013/tc.2013-04-30-20.02.log.html
[2] https://wiki.openstack.org/wiki/ReddwarfAppliesForIncubation#Summary:
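
For reference, the config side of that path is tiny. A sketch, using the
service_type option mentioned above (the value is illustrative):

    # trove api config (illustrative)
    [DEFAULT]
    # guest manager implementation new instances should use
    service_type = redis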
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] amending Trove incubation to support NRDB

2013-08-05 Thread Michael Basnight
As per the request from the TC, here is a work in progress review for the redis 
impl. its a POC. Plz understand that its more about seeing how this affects the 
trove codebase than scrutinizing why i did X or Y in the redis impl, or whether 
i should include config value Z. <3

https://review.openstack.org/#/c/40239/ 


On Jul 29, 2013, at 4:02 PM, Jay Pipes wrote:

 On 07/29/2013 05:12 PM, Michael Basnight wrote:
 Rackspace is interested in creating a redis implementation for Trove, as 
 well as haomai wang is looking to leverage Trove for leveldb integration. 
 Ive done a proof of concept for redis and it was ~200 lines of guest impl 
 code and I had to make one small change to the core create API, but one that 
 will be the new default for creating instances. It was adding a return of a 
 root password. the other differences were the /users and /databases 
 extensions for mysql were not working (for obvious reasons). The reason NRDB 
 was not originally part of Trove was a decision that there was nothing to 
 show that it has a valid impl without substantial differences to the 
 API/core system [1].
 
 As you allude to above, it's all about the API :) As long as the API does not 
 become either too convoluted from needing to conform to the myriad KVS/NRDB 
 standards or too generic as to be detrimental to relational databases, I'm 
 cool with it.
 
  See around 20:35:42. Originally we had petitioned for a RDDB / NRDB system 
  [2].
 
 The path for it is to basically add a redis.py to the guest impl and to 
 instruct the api that redis is the default (config file). Then the 
 api/functionality behaves the same. Feel free to ask questions on list 
 before the tc meeting!
 
 All good in my opinion.
 
 Best,
 -jay
 
 [1] 
 http://eavesdrop.openstack.org/meetings/tc/2013/tc.2013-04-30-20.02.log.html
 [2] https://wiki.openstack.org/wiki/ReddwarfAppliesForIncubation#Summary:
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] can't install devstack - nova-api did not start

2013-08-10 Thread Michael Basnight
Maybe sudo the 2nd cmd?

Sent from my digital shackles

On Aug 10, 2013, at 10:57 AM, Roman Gorodeckij ho...@holms.lt wrote:

 ok.. nothing changes..
 
 stack@hp:~/devstack$ sudo pip install -I oslo.config==1.1.1
 Downloading/unpacking oslo.config==1.1.1
 Downloading oslo.config-1.1.1.tar.gz (75kB): 75kB downloaded
 Running setup.py egg_info for package oslo.config
 
 warning: no previously-included files found matching '.gitignore'
 warning: no previously-included files found matching '.gitreview'
 Installing collected packages: oslo.config
 Found existing installation: oslo.config 1.2.0.a11.gc85c8e6.a11.gc85c8e6
 Can't uninstall 'oslo.config'. No files were found to uninstall.
 Running setup.py install for oslo.config
 Skipping installation of 
 /usr/local/lib/python2.7/dist-packages/oslo/__init__.py (namespace package)
 
 warning: no previously-included files found matching '.gitignore'
 warning: no previously-included files found matching '.gitreview'
 Installing 
 /usr/local/lib/python2.7/dist-packages/oslo.config-1.1.1-py2.7-nspkg.pth
 Successfully installed oslo.config
 Cleaning up…
 
 
 stack@hp:~/devstack$  pip uninstall oslo.config
 Can't uninstall 'oslo.config'. No files were found to uninstall.
 
 2013-08-10 12:10:55 + echo 'Waiting for nova-api to start...'
 2013-08-10 12:10:55 Waiting for nova-api to start...
 2013-08-10 12:10:55 + wait_for_service 60 http://192.168.1.6:8774
 2013-08-10 12:10:55 + local timeout=60
 2013-08-10 12:10:55 + local url=http://192.168.1.6:8774
 2013-08-10 12:10:55 + timeout 60 sh -c 'while ! http_proxy= https_proxy= curl 
 -s http://192.168.1.6:8774 >/dev/null; do sleep 1; done'
 2013-08-10 12:11:55 + die 698 'nova-api did not start'
 2013-08-10 12:11:55 + local exitcode=0
 stack@hp:~/devstack$ 2013-08-10 12:11:55 + set +o xtrace
 2013-08-10 12:11:55 [ERROR] ./stack.sh:698 nova-api did not start
 
 stack@hp:~/devstack$ cat /tmp/devstack/log//screen-n-api.log
 t/stack/status/stack/n-api.failurenova  /usr/local/bin/nova-api || touch 
 /op
 Traceback (most recent call last):
 File "/usr/local/bin/nova-api", line 6, in <module>
 from nova.cmd.api import main
 File "/opt/stack/nova/nova/cmd/api.py", line 29, in <module>
 from nova import config
 File "/opt/stack/nova/nova/config.py", line 22, in <module>
 from nova.openstack.common.db.sqlalchemy import session as db_session
 File "/opt/stack/nova/nova/openstack/common/db/sqlalchemy/session.py", line 
 279, in <module>
 deprecated_opts=[cfg.DeprecatedOpt('sql_connection',
 AttributeError: 'module' object has no attribute 'DeprecatedOpt'
 
 
 
 On Aug 10, 2013, at 5:07 PM, Sean Dague s...@dague.net wrote:
 
 Silly pip, trix are for kids.
 
 Ok, well:
 
 sudo pip install -I oslo.config==1.1.1
 
 then pip uninstall oslo.config
 
 On 08/09/2013 06:58 PM, Roman Gorodeckij wrote:
 stack@hp:~/devstack$ sudo pip install oslo.config
 Requirement already satisfied (use --upgrade to upgrade): oslo.config in 
 /opt/stack/oslo.config
 Requirement already satisfied (use --upgrade to upgrade): six in 
 /usr/local/lib/python2.7/dist-packages (from oslo.config)
 Cleaning up...
 stack@hp:~/devstack$ sudo pip uninstall oslo.config
 Can't uninstall 'oslo.config'. No files were found to uninstall.
 stack@hp:~/devstack$
 
 stack@hp:~/devstack$ cat /tmp/devstack/log//screen-n-api.log
 nova  /usr/local/bin/nova-api || touch /opt/stack/status/stack/n-api.failure
 Traceback (most recent call last):
 File "/usr/local/bin/nova-api", line 6, in <module>
 from nova.cmd.api import main
 File "/opt/stack/nova/nova/cmd/api.py", line 29, in <module>
 from nova import config
 File "/opt/stack/nova/nova/config.py", line 22, in <module>
 from nova.openstack.common.db.sqlalchemy import session as db_session
 File "/opt/stack/nova/nova/openstack/common/db/sqlalchemy/session.py", line 
 279, in <module>
 deprecated_opts=[cfg.DeprecatedOpt('sql_connection',
 AttributeError: 'module' object has no attribute 'DeprecatedOpt'
 
 nothing changed.
 
 On Aug 9, 2013, at 6:11 PM, Sean Dague s...@dague.net wrote:
 
 This should be addressed by the latest devstack, however because we moved 
 to oslo.config out of git, some install environments might still have 
 oslo.config 1.1.0 somewhere, that pip no longer sees (so can't uninstall)
 
 sudo pip install oslo.config
 sudo pip uninstall oslo.config
 
 rerun devstack, see if it works.
 
-Sean
 
 On 08/09/2013 09:14 AM, Roman Gorodeckij wrote:
 Tried to install devstack to dedicated server, ip's are defined.
 
 Here's the output:
 
 2013-08-09 09:06:28 ++ echo -ne '\015'
 
 2013-08-09 09:06:28 + NL=$'\r'
 2013-08-09 09:06:28 + screen -S stack -p n-api -X stuff 'cd /opt/stack/nova && 
 /usr/local/bin/nova-api || touch /opt/stack/status/stack/n-api.failure'
 2013-08-09 09:06:28 + echo 'Waiting for nova-api to start...'
 2013-08-09 09:06:28 Waiting for nova-api to start...
 2013-08-09 09:06:28 + wait_for_service 60 http://192.168.1.6:8774
 2013-08-09 09:06:28 + local timeout=60
 2013-08-09 09:06:28 + local url=http://192.168.1.6:8774
 2013-08-09 09:06:28 + 

Re: [openstack-dev] Proposal oslo.db lib

2013-08-16 Thread Michael Basnight
On Aug 16, 2013, at 6:52 AM, Boris Pavlovic bo...@pavlovic.me wrote:

 Hi all, 
 
 We (OpenStack contributors) done a really huge and great work around DB code 
 in Grizzly and Havana to unify it, put all common parts into oslo-incubator, 
 fix bugs, improve handling of sqla exceptions, provide unique keys, and to 
 use  this code in different projects instead of custom implementations. (well 
 done!)
 
 oslo-incubator db code is already used by: Nova, Neutron, Cinder, Ironic, 
 Ceilometer. 
 
 In this moment we finished work around Glance: 
 https://review.openstack.org/#/c/36207/
 
 And working around Heat and Keystone.
 
 So almost all projects use this code (or planing to use it)
 
 Probably it is the right time to start work around moving oslo.db code to 
 separated lib.
 
 We (Roman, Viktor and me) will be glad to help to make oslo.db lib:
 
 E.g. Here are two drafts:
 1) oslo.db lib code: https://github.com/malor/oslo.db
 2) And here is this lib in action: https://review.openstack.org/#/c/42159/
 
 
 Thoughts? 
 

Excellent. Ill file a blueprint for Trove today! We need to upgrade to this.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Trove]

2013-09-06 Thread Michael Basnight
On Sep 6, 2013, at 8:30 AM, Giuseppe Galeota giuseppegale...@gmail.com wrote:

 Dear all,
 I think that there is poor documentation about the Trove architecture and 
 operation.

Thanks for your interest in trove. I agree. As ptl I will devote time (now that 
the h3 madness has slowed for us) to doc'ing better information for you. 

 
 1) Can you link me to a guide to the Trove architecture, in order to better 
 understand how database instances are created by Trove's components?

Give me a bit, I'm pretty sure we had some reddwarf (pre rename) docs somewhere 
on the wiki. Ill try to find/reorg them today. 

 
 Thank you very much,
 Giuseppe
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Trove] Modify source code for Postgres engine

2013-09-09 Thread Michael Basnight

On Sep 6, 2013, at 8:19 AM, Giuseppe Galeota wrote:

 Dear all,
 this is a technical question. I would like to modify the source code of Trove 
 in order to create database instances using the Postgres engine. I think it 
 is necessary to modify the create method in the InstanceController class. 
 Is that right? What other things should I modify?

Hi Giuseppe,

When I proposed trove to be a openstack incubated project, i created a postgres 
work in progress. [1] It may help answer some of your questions. Many things 
have changed since then ( i think its been 3+ mo now), but it will show you 
where to start.

[1] https://review.openstack.org/#/c/28328/


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [savanna] Program name and Mission statement

2013-09-11 Thread Michael Basnight
On Sep 11, 2013, at 8:42 AM, Andrei Savu wrote:

 +1 
 
 I guess this will also clarify how Savanna relates to other projects like 
 OpenStack Trove. 

Yes the conversations around Trove+Savanna will be fun at the summit! I see 
overlap between our missions ;)

 
 -- Andrei Savu
 
 On Wed, Sep 11, 2013 at 5:16 PM, Mike Spreitzer mspre...@us.ibm.com wrote:
  To provide a simple, reliable and repeatable mechanism by which to 
  deploy Hadoop and related Big Data projects, including management, 
  monitoring and processing mechanisms driving further adoption of OpenStack. 
 
 That sounds like it is at about the right level of specificity. 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] TC Meeting / Savanna Incubation Follow-Up

2013-09-12 Thread Michael Basnight
On Sep 12, 2013, at 2:39 AM, Thierry Carrez wrote:

 Sergey Lukjanov wrote:
 
 [...]
 As you can see, resources provisioning is just one of the features and the 
 implementation details are not critical for overall architecture. It 
 performs only the first step of the cluster setup. We’ve been considering 
 Heat for a while, but ended up direct API calls in favor of speed and 
 simplicity. Going forward Heat integration will be done by implementing 
 extension mechanism [3] and [4] as part of Icehouse release.
 
 The next part, Hadoop cluster configuration, already extensible and we have 
 several plugins - Vanilla, Hortonworks Data Platform and Cloudera plugin 
 started too. This allow to unify management of different Hadoop 
 distributions under single control plane. The plugins are responsible for 
 correct Hadoop ecosystem configuration at already provisioned resources and 
 use different Hadoop management tools like Ambari to setup and configure all 
 cluster  services, so, there are no actual provisioning configs on Savanna 
 side in this case. Savanna and its plugins encapsulate the knowledge of 
 Hadoop internals and default configuration for Hadoop services.
 
 My main gripe with Savanna is that it combines (in its upcoming release)
 what sounds like to me two very different services: Hadoop cluster
 provisioning service (like what Trove does for databases) and a
 MapReduce+ data API service (like what Marconi does for queues).
 
 Making it part of the same project (rather than two separate projects,
 potentially sharing the same program) make discussions about shifting
 some of its clustering ability to another library/project more complex
 than they should be (see below).
 
 Could you explain the benefit of having them within the same service,
 rather than two services with one consuming the other ?

And for the record, i dont think that Trove is the perfect fit for it today. We 
are still working on a clustering API. But when we create it, i would love the 
Savanna team's input, so we can try to make a pluggable API thats usable for 
people who want MySQL or Cassandra or even Hadoop. Im less a fan of a 
clustering library, because in the end, we will both have API calls like POST 
/clusters, GET /clusters, and there will be API duplication between the 
projects.

 
 The next topic is “Cluster API”.
 
 The concern that was raised is how to extract general clustering 
 functionality to the common library. Cluster provisioning and management 
 topic currently relevant for a number of projects within OpenStack 
 ecosystem: Savanna, Trove, TripleO, Heat, Taskflow.
 
 Still each of the projects has their own understanding of what the cluster 
 provisioning is. The idea of extracting common functionality sounds 
 reasonable, but details still need to be worked out. 
 
 I’ll try to highlight Savanna team current perspective on this question. 
 Notion of “Cluster management” in my perspective has several levels:
 1. Resources provisioning and configuration (like instances, networks, 
 storages). Heat is the main tool with possibly additional support from 
 underlying services. For example, instance grouping API extension [5] in 
 Nova would be very useful. 
  2. Distributed communication/task execution. There is a project in the 
  OpenStack ecosystem with the mission to provide a framework for distributed 
  task execution - TaskFlow [6]. It was started quite recently. In Savanna we 
  are really looking forward to using more and more of its functionality in 
  the I and J cycles as TaskFlow itself gets more mature.
  3. Higher level clustering - management of the actual services working on 
  top of the infrastructure. For example, in Savanna configuring HDFS data 
  nodes, or in Trove setting up a MySQL cluster with Percona or Galera. These 
  operations are typically very specific to the project domain. As for Savanna 
  specifically, we benefit heavily from knowledge of Hadoop internals to 
  deploy and configure it properly.
 
  The overall conclusion seems to be that it makes sense to enhance Heat 
  capabilities and invest in Taskflow development, leaving domain-specific 
  operations to the individual projects.
 
 The thing we'd need to clarify (and the incubation period would be used
 to achieve that) is how to reuse as much as possible between the various
 cluster provisioning projects (Trove, the cluster side of Savanna, and
 possibly future projects). Solution can be to create a library used by
 Trove and Savanna, to extend Heat, to make Trove the clustering thing
 beyond just databases...
 
 One way of making sure smart and non-partisan decisions are taken in
 that area would be to make Trove and Savanna part of the same program,
 or make the clustering part of Savanna part of the same program as
 Trove, while the data API part of Savanna could live separately (hence
 my question about two different projects vs. one project above).

Trove is not, nor will be, a data API. Id like to keep Savanna in its own 

Re: [openstack-dev] Cookiecutter repo for ease in making new projects

2013-09-12 Thread Michael Basnight


Sent from my digital shackles

On Sep 12, 2013, at 10:20 PM, John Griffith john.griff...@solidfire.com wrote:

 
 
 
 On Thu, Sep 12, 2013 at 11:08 PM, Monty Taylor mord...@inaugust.com wrote:
 Hey everybody!
 
 You know how, when you want to make a new project, you basically take an
 existing one, like nova, copy files, and then start deleting? Nobody
 likes that.
 
 Recently, cookiecutter came to my attention, so we put together a
 cookiecutter repo for openstack projects to make creating a new one easier:
 
 https://git.openstack.org/cgit/openstack-dev/cookiecutter
 
 It's pretty easy to use. First, install cookiecutter:
 
 sudo pip install cookiecutter
 
 Next, tell cookiecutter you'd like to create a new project based on the
 openstack template:
 
 cookiecutter git://git.openstack.org/openstack-dev/cookiecutter.git
 
 Cookiecutter will then ask you three questions:
 
 a) What repo groups should it go in? (eg. openstack, openstack-infra,
 stackforge)
 b) What is the name of the repo? (eg. mynewproject)
 c) What is the project's short description? (eg. OpenStack Wordpress as
 a Service)
 
 And boom, you'll have a directory all set up with your new project ready
 and waiting for a git init ; git add . ; git commit
 
 Hope this helps folks out - and we'll try to keep it up to date with
 things that become best practices - patches welcome on that front.
 
 Monty
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 Nice!!  Just took it for a spin, worked great!
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] TC Meeting / Savanna Incubation Follow-Up

2013-09-13 Thread Michael Basnight
On Sep 13, 2013, at 6:56 AM, Alexander Kuznetsov wrote:
 On Thu, Sep 12, 2013 at 7:30 PM, Michael Basnight mbasni...@gmail.com wrote:
 On Sep 12, 2013, at 2:39 AM, Thierry Carrez wrote:
 
  Sergey Lukjanov wrote:
 
  [...]
  As you can see, resources provisioning is just one of the features and the 
  implementation details are not critical for overall architecture. It 
  performs only the first step of the cluster setup. We’ve been considering 
  Heat for a while, but ended up direct API calls in favor of speed and 
  simplicity. Going forward Heat integration will be done by implementing 
  extension mechanism [3] and [4] as part of Icehouse release.
 
  The next part, Hadoop cluster configuration, already extensible and we 
  have several plugins - Vanilla, Hortonworks Data Platform and Cloudera 
  plugin started too. This allow to unify management of different Hadoop 
  distributions under single control plane. The plugins are responsible for 
  correct Hadoop ecosystem configuration at already provisioned resources 
  and use different Hadoop management tools like Ambari to setup and 
  configure all cluster  services, so, there are no actual provisioning 
  configs on Savanna side in this case. Savanna and its plugins encapsulate 
  the knowledge of Hadoop internals and default configuration for Hadoop 
  services.
 
  My main gripe with Savanna is that it combines (in its upcoming release)
  what sounds like to me two very different services: Hadoop cluster
  provisioning service (like what Trove does for databases) and a
  MapReduce+ data API service (like what Marconi does for queues).
 
  Making it part of the same project (rather than two separate projects,
  potentially sharing the same program) make discussions about shifting
  some of its clustering ability to another library/project more complex
  than they should be (see below).
 
  Could you explain the benefit of having them within the same service,
  rather than two services with one consuming the other ?
 
 And for the record, i dont think that Trove is the perfect fit for it today. 
 We are still working on a clustering API. But when we create it, i would love 
 the Savanna team's input, so we can try to make a pluggable API thats usable 
 for people who want MySQL or Cassandra or even Hadoop. Im less a fan of a 
 clustering library, because in the end, we will both have API calls like POST 
 /clusters, GET /clusters, and there will be API duplication between the 
 projects.
 
  I think that a Cluster API (if it were created) would be helpful not only 
  for Trove and Savanna. NoSQL, RDBMS and Hadoop are not the only software 
  that can be clustered. What about different kinds of messaging solutions 
  like RabbitMQ, ActiveMQ or J2EE containers like JBoss, Weblogic and 
  WebSphere, which are often installed in clustered mode? Messaging, 
  databases, J2EE containers and Hadoop have their own management cycles. It 
  would be confusing to make a Cluster API part of Trove, which has a 
  different mission - database management and provisioning.

Are you suggesting a 3rd program, cluster as a service? Trove is trying to 
target a generic enough™ API to tackle different technologies with plugins or 
some sort of extensions. This will include a scheduler to determine rack 
awareness. Even if we decide that both Savanna and Trove need their own API for 
building clusters, I still want to understand what makes the Savanna API and 
implementation different, and how Trove can build an API/system that can 
encompass multiple datastore technologies. So regardless of how this shakes 
out, I would urge you to go to the Trove clustering summit session [1] so we 
can share ideas.

[1] http://summit.openstack.org/cfp/details/54


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] TC Meeting / Savanna Incubation Follow-Up

2013-09-13 Thread Michael Basnight

On Sep 13, 2013, at 9:05 AM, Alexander Kuznetsov wrote:

 
 
 
 On Fri, Sep 13, 2013 at 7:26 PM, Michael Basnight mbasni...@gmail.com wrote:
 On Sep 13, 2013, at 6:56 AM, Alexander Kuznetsov wrote:
  On Thu, Sep 12, 2013 at 7:30 PM, Michael Basnight mbasni...@gmail.com 
  wrote:
  On Sep 12, 2013, at 2:39 AM, Thierry Carrez wrote:
 
   Sergey Lukjanov wrote:
  
   [...]
   As you can see, resources provisioning is just one of the features and 
   the implementation details are not critical for overall architecture. It 
   performs only the first step of the cluster setup. We’ve been 
   considering Heat for a while, but ended up direct API calls in favor of 
   speed and simplicity. Going forward Heat integration will be done by 
   implementing extension mechanism [3] and [4] as part of Icehouse release.
  
   The next part, Hadoop cluster configuration, already extensible and we 
   have several plugins - Vanilla, Hortonworks Data Platform and Cloudera 
   plugin started too. This allow to unify management of different Hadoop 
   distributions under single control plane. The plugins are responsible 
   for correct Hadoop ecosystem configuration at already provisioned 
   resources and use different Hadoop management tools like Ambari to setup 
   and configure all cluster  services, so, there are no actual 
   provisioning configs on Savanna side in this case. Savanna and its 
   plugins encapsulate the knowledge of Hadoop internals and default 
   configuration for Hadoop services.
  
   My main gripe with Savanna is that it combines (in its upcoming release)
   what sounds like to me two very different services: Hadoop cluster
   provisioning service (like what Trove does for databases) and a
   MapReduce+ data API service (like what Marconi does for queues).
  
   Making it part of the same project (rather than two separate projects,
   potentially sharing the same program) make discussions about shifting
   some of its clustering ability to another library/project more complex
   than they should be (see below).
  
   Could you explain the benefit of having them within the same service,
   rather than two services with one consuming the other ?
 
  And for the record, i dont think that Trove is the perfect fit for it 
  today. We are still working on a clustering API. But when we create it, i 
  would love the Savanna team's input, so we can try to make a pluggable API 
  thats usable for people who want MySQL or Cassandra or even Hadoop. Im less 
  a fan of a clustering library, because in the end, we will both have API 
  calls like POST /clusters, GET /clusters, and there will be API duplication 
  between the projects.
 
   I think that a Cluster API (if it were created) would be helpful not only 
   for Trove and Savanna. NoSQL, RDBMS and Hadoop are not the only software 
   that can be clustered. What about different kinds of messaging solutions 
   like RabbitMQ, ActiveMQ or J2EE containers like JBoss, Weblogic and 
   WebSphere, which are often installed in clustered mode? Messaging, 
   databases, J2EE containers and Hadoop have their own management cycles. It 
   would be confusing to make a Cluster API part of Trove, which has a 
   different mission - database management and provisioning.
 
 Are you suggesting a 3rd program, cluster as a service? Trove is trying to 
 target a generic enough™ API to tackle different technologies with plugins or 
 some sort of extensions. This will include a scheduler to determine rack 
 awareness. Even if we decide that both Savanna and Trove need their own API 
 for building clusters, I still want to understand what makes the Savanna API 
 and implementation different, and how Trove can build an API/system that can 
 encompass multiple datastore technologies. So regardless of how this shakes 
 out, I would urge you to go to the Trove clustering summit session [1] so we 
 can share ideas.
 
  A generic enough™ API shouldn't contain database-specific calls like backup 
  and restore (already in Trove). Why would we need backup and restore 
  operations for J2EE or messaging solutions? 

I dont mean to encompass J2EE or messaging solutions. Let me amend my email to 
say to tackle different datastore technologies. But going with this point… Do 
you not need to backup things in a J2EE container? Id assume a backup is needed 
by all clusters, personally. I would not like a system that didnt have a way to 
backup and restore things in my cluster.


signature.asc
Description: Message signed with OpenPGP using GPGMail
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] When will we stop adding new Python modules to requirements

2013-09-16 Thread Michael Basnight

On Sep 16, 2013, at 8:42 AM, Matthias Runge wrote:

 On 16/09/13 17:36, Michael Basnight wrote:
 
 
 Not to forget python-troveclient, which is currently a hard
 requirement for Horizon.
 
 During the review for python-troveclient, it was discovered,
 troveclient still references reddwarfclient (in docs/source).
 
 Are you saying it references another codebase? Or just that when we
 renamed it we forgot to update a reference or two? If its the latter,
 is it relevant to this requirements issue? Also, I will gladly fix it
 and release it if its making anyone's life hell :)
 
 
 In my understanding, this is just due forgotten references during the
 rename, it's not relevant to the requirements issue.
 
 Currently, just the docs refer to reddwarf, resulting in build issues
 when building docs.
 

Whew! Ill fix it anyway. Thx for pointing it out.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] When will we stop adding new Python modules to requirements

2013-09-16 Thread Michael Basnight
On Sep 16, 2013, at 12:24 AM, Matthias Runge mru...@redhat.com wrote:

 On 16/09/13 05:30, Monty Taylor wrote:
 
 
 On 09/15/2013 01:47 PM, Alex Gaynor wrote:
 Falcon was included as a result of Marconi moving from stackforge to
 being incubated. sphinxcontrib-programoutput doesn't appear to have been
 added at all, it's still under
 review: https://review.openstack.org/#/c/46325/
 
 I agree with Alex and Morgan. falcon was the marconi thing.
 diskimage-builder and tripleo-image-elements are part of an OpenStack
 program.
 
 sphinxcontrib-programoutput is only a build depend for docs - but I
 think you're made a good point, and we should be in requirements freeze.
 Let's hold off on that one until icehouse opens.
 
 
 Not to forget python-troveclient, which is currently a hard requirement
 for Horizon.
 
 During the review for python-troveclient, it was discovered, troveclient
 still references reddwarfclient (in docs/source).

Are you saying it references another codebase? Or just that when we renamed it 
we forgot to update a reference or two? If its the latter, is it relevant to 
this requirements issue? Also, I will gladly fix it and release it if its 
making anyone's life hell :)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] When will we stop adding new Python modules to requirements

2013-09-16 Thread Michael Basnight

On Sep 16, 2013, at 9:05 AM, Matthias Runge wrote:

 On 16/09/13 17:51, Michael Basnight wrote:
 
  Currently, just the docs refer to reddwarf, resulting in build
  issues when building docs.
  
  
  Whew! Ill fix it anyway. Thx fro pointing it out.
  
 Awesome, Michael. Very much appreciated!

https://review.openstack.org/#/c/46755/


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] TC Meeting / Savanna Incubation Follow-Up

2013-09-19 Thread Michael Basnight
 On Sep 18, 2013, at 4:53 PM, Sergey Lukjanov slukja...@mirantis.com wrote:
 
 Hi folks,
 
 I have few comments on Hadoop cluster provisioning in Savanna.
 
  Now Savanna provisions instances, installs a management console (like Apache 
  Ambari) on one of them, and communicates with it using the REST API of the 
  installed console to prepare and run all requested services on all 
  instances. So, the only provisioning that we're doing in Savanna is the 
  instance and volume creation and their initial configuration, like 
  /etc/hosts generation for all instances. Most of these operations, or even 
  all of them, should eventually be removed by Heat integration during the 
  potential incubation in the Icehouse cycle; after that we'll be 
  concentrating on EDP (Elastic Data Processing) operations.
 
  I was surprised how much time was spent on the clustering discussion at the 
  last TC meeting and how few other questions there were. So, I think it'll be 
  better to separate the clustering discussion, which is a long-term activity 
  with plans to be discussed during the design summit, from the Savanna 
  incubation request, which should be finally discussed at the next TC 
  meeting. Of course, I think it's right for Savanna to participate in 
  clustering discussions. From our perspective, clustering should be 
  implemented as additional functionality in underlying services like Nova, 
  Cinder, Heat and libraries - Oslo, Taskflow - that will help projects like 
  Savanna, Trove, etc. to provision resources for clusters, and to scale and 
  terminate them. So, our role is to collaborate on the implementation of 
  such features. One more interesting idea is clustering API standardization; 
  it sounds interesting, but it looks like such APIs could be very different - 
  for example, our current working API [0] and Trove's draft Cluster API [1].

Draft APIs are subject to change :) until we put the code in place I would be 
ok modifying the API. +1 to working together at the summit to bring the API 
differences together. We have a trove clustering session and I'd LOVE to have 
savanna folk at it. Lets unify ideas!!

 
  I also would like to ensure that the Savanna team is 100% behind the idea of 
 doing full integration with all applicable OpenStack projects during 
 incubation.
 
 Thanks.
 
 [0] 
 https://savanna.readthedocs.org/en/latest/userdoc/rest_api_v1.0.html#node-group-templates
 [1] https://wiki.openstack.org/wiki/Trove-Replication-And-Clustering-API
 
 Sincerely yours,
 Sergey Lukjanov
 Savanna Technical Lead
 Mirantis Inc.
 
 On Sep 13, 2013, at 22:35, Clint Byrum cl...@fewbar.com wrote:
 
 Excerpts from Michael Basnight's message of 2013-09-13 08:26:07 -0700:
 On Sep 13, 2013, at 6:56 AM, Alexander Kuznetsov wrote:
 On Thu, Sep 12, 2013 at 7:30 PM, Michael Basnight mbasni...@gmail.com 
 wrote:
 On Sep 12, 2013, at 2:39 AM, Thierry Carrez wrote:
 
 Sergey Lukjanov wrote:
 
 [...]
 As you can see, resources provisioning is just one of the features and 
 the implementation details are not critical for overall architecture. It 
 performs only the first step of the cluster setup. We’ve been 
 considering Heat for a while, but ended up direct API calls in favor of 
 speed and simplicity. Going forward Heat integration will be done by 
 implementing extension mechanism [3] and [4] as part of Icehouse release.
 
 The next part, Hadoop cluster configuration, already extensible and we 
 have several plugins - Vanilla, Hortonworks Data Platform and Cloudera 
 plugin started too. This allow to unify management of different Hadoop 
 distributions under single control plane. The plugins are responsible 
 for correct Hadoop ecosystem configuration at already provisioned 
 resources and use different Hadoop management tools like Ambari to setup 
 and configure all cluster  services, so, there are no actual 
 provisioning configs on Savanna side in this case. Savanna and its 
 plugins encapsulate the knowledge of Hadoop internals and default 
 configuration for Hadoop services.
 
 My main gripe with Savanna is that it combines (in its upcoming release)
 what sounds like to me two very different services: Hadoop cluster
 provisioning service (like what Trove does for databases) and a
 MapReduce+ data API service (like what Marconi does for queues).
 
 Making it part of the same project (rather than two separate projects,
 potentially sharing the same program) make discussions about shifting
 some of its clustering ability to another library/project more complex
 than they should be (see below).
 
 Could you explain the benefit of having them within the same service,
 rather than two services with one consuming the other ?
 
 And for the record, i dont think that Trove is the perfect fit for it 
 today. We are still working on a clustering API. But when we create it, i 
 would love the Savanna team's input, so we can try to make a pluggable API 
 thats usable for people who want MySQL or Cassandra or even Hadoop. Im 
 less a fan

Re: [openstack-dev] On the usage of pip vs. setup.py install

2013-09-23 Thread Michael Basnight
But I got suddenly full. Interesting thing that is. 

Sent from my digital shackles

 On Sep 23, 2013, at 7:16 PM, Joshua Harlow harlo...@yahoo-inc.com wrote:
 
 I ran that but world peace didn't happen.
 
 Where can I get my refund?
 
 Sent from my really tiny device...
 
 On Sep 23, 2013, at 6:47 PM, Monty Taylor mord...@inaugust.com wrote:
 
 tl;dr - easy_install sucks, so use pip
 
 It is common practice in python to run:
 
 python setup.py install
 or
 python setup.py develop
 
 So much so that we spend a giant amount of effort to make sure that
 those always work.
 
 Fortunately for us, the underlying mechanism, setuptools, can often be a
 pile of monkies. pip, while also with its fair share of issues, _is_ a
 bit better at navigating the shallow waters at times. SO - I'd like to
 suggest:
 
 Instead of:
 
 python setup.py install
 
 Run:
 
 pip install .
 
 It should have the exact same result, but pip can succeed in some places
 where setup.py install directly can fail.
 
 Also, if you'd like to run python setup.py develop, simply run:
 
 pip install -e .
 
 Which you may not have known will run setup.py develop behind the scenes.
 
 Things this will help with:
 - world peace
 - requirements processing
 - global hunger
 - the plague
 
 Enjoy.
 
 Monty
 
 PS. The other should work. It's just sometimes it doesn't, and when it
 doesn't it's less my fault.
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [trove] MySQL HA BP

2013-09-26 Thread Michael Basnight
On Sep 26, 2013, at 6:07 AM, Ilya Sviridov wrote:

 The blueprint https://blueprints.launchpad.net/trove/+spec/mysql-ha
 
  In order to become a production-ready DBaaS, Trove should provide the 
  ability to deploy and manage highly available databases.
  
  There are several approaches to achieve HA in MySQL: setups driven by high 
  availability resource managers like Pacemaker [1], master-master 
  replication, Percona XTraDB Cluster [2] based on the Galera library [3], 
  and so on.
  
  But since Trove DB instances run in a cloud environment, the general 
  approach may not always be the most suitable option and should be discussed
 
 --
 [1] http://clusterlabs.org/
 [2] http://www.percona.com/software/percona-xtradb-cluster
 [3] https://launchpad.net/galera

This, to me, is a perfect fit for our (work in progress) clustering API. The 
Codership (galera) team and the Continuent (tungsten) team have both expressed 
interest in helping to guide the Trove team toward building an awesome 
clustering product. I think it'll be up to the people who want to contribute to 
define which MySQL clustering impl will be the best in Trove. Each of the 
clustering products has pros/cons, so its hard to say Trove will support only X 
for clustering. Im personally a fan of both of the aforementioned clustering 
products because they are different. Galera is synchronous multi master, and 
tungsten is async master/slave. Our first implementation is, of course, basic 
master/slave builtin replication, and it too solves a problem. 

So to answer your question, i think we can build a MySQL cluster using 
different technologies and let operators choose what they want to support. And 
better yet, if operators choose to support > 1 cluster type, we can let 
customers choose what they feel is the right choice for their data.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [trove] Configuration API BP

2013-09-26 Thread Michael Basnight
On Sep 25, 2013, at 7:16 PM, Craig Vyvial wrote:

 So we have a blueprint for this and there are a couple things to point out 
 that have changed since the inception of this BP.
 
 https://blueprints.launchpad.net/trove/+spec/configuration-management
 
 This is an overview of the API calls:
 
 POST /configurations - create config
 GET  /configurations - list all configs
 PUT  /configurations/{id} - update all the parameters
 
 GET  /configurations/{id} - get details on a single config
 GET  /configurations/{id}/{key} - get single parameter value that was set for 
 the configuration
 
 PUT  /configurations/{id}/{key} - update/insert a single parameter
 DELETE  /configurations/{id}/{key} - delete a single parameter
 
 GET  /configurations/{id}/instances - list of instances the config is 
 assigned to
 GET  /configurations/parameters - list of all configuration parameters
 
 GET  /configurations/parameters/{key} - get details on a configuration 
 parameter
 
 There has been talk about using the PATCH http action instead of the PUT 
 action for the update of individual parameter(s).
 
 PUT  /configurations/{id}/{key} - update/insert a single parameter
 and/or
 PATCH  /configurations/{id} - update/insert parameter(s)
 
 
 I am not sold on the idea of using PATCH unless its widely used in other 
 projects across Openstack. What does everyone think about this?
 
 If there are any concerns around this please let me know.

Im a fan of PATCH. Id rather have a different verb on the same resource than 
creating a new sub-resource just to do the job of what PATCH defines. Im not 
sure the [1] gives us any value, and i think its only around because of [2]. I 
can see PATCH removing the need for [1], simplifying the API. And of course 
removing the need for [2] since it _is_ the updating of a single kv pair. And i 
know keystone and glance use PATCH for updates in their API as well. 

[1]  GET /configurations/{id}/{key} 
[2] PUT  /configurations/{id}/{key} 
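
A minimal sketch of the contrast in Python, assuming a hypothetical endpoint,
token and payload shape (illustration only, not the final API):

    import json
    import requests

    url = "http://trove.example.com:8779/v1.0/TENANT/configurations/CONFIG_ID"
    headers = {"X-Auth-Token": "TOKEN", "Content-Type": "application/json"}

    # PUT replaces the entire parameter set of the configuration group.
    requests.put(url, headers=headers, data=json.dumps(
        {"configuration": {"values": {"max_connections": 200,
                                      "wait_timeout": 120}}}))

    # PATCH merges in only the parameters supplied, which is what makes the
    # per-key sub-resource ([1]/[2] above) unnecessary.
    requests.patch(url, headers=headers, data=json.dumps(
        {"configuration": {"values": {"max_connections": 300}}}))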


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] trove is already a source package in Debian and a Python module in PyPi

2013-10-01 Thread Michael Basnight

On Oct 1, 2013, at 6:45 AM, Thomas Goirand wrote:

 Hi,
 
 I have just found out that there's already a source package called
 trove in Debian:
 
 http://packages.debian.org/source/sid/trove
 
 Lucky, I don't think it will clash with OpenStack trove since it
 produces libtrove-java* .deb files, though that's quite annoying. The
 source package for OpenStack Trove will have to be called something like
 openstack-trove, which isn't nice.
 
 Also, in PyPi, there is:
 https://pypi.python.org/pypi/TroveClient
 
 which would clash with python-troveclient. This pypi module was uploaded
 more than a year ago. Since I have already uploaded python-troveclient
 (currently waiting in the FTP master's NEW queue), OpenStack troveclient
 will be in Sid, but if some day, someone wants to upload TroveClient
 from http://dev.yourtrove.com, then we have a problem.
 
 So, I am really questioning the choice of Trove as a name for the
 DBaaS in OpenStack. I think it is a pretty bad move. :(
 
 Is it too late to fix this?

Well this sucks. Im not sure im a fan of renaming it because of the previous 
existence of a package. Renaming is not fun. Ill let the more experienced 
openstack peoples help decide on this...



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] trove is already a source package in Debian and a Python module in PyPi

2013-10-01 Thread Michael Basnight

On Oct 1, 2013, at 8:00 AM, Nicholas Chase wrote:

 
 
 On 10/1/2013 10:53 AM, Michael Basnight wrote:
 
 Well this sucks. Im not sure im a fan of renaming it because of the previous 
 existence of a package. Renaming is not fun. Ill let the more experienced 
 openstack peoples help decide on this...
 
 Certainly it's not fun, but wouldn't it be easier to do it NOW, rather than 
 later?

Hah, well, it raises the question: why are we _not_ namespacing all OpenStack 
packages? If you look at the RPMs, they are all namespaced [1]. IMHO, I 
wouldn't mind seeing openstack-blah packages. And let me reiterate, I'm not 
sure changing our name _again_ is the way to go because of a conflicting Debian 
package. This won't be a problem for the RPMs, because they are namespaced 
(openstack-trove).

[1] https://admin.fedoraproject.org/pkgdb/acls/name/openstack-nova




Re: [openstack-dev] trove is already a source package in Debian and a Python module in PyPi

2013-10-01 Thread Michael Basnight
Thx for getting this squared away zigo!!

On Oct 1, 2013, at 10:40 AM, Thomas Goirand wrote:

 Hi,
 
 First of all, before I reply, I'd like to say that, hopefully, we're fine
 in this specific case (at least on the Debian side of things). Though it
 really shouldn't have happened, and I believe it's my role to warn
 everybody about the risks.
 
 On 10/02/2013 12:02 AM, Monty Taylor wrote:
 
 
 On 10/01/2013 11:45 AM, Thomas Goirand wrote:
 On 10/01/2013 10:02 PM, Jonathan Dowland wrote:
 On Tue, Oct 01, 2013 at 09:45:17PM +0800, Thomas Goirand wrote:
 Since I have already uploaded python-troveclient (currently waiting in
 the FTP master's NEW queue), OpenStack troveclient will be in Sid, but
 if some day, someone wants to upload TroveClient from
 http://dev.yourtrove.com, then we have a problem.
 …
 Is it too late to fix this?
 
 I don't think it's a problem. Two reasons:
 
 a) libtrove-java having its source package named trove is a bug. The
 upstream project is called trove4j; that's what their source package
 should have been called. We can't be held accountable for that.
 
 The source package name isn't a problem; it can be anything. I think I
 will use openstack-trove. By the way, it's only trove4j because there
 was trove3 before, if I understand correctly (I found that by doing apt-file
 search trove on my Sid box).
 
 b) We got to python-troveclient first. We win. (Sorry, that's rude, but
 we _did_ get into the queue first. That's the reason that pip is
 python-pip in Debian: because there is a pip tool in Perl.) TroveClient
 released last January and dev.yourtrove.com is unresponsive.
 
 Yes, that might be right for the Debian package. Though at PyPI, there's
 TroveClient already. Or is PyPI case-sensitive, in which case we don't
 have a problem here?
 
 I DO agree that we need to be careful with them, and I think that a fair
 list of things to check when looking for a name is:
 
 PyPI
 Launchpad
 Debian
 Fedora
 
 Please also add Gentoo to the list (since there is an ongoing effort to
 package OpenStack there as well).
 
 We might need some follow-up from Fedora and Debian folks about HOW
 people should search for names and what things should be considered in
 conflict.
 
 For Debian:
 
 1/ apt-cache search PACKAGE-NAME
 2/
 http://packages.debian.org/search?keywords=PACKAGE-NAME&searchon=names&suite=all&section=all
 3/ apt-file search PACKAGE-NAME
 4/ http://www.debian.org/devel/wnpp/being_packaged
 5/ http://www.debian.org/devel/wnpp/requested
 
 This might not be an exhaustive list of checks.
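
For anyone who'd rather script checks 1/ through 3/, a minimal sketch in
Python (it assumes only that the public PyPI JSON API and the Debian package
search page behave as they do today, and the substring test on the Debian
results is admittedly crude):

    import requests

    def pypi_taken(name):
        # PyPI's JSON API returns 404 for names that are not registered.
        r = requests.get("https://pypi.org/pypi/%s/json" % name)
        return r.status_code == 200

    def debian_taken(name):
        r = requests.get("https://packages.debian.org/search",
                         params={"keywords": name, "searchon": "names",
                                 "suite": "all", "section": "all"})
        return ("Package %s" % name) in r.text

    for candidate in ("trove", "troveclient"):
        print(candidate, pypi_taken(candidate), debian_taken(candidate))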
 
 And obviously, your favorite search engine might help (avoiding
 trademark issues, as everyone knows, is also very important). In this
 case, it shows a few results that aren't encouraging, and ... nothing
 about OpenStack trove.
 
 Also, while using apt-file search, make sure that no file in /usr/bin is
 in conflict. Just so everyone understands, allow me to recall the NodeJS
 case: NodeJS used /usr/bin/node, while an amateur radio AX.25 server
 (the node source package) used /usr/sbin/node. Then we had this
 decision from the technical committee:
 
 http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=614907#113
 
 As a result, since NodeJS never migrated to Testing before the freeze
 date (and that is a requirement), we released Wheezy without NodeJS.
 
 This kind of disaster scenario should be avoided at all costs. Note that
 within Debian, there's no "this package is more important than that one"
 kind of thing, so I don't believe that OpenStack would have any
 priority in this kind of case, so it is very important to get things right.
 
 Remember that most of our 1200 devs do not have any idea how
 Debian packaging works. Those of us who _do_ need to give very short and
 succinct guidelines, such as:
 
 The package name should not conflict with source or binary package names
 in Debian. You can search for those by ...
 
 Well, not only the binary package names (for the source package, it
 matters a lot less; we can use anything). What's even more important is
 that we should never have *installed files* collisions. Declaring a
 Breaks: or a Conflicts: doesn't cut it, unless we have 2 implementations
 of the same thing, with the same functionality (for example mawk vs
 gawk, in which case we can use update-alternatives (see manpage)). For
 example, if one day someone wants to package the TroveClient from
 dev.yourtrove.com, we would have an unsolvable file-collision problem (in
 the Python dist-packages folder).
 
 However, again, in this case I think it's fine, and I do not think we
 need to rename trove because there happens to be a package called trove4j.
 
 I just hope so as well.
 
 Thomas Goirand (zigo)
 
 P.S: I will use openstack-trove as source package.
 
 

Re: [openstack-dev] [TROVE] Thoughts on DNS refactoring, Designate integration.

2013-10-01 Thread Michael Basnight
 On Oct 1, 2013, at 3:06 PM, Ilya Sviridov isviri...@mirantis.com wrote:
 
 
 On Tue, Oct 1, 2013 at 6:45 PM, Tim Simpson tim.simp...@rackspace.com 
 wrote:
 Hi fellow Trove devs,
 
 With the Designate project ramping up, it's time to refactor the ancient DNS 
 code that's in Trove to work with Designate.
 
 The good news is since the beginning, it has been possible to add new 
 drivers for DNS in order to use different services. Right now we only have a 
 driver for the Rackspace DNS API, but it should be possible to write one for 
 Designate as well.
 
 How does this correlate with Trove's direction of using Heat for all 
 provisioning and managing of cloud resources? 
 There are BPs for a Designate resource 
 (https://blueprints.launchpad.net/heat/+spec/designate-resource) and for 
 Rackspace DNS (https://blueprints.launchpad.net/heat/+spec/rax-dns-resource) 
 as well, and it seems logical to use Heat for that.
 
 Currently Trove has logic for provisioning instances, a DNS driver, and 
 security group creation, but by switching to the Heat way we end up with 
 duplicated functionality that we have to support. 

+1 to using heat for this. However, as people are working on heat support right 
now to make it more sound, if there is a group that wants/needs DNS refactoring 
now, I'd say let's add it in. If no one needs to change what exists 
until we get better heat support, then we should just abandon the review and 
leave the existing DNS code as is. 

I would prefer, if there is no one in need, to abandon the existing review and 
add the DNS work to the heat support effort. 

  
 
 However, there's a bigger topic here. In a gist sent to me recently by 
 Dennis M. with his thoughts on how this work should proceed, he included the 
 comment that Trove should *only* support Designate: 
 https://gist.github.com/crazymac/6705456/raw/2a16c7a249e73b3e42d98f5319db167f8d09abe0/gistfile1.txt
 
 I disagree. I have been waiting for a canonical DNS solution such as 
 Designate to enter the OpenStack umbrella for years now, and am looking 
 forward to having Trove consume it. However, changing all the code so that 
 nothing else works is premature.
 
 All non-mainstream resources, like cloud-provider-specific ones, can be 
 implemented as Heat plugins (https://wiki.openstack.org/wiki/Heat/Plugins)
  
 
 Instead, let's start work to play well with Designate now, using the open 
 interface that has always existed. In the future after Designate enters 
 integration status we can then make the code closed and only support 
 Designate.
 
 Do we really need to play with Designate and then replace that work? I expect 
 the Designate resource will come together with Designate, or even earlier.
 
 With best regards,
 Ilya Sviridov
 
 
 Denis also had some other comments about the DNS code, such as not passing a 
 single object as a parameter because it could be None. I think this is in 
 reference to passing around a DNS entry which gets formed by the DNS 
 instance entry factory. I see how someone might think this is brittle, but 
 in actuality it has worked for several years, so if anything, changing it 
 would introduce bugs. The interface was also written to use a full object in 
 order to be flexible; a full object should make it easier to work with 
 different types of DnsDriver implementations, as well as allowing more 
 options to be set from the DnsInstanceEntryFactory. This latter class creates 
 a DnsEntry from an instance_id. It is possible that two deployments of 
 Trove, even if they are using Designate, might opt for different 
 DnsInstanceEntryFactory implementations in order to give the DNS entries 
 associated to databases different patterns. If the DNS entry is created at 
 this point its easier to further customize and tailor it. This will hold 
 true even when Designate is ready to become the only DNS option we support 
 (if such a thing is desirable).
 
 Thanks,
 
 Tim
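
For anyone who hasn't seen the driver interface Tim describes, here is a 
bare-bones sketch of what a Designate-flavored implementation might look like 
(the method and client calls are illustrative guesses, not the actual Trove 
or python-designateclient API):

    class DesignateDriver(object):
        """Sketch of a DnsDriver backed by Designate (names illustrative)."""

        def __init__(self, client, domain_id):
            self.client = client        # some Designate client handle
            self.domain_id = domain_id

        def create_entry(self, entry):
            # 'entry' is the DnsEntry built by a DnsInstanceEntryFactory.
            self.client.records.create(self.domain_id, name=entry.name,
                                       type=entry.type, data=entry.content)

        def delete_entry(self, entry):
            for record in self.client.records.list(self.domain_id):
                if record.name == entry.name and record.type == entry.type:
                    self.client.records.delete(self.domain_id, record.id)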
 
 
 
 


Re: [openstack-dev] [trove] Configuration API BP

2013-10-01 Thread Michael Basnight
On Oct 1, 2013, at 7:19 PM, Vipul Sabhaya wrote:
 
 So from this API, I see that a configuration is a standalone resource that 
 could be applied to any number of instances.  It's not clear to me what the 
 API is for 'applying' a configuration to an existing instance.  

https://wiki.openstack.org/wiki/Trove/Configurations#Update_an_Instance_.28PUT.29
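
Roughly, attaching would look like this from a client (a sketch only -- the 
payload shape is my reading of the wiki draft and could change):

    import json
    import requests

    base = "https://trove.example.com/v1.0/1234"    # hypothetical endpoint
    headers = {"X-Auth-Token": "my-token",
               "Content-Type": "application/json"}

    requests.put("%s/instances/%s" % (base, "some-instance-uuid"),
                 headers=headers,
                 data=json.dumps({"instance":
                                  {"configuration": "some-config-uuid"}}))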

 Also if we change a single item in a given configuration, does that change 
 propagate to all instances that configuration belongs to? 

That's a good question. Maybe PATCHing a configuration is not a good thing. We 
either have 1) drift between an attached config on an instance vs the template, 
or 2) a confusing situation where we are potentially updating configurations on 
running instances. Another possibility is that a PATCH would, in effect, clone 
the existing template, if it's in use, giving it a new UUID with the updated 
parameters. But I'm not sure I like that approach either… It's really not a 
PATCH at that point anyway, I'd think.

Blueprint designers, I'm looking to you for clarity.

 What about making 'configuration' a sub-resource of /instances?  
 
 Unless we think configurations will be common amongst instances for a given 
 tenant, it may not make sense to make them high level resources. 

As in /instances/configurations, or /instances/{id}/configurations? I do see 
commonality between a configuration and a bunch of instances, such that a 
configuration is not unique to a single instance. I can see a reseller having a 
tweaked-out "works with _insert your favorite CMS here_" template applied to 
all the instances they provision.





Re: [openstack-dev] [trove] Configuration API BP

2013-10-02 Thread Michael Basnight
On Oct 2, 2013, at 9:21 AM, Craig Vyvial wrote:

 If a configuration that is attached to N instances is updated, then those 
 instances will be updated with the configuration overrides. This will keep 
 the configuration n-sync [hah, 90s boy band reference] with instances that 
 have it attached. I'm not sure that this is really a confusing situation 
 because you are updating the configuration key/values. This will not update 
 the UUID of the configuration, because we are not trying to make the changes 
 like a versioned system. 

OK, if that's the expected behavior, I'm OK with that. ByeByeBye to the other 
options ;)
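
Server-side, the propagation Craig describes might look something like this 
(a loose sketch; the db_api/guest_api names are made up for illustration):

    def update_configuration(context, config_id, new_values):
        """Persist new overrides, then push them to attached instances."""
        db_api.configuration_update(config_id, new_values)

        for instance in db_api.instances_with_configuration(config_id):
            # Fan out to each guest agent; a restart may still be needed
            # for parameters the datastore cannot apply dynamically.
            guest_api.API(context, instance.id).update_overrides(new_values)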

 
 'configuration' is a resource that can be applied only to instances. Making 
 it a sub-resource of '/instances' is an option but does that warrant it 
 always being tied to an instance?

No.

 
 Each configuration is unique to a tenant and therefore doesn't allow a 
 reseller to create a tweaked-out config. I see value in allowing resellers to 
 do this, but currently they can update the templates that are used in the 
 system.

I mean a single tenant being that reseller. He has one template he applies to 
all his DB instances for his customers, which he supports outside of Trove.




Re: [openstack-dev] [trove] Configuration API BP

2013-10-02 Thread Michael Basnight
On Oct 1, 2013, at 11:20 PM, McReynolds, Auston wrote:

 I have a few questions left unanswered by the blueprint/wiki:
 
 #1 - Should the true default configuration-group for a service-type be
customizable by the cloud provider?

Yes

 
 #2 - Should a user be able to enumerate the entire actualized/realized
set of values for a configuration-group, or just the overrides?

actualized

 
 #3 - Should a user be able to apply a different configuration-group on
a master, than say, a slave?

Yes

 
 #4 - If a user creates a new configuration-group with values equal to
that of the default configuration-group, what is the expected
behavior?

I'm not sure that's an issue. You will select your config group, and it will be 
the one used. I believe you are talking about the difference between the 
template that's used to set up values for the instance, and the config options 
that users are allowed to edit. Those are going to be appended, so to speak, to 
the existing template. It'll be up to the server software to define in what 
order values, if duplicated, are read/used.
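
In other words, the rendered file is the stock template plus an appended 
override section, and the datastore's own parsing settles duplicates (MySQL, 
for one, takes the last occurrence of a repeated option). A toy sketch of that 
rendering (the template/override shapes are invented for illustration):

    def render_config(template_text, overrides):
        # Append user overrides after the stock template; a parameter that
        # appears twice is resolved by whichever value the datastore reads
        # last.
        lines = [template_text, "", "# ---- user overrides ----"]
        for key, value in sorted(overrides.items()):
            lines.append("%s = %s" % (key, value))
        return "\n".join(lines)

    print(render_config("[mysqld]\nmax_connections = 100",
                        {"max_connections": 400, "wait_timeout": 120}))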

 
 #5 - For GET /configuration/parameters, where is the list of supported
parameters and their metadata sourced from?

I believe it's a DB table… someone may have to correct me there.

 
 #6 - Should a user be able to reset a configuration-group to the
current default configuration-group?

Yes, assuming we have a default config group, and I'm not sure we have a 
concept of that. We have what the install creates, the templated config file. 
Removing the association of your config from the instance will do this, though.

 
 #7 - Is a new configuration-group a clone of the then current default
configuration-group with various changes, or will inheritence be
utilized?

I think clone will be saner for now. But you can edit your group with a PATCH, 
and that will not clone it. See the first paragraph of [1].

 
 #8 - How should the state of pending configuration-group changes be
reflected in GET /instances/:id ? How will this state be
persisted?

You are talking about changes that require a restart, I believe. I think this 
falls into the same category as our conversation about minor version updates. 
We can have a pretty generic "restart required" flag somewhere there.

 
 #9 - Reminder: Once multiple service-types and versions are supported,
the configuration-group will need a service-type field.

Most def. You will only be able to assign relevant configs to their 
service-types, and the /configuration/parameters will need to be typed too.

 
 #10 - Should dynamic values (via functions and operators) in
  configuration-groups be supported?
  Example: innodb_buffer_pool_size = 150 * flavor['ram']/512

Hmm. This is quite interesting. But no, not v1. I totally agree w/ the 
nice-to-have. Good idea though, we should add it to the blueprint.
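
To sketch what the nice-to-have could look like (a toy take, invented here; 
nothing like it exists in the blueprint yet): rather than parsing expression 
strings, dynamic parameters could simply be callables of the flavor.

    def resolve_dynamic(parameters, flavor):
        # Evaluate any callable parameter against the instance's flavor;
        # plain values pass through untouched.
        return dict((name, fn(flavor) if callable(fn) else fn)
                    for name, fn in parameters.items())

    config = {
        "wait_timeout": 120,
        "innodb_buffer_pool_size": lambda flavor: 150 * flavor["ram"] // 512,
    }
    print(resolve_dynamic(config, {"ram": 4096}))
    # {'wait_timeout': 120, 'innodb_buffer_pool_size': 1200}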

 
 My Thoughts:
 
 #1 - Yes
 #2 - Actualized
 #3 - Yes
 #4 - Depends on whether the approach for configuration-groups is to
clone or to inherit.
 #5 - ?
 #6 - Yes
 #7 - ?
 #8 - ?
 #9 - N/A
 #10 - In the first iteration of this feature I don't think it's an
  absolute necessity, but it's definitely a nice-to-have. The
  design/implementation should not preclude this from being easily
  added in the future.
 
 Where ? == I'd like to think about it a bit more, but I have a gut
 feeling
 
 Thoughts?

[1] http://lists.openstack.org/pipermail/openstack-dev/2013-October/015919.html

