Re: [openstack-dev] [Trove] Proposal to add Amrith Kumar to trove-core
+1

On Tue, Aug 26, 2014 at 8:54 AM, Tim Simpson tim.simp...@rackspace.com wrote:

> +1
>
> From: Sergey Gotliv [sgot...@redhat.com]
> Sent: Tuesday, August 26, 2014 8:11 AM
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [Trove] Proposal to add Amrith Kumar to trove-core
>
> Strong +1 from me!
>
> -----Original Message-----
> From: Nikhil Manchanda [mailto:nik...@manchanda.me]
> Sent: August-26-14 3:48 AM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: [openstack-dev] [Trove] Proposal to add Amrith Kumar to trove-core
>
> Hello folks:
>
> I'm proposing to add Amrith Kumar (amrith on IRC) to trove-core.
>
> Amrith has been working with Trove for a while now. He has been a
> consistently active reviewer, and has provided insightful comments on
> numerous reviews. He has submitted quality code for multiple bug fixes
> in Trove, and most recently drove the audit and clean-up of log
> messages across all Trove components.
>
> https://review.openstack.org/#/q/reviewer:amrith,n,z
> https://review.openstack.org/#/q/owner:amrith,n,z
>
> Please respond with +1/-1, or any further comments.
>
> Thanks,
> Nikhil

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [trove] datastore migration issues
There is the database migration for datastores. We should add a function to back-fill the existing data with either dummy data, or set it to 'mysql', as that was the only possibility before datastores.

On Dec 18, 2013 3:23 PM, Greg Hill greg.h...@rackspace.com wrote:

> I've been working on fixing a bug related to migrating existing
> installations to the new datastore code:
>
> https://bugs.launchpad.net/trove/+bug/1259642
>
> The basic gist is that existing instances won't have any data in the
> datastore_version_id field in the database unless we somehow populate
> that data during migration, and not having that data populated breaks
> a lot of things (including the ability to list instances, or to delete
> or resize old instances). It's impossible to populate that data in an
> automatic, generic way, since which database and version an operator
> currently supports is highly vendor-dependent, and there's not enough
> data in the older schema to populate the new tables automatically.
>
> So far, we've come up with some non-optimal solutions:
>
> 1. The first iteration was to assume 'mysql' as the database manager
>    on instances without a datastore set.
> 2. The next iteration was to make the default value configurable in
>    trove.conf, but default to 'mysql' if it wasn't set.
> 3. It was then proposed that we could just use the 'default_datastore'
>    value from the config, which may or may not be set by the operator.
>
> My problem with any of these approaches beyond the first is that
> requiring people to populate config values in order to successfully
> migrate to the newer code is really no different from requiring them
> to populate the new database tables with appropriate data and update
> the existing instances with the appropriate values. Either way, it's
> now highly dependent on the people deploying the upgrade knowing about
> this change and reacting accordingly.
>
> Does anyone have a better solution that we aren't considering? Is this
> even worth the effort, given that Trove has so few current deployments?
> We could just make sure everyone populates the new tables as part of
> their upgrade path and not bother fixing the code to deal with the
> legacy data.
>
> Greg
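The back-fill Robert suggests could look roughly like the sketch below. The table and column names (`instances`, `datastore_version_id`) come from the thread; the in-memory SQLite schema and the `DEFAULT_VERSION_ID` placeholder are assumptions for illustration — a real Trove migration would run against the deployment's database with whatever version id the operator maps legacy instances to.

```python
import sqlite3

# Hypothetical value the operator maps legacy instances to, e.g. the id
# of a pre-created 'mysql' datastore version (assumed for this sketch).
DEFAULT_VERSION_ID = "mysql-placeholder"

def backfill_datastore_version(conn, default_version_id):
    """Set datastore_version_id on rows created before datastores existed."""
    cur = conn.execute(
        "UPDATE instances SET datastore_version_id = ? "
        "WHERE datastore_version_id IS NULL",
        (default_version_id,),
    )
    return cur.rowcount  # number of legacy rows back-filled

# Demo against a tiny stand-in for Trove's instances table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE instances (id TEXT, datastore_version_id TEXT)")
conn.executemany(
    "INSERT INTO instances VALUES (?, ?)",
    [("old-1", None), ("old-2", None), ("new-1", "redis-2.8")],
)
fixed = backfill_datastore_version(conn, DEFAULT_VERSION_ID)
print(fixed)  # 2 -- only the legacy rows are touched
```

The point of the sketch is that the UPDATE only targets NULL rows, so instances created after the datastore change keep their real values.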
Re: [openstack-dev] [trove] datastore migration issues
I think that we need to be good citizens and at least add dummy data. Because it is impossible to know who all is using this, the list you have is probably not complete: Trove has been available for quite some time, and all of these users will not be listening on this thread. Basically, any time you have a database migration that adds a required field, you *have* to alter the existing rows. If we don't, we're basically telling everyone who upgrades that we, the 'Database as a Service' team, don't care about data integrity in our own product :)

Robert

On Thu, Dec 19, 2013 at 9:25 AM, Greg Hill greg.h...@rackspace.com wrote:

> We did consider doing that, but decided it wasn't really any different
> from the other options, as it requires the deployer to know to alter
> that data. It would require the fewest code changes, though.
>
> It was also my understanding that mysql variants (percona and mariadb)
> were a possibility as well, which is what brought on the objection to
> just defaulting in code. Also, we can't derive the version being used,
> so we *could* fill it with a dummy version and assume mysql, but I
> don't feel like that solves the problem or the objections to the
> earlier solutions. And then we also have bogus data in the database.
>
> Since there's no perfect solution, I'm really just hoping to gather
> consensus among the people who are running existing Trove
> installations and have yet to upgrade to the newer code about what
> would be easiest for them. My understanding is that the list is
> basically HP and Rackspace, and maybe eBay, but the hope was that
> bringing the issue up on the list might confirm or refute that
> assumption and drive the conversation to a suitable workaround for
> those affected, which hopefully isn't that many organizations at this
> point.
>
> The options are basically:
>
> 1. Put the onus on the deployer to correct existing records in the
>    database.
> 2. Have the migration script put dummy data in the database which you
>    then have to correct.
> 3. Put the onus on the deployer to fill out values in the config file.
>
> Greg
Re: [openstack-dev] [trove] Proposal to add Auston McReynolds to trove-core
+1

On Fri, Dec 27, 2013 at 4:48 PM, Michael Basnight mbasni...@gmail.com wrote:

> Howdy,
>
> I'm proposing Auston McReynolds (amcrn) to trove-core.
>
> Auston has been working with Trove for a while now. He is a great
> reviewer: incredibly thorough, he has caught more than one critical
> error in reviews, and he helps connect large features that may overlap
> (config edits + multiple datastores comes to mind). The code he
> submits is top notch, and we frequently ask for his opinion on
> architecture / feature / design.
>
> https://review.openstack.org/#/dashboard/8214
> https://review.openstack.org/#/q/owner:8214,n,z
> https://review.openstack.org/#/q/reviewer:8214,n,z
>
> Please respond with +1/-1, or any further comments.
Re: [openstack-dev] [Trove] how to list available configuration parameters for datastores
I like #4 over #5, because it seems weird to have to create a configuration first just to see what parameters are allowed. With #4 you could look up what is allowed first, then create your configuration.

Robert

On Jan 22, 2014 10:18 AM, Craig Vyvial cp16...@gmail.com wrote:

> Hey everyone,
>
> I have run into an issue with the configuration parameter URI. I'd
> like some input on what the URI might look like for getting the list
> of configuration parameters for a specific datastore.
>
> Problem: configuration parameters need to be selected per datastore.
> Currently the API assumes the default (mysql) datastore, and this
> won't work for other datastores like redis/cassandra/etc.:
>
>     /configurations/parameters - parameter list for mysql
>     /configurations/parameters/parameter_name - details of a parameter
>
> We need to be able to request the parameter list per datastore. Here
> are some suggestions outlining how each method might work.
>
> ONE:
>     /configurations/parameters?datastore=mysql - list parameters for mysql
>     /configurations/parameters?datastore=redis - list parameters for redis
>   - We do not use query parameters for anything other than pagination
>     (limit and marker).
>   - This requires some finagling with the context to add the
>     datastore: https://gist.github.com/cp16net/8547197
>
> TWO:
>     /configurations/parameters - list of datastores that have
>         configuration parameters
>     /configurations/parameters/datastore - list of parameters for the
>         datastore
>
> THREE:
>     /datastores/datastore/configuration/parameters - list the
>         parameters for the datastore
>
> FOUR:
>     /datastores/datastore - add an href in the response pointing to
>         the configuration parameter list for the datastore
>     /configurations/parameters/datastore - list of parameters for the
>         datastore
>
> FIVE:
>     Require that a configuration be created with a datastore; a user
>     may then list the configuration parameters allowed on that
>     configuration.
>     /configurations/config_id/parameters - parameter list for the
>         configuration's datastore
>
> After some thought, I think method FIVE might be the best way to
> handle this.
>
> I've outlined a few ways we could make this work. Let me know if you
> agree, or why you may disagree with strategy FIVE.
>
> Thanks,
> Craig Vyvial
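To make option FOUR (the one Robert favors) concrete, here is a minimal sketch of how the two routes could resolve. The regex dispatcher, the datastore names, and the parameter data are all invented for illustration — the real Trove API uses its own WSGI routing, not this:

```python
import re

# Fake per-datastore parameter catalogue; the contents are made up.
PARAMETERS = {
    "mysql": ["max_connections", "innodb_buffer_pool_size"],
    "redis": ["maxmemory", "maxmemory-policy"],
}

def handle(path):
    """Resolve the two URIs from option FOUR of the thread."""
    m = re.fullmatch(r"/datastores/(\w+)", path)
    if m:
        ds = m.group(1)
        # The datastore resource carries an href to its parameter list,
        # so clients discover the second URI instead of hard-coding it.
        return {"datastore": ds,
                "links": [{"rel": "configuration-parameters",
                           "href": "/configurations/parameters/%s" % ds}]}
    m = re.fullmatch(r"/configurations/parameters/(\w+)", path)
    if m:
        return {"configuration-parameters": PARAMETERS.get(m.group(1), [])}
    return None

# A client looks up the datastore first, then follows the href.
ds = handle("/datastores/redis")
params = handle(ds["links"][0]["href"])
print(params)  # {'configuration-parameters': ['maxmemory', 'maxmemory-policy']}
```

This illustrates Robert's point: the allowed parameters are discoverable before any configuration object exists, which option FIVE would not permit.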
[openstack-dev] [Trove] Templates in Trove
I'm pulling this conversation out of the gerrit review as I think it needs more discussion.

https://review.openstack.org/#/c/53499/

I want to discuss the design decision not to use Jinja templates for the heat templates. My arguments for using Jinja for heat as well are:

1. We would otherwise have to rewrite all the template loading logic.
   The current implementation is pretty simple, but to make it
   production-worthy it will need to handle many more edge cases as we
   develop this feature. The main argument I have heard against using
   the existing environment is that the path is hard-coded (this can
   and should be turned into a config flag).
2. We are already using Jinja templates for config files, so it will be
   less confusing for a new person starting out. Why would these custom
   templates go here, but those over there? Having one place to
   override defaults makes sense.
3. Looking at the current heat templates, I can easily see areas that
   could take advantage of being a real Jinja template: an admin could
   create a base template and extend it for each different service,
   changing just a few values in each.
4. The default templates could be packaged with trove (using the Jinja
   PackageLoader) so the initial setup works out of the box.

If we go this route, it would also be a good time to discuss the organization of the templates. Currently the templates are just in:

- trove/templates/{data_store}.config.template
- trove/templates/{data_store}.heat.template

I suggest that we move this into a folder structure like so:

- trove/templates/{data_store}/config.template
- trove/templates/{data_store}/heat.template
- trove/templates/{data_store}/the_next.template

Thanks!
Robert
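Robert's point 4 — packaged defaults that an operator can override from one well-known location — could be sketched as a simple two-step lookup. The directory names and the hand-rolled resolver below are assumptions for illustration; in practice Trove would use Jinja's FileSystemLoader/PackageLoader (or a ChoiceLoader chaining them) rather than this:

```python
import os
import tempfile

def _write(path, text):
    with open(path, "w") as f:
        f.write(text)

def resolve_template(data_store, name, override_dir, default_dir):
    """Return the template path: the operator's override if present,
    otherwise the packaged default ('one place to override defaults')."""
    for base in (override_dir, default_dir):
        candidate = os.path.join(base, data_store, name)
        if os.path.exists(candidate):
            return candidate
    raise LookupError("no %s template for datastore %r" % (name, data_store))

# Demo: a packaged default plus an operator override for mysql.
defaults = tempfile.mkdtemp()
overrides = tempfile.mkdtemp()
os.makedirs(os.path.join(defaults, "mysql"))
_write(os.path.join(defaults, "mysql", "heat.template"), "default heat")
os.makedirs(os.path.join(overrides, "mysql"))
_write(os.path.join(overrides, "mysql", "heat.template"), "custom heat")

path = resolve_template("mysql", "heat.template", overrides, defaults)
print(open(path).read())  # custom heat -- the operator's override wins
```

The override directory would be the configurable path Robert mentions in point 1; only when it has no matching file does the lookup fall back to the template shipped with trove.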
Re: [openstack-dev] [Trove] Templates in Trove
So I guess the only contention point is whether to store the templates by type or by datastore. I don't see the use case where you'd have completely different paths for templates, so there is really no need for two separate template paths. My idea is to group the templates by data_store, because as we add more data_stores a flat file structure will get harder to manage.

So either:

- templates/{data_store}/config
- templates/{data_store}/heat

or:

- templates/config/{data_store}.config
- templates/heat/{data_store}.heat

During lookup, the templates are resolved as either:

    config_template = '%s/config.template' % service_type
    heat_template = '%s/heat.template' % service_type

or:

    config_template = 'config/%s.config.template' % service_type
    heat_template = 'heat/%s.heat.template' % service_type

My preference is to group by data_store type, but I'm curious what others think.

Robert

On Tue, Oct 29, 2013 at 10:15 AM, Denis Makogon dmako...@mirantis.com wrote:

> Robert, I also have thoughts about templates. Your suggestion is
> rather complex. Let me explain why: with every new datastore you would
> have to update the PackageLoader and FilesystemLoader with a new
> filesystem path and package path. I would prefer an easier
> configuration and would store the templates this way:
>
> - templates/configuration/{datastore}.config.template
> - templates/heat/{datastore}.heat.template
>
> Heat templates will stay static unless Trove becomes super-complex in
> instance configuration, like Savanna (Hadoop on OpenStack). As for
> Jinja: OK, I agree to use it, but (!!!) we should not use it for heat
> template rendering, because those templates are static. Trove's
> instance configuration is not so complex that it needs to
> generate/modify heat templates on the fly.
>
> Please take a look at this one: https://review.openstack.org/#/c/54315/
Re: [openstack-dev] [Trove] Templates in Trove
I hear you, Clint. I'm not an expert on heat templates, so it is possible to do this all with one. However, I don't want to use Jinja to replace the heat template logic — just for sane template loading. We are already using Jinja templates to load custom config files, so it makes sense to re-use the same loading mechanism to allow administrators to add their own custom heat templates in a well-known location. I don't want to re-invent the wheel either.

Robert

On Tue, Oct 29, 2013 at 12:24 PM, Clint Byrum cl...@fewbar.com wrote:

> Excerpts from Robert Myers's message of 2013-10-29 07:54:59 -0700:
> > I'm pulling this conversation out of the gerrit review as I think it
> > needs more discussion.
> >
> > https://review.openstack.org/#/c/53499/
>
> After reading the comments in that review, it seems to me that you
> don't need a client-side template for your Heat template. The only
> argument for templating is "if I want some things to be custom, I
> can't have them custom." You may not realize this, but Heat templates
> already have basic string-replacement facilities and mappings, which
> is _all_ you need here. Use parameters. Pass _EVERYTHING_ into the
> stacks you're creating as parameters. Then let admins customize using
> Heat, not _another_ language.
>
> For instance, somebody brought up wanting to have UserData be
> customizable. It is like this now:
>
>     UserData:
>       Fn::Base64:
>         Fn::Join:
>         - ''
>         - ["#!/bin/bash -v\n",
>            "/opt/aws/bin/cfn-init\n",
>            "sudo service trove-guest start\n"]
>
> Since you're using yaml, you don't have to use Fn::Join like in json,
> so simplify to this first:
>
>     UserData:
>       Fn::Base64: |
>         #!/bin/bash -v
>         /opt/aws/bin/cfn-init
>         sudo service trove-guest start
>
> Now, the suggestion was that users might want to do a different prep
> per service_type. First, we need to make service_type a parameter:
>
>     Parameters:
>       service_type:
>         Type: String
>         Default: mysql
>
> Now we need to shove it in where needed:
>
>     Metadata:
>       AWS::CloudFormation::Init:
>         config:
>           files:
>             /etc/guest_info:
>               content:
>                 Fn::Join:
>                 - ''
>                 - ["[DEFAULT]\nguest_id=", {Ref: InstanceId},
>                    "\nservice_type=", {Ref: service_type}]
>               mode: '000644'
>               owner: root
>               group: root
>
> Now, if a user wants to have a different script:
>
>     Mappings:
>       ServiceToScript:
>         mysql:
>           script: |
>             #!/bin/bash -v
>             /opt/aws/bin/cfn-init
>             sudo service trove-guest start
>         galera:
>           script: |
>             #!/bin/bash
>             /opt/aws/bin/cfn-init galera-super-thingy
>             sudo service trove-guest start
>
> And then they replace the UserData as such:
>
>     UserData:
>       Fn::FindInMap:
>       - ServiceToScript
>       - {Ref: service_type}
>       - script
>
> Please can we at least _try_ not to reinvent things!
Re: [openstack-dev] [Trove] Proposal to add Craig Vyvial to trove-core
+1

On Tue, May 6, 2014 at 9:38 AM, Michael Basnight mbasni...@gmail.com wrote:

> On May 6, 2014, at 2:31 AM, Nikhil Manchanda nik...@manchanda.me wrote:
>
> > Hello folks:
> >
> > I'm proposing to add Craig Vyvial (cp16net) to trove-core.
> >
> > Craig has been working with Trove for a while now. He has been a
> > consistently active reviewer, and has provided insightful comments
> > on numerous reviews. He has submitted quality code for multiple
> > features in Trove, and most recently drove the implementation of
> > configuration groups in Icehouse.
> >
> > https://review.openstack.org/#/q/reviewer:%22Craig+Vyvial%22,n,z
> > https://review.openstack.org/#/q/owner:%22Craig+Vyvial%22,n,z
> >
> > Please respond with +1/-1, or any further comments.
> >
> > Thanks,
> > Nikhil
>
> Yes plz. +1.