Re: [openstack-dev] [Trove] Proposal to add Craig Vyvial to trove-core

2014-05-06 Thread Daniel Morris
+1


On 5/6/14, 4:31 AM, Nikhil Manchanda nik...@manchanda.me wrote:


Hello folks:

I'm proposing to add Craig Vyvial (cp16net) to trove-core.

Craig has been working with Trove for a while now. He has been a
consistently active reviewer, and has provided insightful comments on
numerous reviews. He has submitted quality code to multiple features in
Trove, and most recently drove the implementation of configuration
groups in Icehouse.

https://review.openstack.org/#/q/reviewer:%22Craig+Vyvial%22,n,z
https://review.openstack.org/#/q/owner:%22Craig+Vyvial%22,n,z

Please respond with +1/-1, or any further comments.

Thanks,
Nikhil

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




Re: [openstack-dev] [trove] Proposal to add Auston McReynolds to trove-core

2013-12-30 Thread Daniel Morris
+1

From: Michael Basnight mbasni...@gmail.com
Reply-To: OpenStack Development Mailing List (not for usage questions) openstack-dev@lists.openstack.org
Date: Friday, December 27, 2013 4:48 PM
To: OpenStack Development Mailing List openstack-dev@lists.openstack.org
Subject: [openstack-dev] [trove] Proposal to add Auston McReynolds to trove-core

Howdy,

I'm proposing Auston McReynolds (amcrn) to trove-core.

Auston has been working with trove for a while now. He is a great reviewer. He 
is incredibly thorough, and has caught more than one critical error in 
reviews; he also helps connect large features that may overlap (config edits + 
multi-datastores comes to mind). The code he submits is top notch, and we 
frequently ask for his opinion on architecture / feature / design.

https://review.openstack.org/#/dashboard/8214
https://review.openstack.org/#/q/owner:8214,n,z
https://review.openstack.org/#/q/reviewer:8214,n,z

Please respond with +1/-1, or any further comments.


Re: [openstack-dev] [trove] Delivering datastore logs to customers

2013-12-26 Thread Daniel Morris

From: Vipul Sabhaya vip...@gmail.com
Reply-To: OpenStack Development Mailing List (not for usage questions) openstack-dev@lists.openstack.org
Date: Tuesday, December 24, 2013 3:42 PM
To: OpenStack Development Mailing List (not for usage questions) openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [trove] Delivering datastore logs to customers




On Mon, Dec 23, 2013 at 8:59 AM, Daniel Morris daniel.mor...@rackspace.com wrote:
Vipul,

I know we discussed this briefly in the Wednesday meeting but I still have a 
few questions.  I am not bought into the idea that we do not need to maintain 
the records of saved logs.  I agree that we do not need to enable users to 
download and manipulate the logs themselves via Trove (that can be left to 
Swift), but at a minimum, I believe that the system will still need to maintain 
a mapping of where the logs are stored in Swift.  This is a simple addition to 
the list of available logs per datastore (an additional field for its Swift 
location; if a location exists, you know the log has been saved).  If we do not 
do this, how does the user know where to find the logs they have saved, or 
whether they even exist in Swift, without searching manually?  It may be that 
this is covered, but I don't see it represented in the BP.  Is the assumption 
that it is some known path?  I would expect to see the Swift location returned 
on a GET of the available log types for a specific instance (there is 
currently only a top-level GET for logs available per datastore type).

The Swift location can be returned in the response to the POST/‘save’ 
operation.  We may consider returning a top-level immutable resource (like 
‘flavors’) that, when queried, could return the base path for logs in Swift.
As long as we have a way to programmatically obtain and build the base path to 
the logs on a per instance basis, that should be fine.

Logs are not meaningful to Trove, since you can’t act on them or perform other 
meaningful Trove operations on them.  Thus I don’t believe they qualify as a 
resource in Trove.  Multiple ‘save’ operations should not result in a replace 
of the previous logs, it should just add to what may already be there in Swift.

I am also assuming in this case, and per the BP, that the user does not have 
the ability to select the storage location in Swift, and that this is controlled 
exclusively by the deployer.  I also assume that you would only allow one 
occurrence of the log per datastore / instance, and that the behavior of 
writing a log more than once to the same location is that it will overwrite / 
append, but this is not detailed in the BP.

The location should be decided by Trove, not the user.  We’ll likely need to 
group them in Swift by InstanceID buckets.  I don’t believe we should do 
appends/overwrites - new Logs saved would just add to what may already exist.  
If the user chooses they don’t need the logs, they can perform the delete 
directly in Swift.
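
A minimal sketch of what such a deployer-controlled naming scheme could look like; the container prefix, function name, and timestamp format are illustrative assumptions, not from the blueprint:

```python
from datetime import datetime, timezone

def swift_log_location(instance_id, log_name, now=None):
    """Build a (container, object) pair for a saved log.

    The container groups logs by instance ID; the object name embeds a
    UTC timestamp so repeated 'save' operations add new objects instead
    of overwriting earlier ones, matching the no-overwrite behavior above.
    """
    now = now or datetime.now(timezone.utc)
    container = "trove-logs-%s" % instance_id
    obj = "%s/%s.log" % (log_name, now.strftime("%Y%m%dT%H%M%SZ"))
    return container, obj
```

Passing a fixed `now` makes the scheme easy to test; in production the current time would be used, and the resulting path returned in the ‘save’ response.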


Thanks,
Daniel
From: Vipul Sabhaya vip...@gmail.com
Reply-To: OpenStack Development Mailing List (not for usage questions) openstack-dev@lists.openstack.org
Date: Friday, December 20, 2013 2:14 AM
To: OpenStack Development Mailing List (not for usage questions) openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [trove] Delivering datastore logs to customers

Yep agreed, this is a great idea.

We really only need two API calls to get this going:
- List available logs to ‘save’
- Save a log (to swift)

Some additional points to consider:
- We don’t need to create a record of every Log ‘saved’ in Trove.  These 
entries, treated as a Trove resource, aren’t useful, since you don’t actually 
manipulate that resource.
- Deletes of Logs shouldn’t be part of the Trove API, if the user wants to 
delete them, just use Swift.
- A deployer should be able to choose which logs can be ‘saved’ by their users
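
The two calls (plus the deployer-side allow-list) could be sketched roughly like this; the class and parameter names are illustrative assumptions, not from the blueprint:

```python
class LogApi:
    """Sketch of the two proposed calls: list saveable logs, save a log."""

    def __init__(self, deployer_saveable_logs, swift_saver):
        # The deployer decides which logs users may 'save'.
        self._saveable = set(deployer_saveable_logs)
        # Callable(instance_id, log_name) -> Swift location string.
        self._save_to_swift = swift_saver

    def list_logs(self, instance_id):
        """GET: the logs a user is allowed to 'save' for an instance."""
        return sorted(self._saveable)

    def save_log(self, instance_id, log_name):
        """POST: push the named log to Swift and return its location.

        No Trove-side record is kept; the location in the response is
        the only handle the user gets, per the points above.
        """
        if log_name not in self._saveable:
            raise ValueError("log %r is not saveable" % log_name)
        return self._save_to_swift(instance_id, log_name)
```

Deletes are deliberately absent: per the list above, users would delete objects directly in Swift.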


On Wed, Dec 18, 2013 at 2:02 PM, Michael Basnight mbasni...@gmail.com wrote:
I think this is a good idea and I support it. In today's meeting [1] there were 
some questions, and I encourage them to get brought up here. My only question 
is in regard to the "tail of a file" we discussed in IRC. After talking about 
it w/ other trovesters, I think it doesn't make sense to tail the log for most 
datastores. I can't imagine finding anything useful in, say, a Java 
application's last 100 lines (especially if a stack trace was present). But I 
don't want to derail, so let's try to focus on the "deliver to Swift first" option.

[1] 
http://eavesdrop.openstack.org/meetings/trove/2013/trove.2013-12-18-18.13.log.txt

On Wed, Dec 18, 2013 at 5:24 AM, Denis Makogon dmako...@mirantis.com wrote:

Greetings, OpenStack DBaaS

Re: [openstack-dev] [trove] Delivering datastore logs to customers

2013-12-23 Thread Daniel Morris
Vipul,

I know we discussed this briefly in the Wednesday meeting but I still have a 
few questions.  I am not bought into the idea that we do not need to maintain 
the records of saved logs.  I agree that we do not need to enable users to 
download and manipulate the logs themselves via Trove (that can be left to 
Swift), but at a minimum, I believe that the system will still need to maintain 
a mapping of where the logs are stored in Swift.  This is a simple addition to 
the list of available logs per datastore (an additional field for its Swift 
location; if a location exists, you know the log has been saved).  If we do not 
do this, how does the user know where to find the logs they have saved, or 
whether they even exist in Swift, without searching manually?  It may be that 
this is covered, but I don't see it represented in the BP.  Is the assumption 
that it is some known path?  I would expect to see the Swift location returned 
on a GET of the available log types for a specific instance (there is 
currently only a top-level GET for logs available per datastore type).

I am also assuming in this case, and per the BP, that the user does not have 
the ability to select the storage location in Swift, and that this is controlled 
exclusively by the deployer.  I also assume that you would only allow one 
occurrence of the log per datastore / instance, and that the behavior of 
writing a log more than once to the same location is that it will overwrite / 
append, but this is not detailed in the BP.

Thanks,
Daniel
From: Vipul Sabhaya vip...@gmail.com
Reply-To: OpenStack Development Mailing List (not for usage questions) openstack-dev@lists.openstack.org
Date: Friday, December 20, 2013 2:14 AM
To: OpenStack Development Mailing List (not for usage questions) openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [trove] Delivering datastore logs to customers

Yep agreed, this is a great idea.

We really only need two API calls to get this going:
- List available logs to ‘save’
- Save a log (to swift)

Some additional points to consider:
- We don’t need to create a record of every Log ‘saved’ in Trove.  These 
entries, treated as a Trove resource, aren’t useful, since you don’t actually 
manipulate that resource.
- Deletes of Logs shouldn’t be part of the Trove API, if the user wants to 
delete them, just use Swift.
- A deployer should be able to choose which logs can be ‘saved’ by their users


On Wed, Dec 18, 2013 at 2:02 PM, Michael Basnight mbasni...@gmail.com wrote:
I think this is a good idea and I support it. In today's meeting [1] there were 
some questions, and I encourage them to get brought up here. My only question 
is in regard to the "tail of a file" we discussed in IRC. After talking about 
it w/ other trovesters, I think it doesn't make sense to tail the log for most 
datastores. I can't imagine finding anything useful in, say, a Java 
application's last 100 lines (especially if a stack trace was present). But I 
don't want to derail, so let's try to focus on the "deliver to Swift first" option.

[1] 
http://eavesdrop.openstack.org/meetings/trove/2013/trove.2013-12-18-18.13.log.txt

On Wed, Dec 18, 2013 at 5:24 AM, Denis Makogon dmako...@mirantis.com wrote:

Greetings, OpenStack DBaaS community.


I'd like to start discussion around a new feature in Trove. The feature I 
would like to propose covers manipulating database log files.


Main idea: give the user the ability to retrieve database log files for any 
purpose.

Goals to achieve: suppose we have a binary application (without source code) 
which requires a DB connection to perform data manipulations, and a user who 
would like to develop and debug that application; logs would also be useful 
for the audit process. Trove itself provides access only to CRUD operations 
inside the database, so the user cannot access the instance directly to 
analyze its log files. Therefore, Trove should provide a way for a user to 
download the database logs for analysis.


Log manipulations are designed to let the user perform log investigations. 
Since Trove is a PaaS-level project, its users cannot interact with the 
compute instance directly, only with the database through the provided API 
(database operations).

I would like to propose the following API operations:

  1.  Create DBLog entries.

  2.  Delete DBLog entries.

  3.  List DBLog entries.

Possible API, models, server, and guest configurations are described on the 
wiki page [1].

[1] https://wiki.openstack.org/wiki/TroveDBInstanceLogOperation
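
The three operations above could be sketched in-memory as follows; this is purely illustrative (the class, field names, and storage are assumptions), and the actual models live on the wiki page [1]:

```python
import uuid

class DBLogRegistry:
    """In-memory sketch of the proposed Create/Delete/List DBLog operations."""

    def __init__(self):
        self._entries = {}  # entry_id -> DBLog entry dict

    def create(self, instance_id, log_name, swift_location):
        """Record that a log for an instance was saved at a Swift location."""
        entry_id = str(uuid.uuid4())
        self._entries[entry_id] = {
            "id": entry_id,
            "instance_id": instance_id,
            "log_name": log_name,
            "location": swift_location,
        }
        return entry_id

    def list(self, instance_id):
        """Return all DBLog entries recorded for an instance."""
        return [e for e in self._entries.values()
                if e["instance_id"] == instance_id]

    def delete(self, entry_id):
        """Remove a DBLog entry (the Swift object itself is untouched)."""
        self._entries.pop(entry_id)
```

Note that the replies above argue against keeping such Trove-side records at all; this sketch only illustrates the original proposal.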





--
Michael Basnight


Re: [openstack-dev] [trove] configuration groups and datastores type/versions

2013-12-13 Thread Daniel Morris
Good point...

In this case however, couldn't you solve this by simply allowing the user
to specify a list of multiple IDs for both the datastore IDs and
datastore-version IDs?  That way the user can directly control which
configurations apply to different types and versions (choosing to apply
0, 1, or many).  I am not sure how the provider would be able to directly
manage those on behalf of the user, as they would not know which options
actually apply across the different types and versions (unless that too
was maintained).  I could be misunderstanding your proposal though.

Daniel







On 12/12/13 6:02 PM, McReynolds, Auston amcreyno...@ebay.com wrote:

Another Example:

  Datastore Type | Version
  ------------------------
  MySQL 5.5      | 5.5.35
  MySQL 5.5      | 5.5.20
  MySQL 5.6      | 5.6.15

A user creates a MySQL 5.5 configuration-group that merely consists
of an innodb_buffer_pool_size override. The innodb_buffer_pool_size
parameter is still featured in MySQL 5.6, so arguably the
configuration-group should work with MySQL 5.6 as well. If a
configuration-group can only be tied to a single datastore type
and/or a single datastore-version, this will not work.

To support all possible permutations, a compatibility list of sorts
has to be introduced.

Table: configuration_datastore_compatibility

  Name            | Description
  --------------------------------------------------
  id              | PrimaryKey, Generated UUID
  from_version_id | ForeignKey(datastore_version.id)
  to_version_id   | ForeignKey(datastore_version.id)

The cloud provider can then be responsible for updating the
compatibility table (via trove-manage) whenever a new version of a
datastore is introduced and has a strict superset of configuration
parameters as compared to previous versions.
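
The compatibility lookup could be sketched as follows; the function name and the tuple representation of the table are assumptions, not part of the proposal:

```python
def is_compatible(config_version_id, target_version_id, compat_pairs):
    """Decide whether a configuration-group built for one datastore
    version may be applied to another.

    compat_pairs mirrors the proposed configuration_datastore_compatibility
    table as (from_version_id, to_version_id) tuples, maintained by the
    deployer (e.g. via trove-manage). Compatibility is followed
    transitively, so 5.5.20 -> 5.5.35 -> 5.6.15 implies 5.5.20 -> 5.6.15.
    """
    if config_version_id == target_version_id:
        return True
    # Breadth-first walk over the deployer-maintained edges.
    frontier, seen = {config_version_id}, set()
    while frontier:
        current = frontier.pop()
        seen.add(current)
        for src, dst in compat_pairs:
            if src == current:
                if dst == target_version_id:
                    return True
                if dst not in seen:
                    frontier.add(dst)
    return False
```

The transitive walk means the deployer only has to record each adjacent-version superset relationship once.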

On a related note, it would probably behoove us to consider how to
handle datastore migrations in relation to configuration-groups.
A rough-draft blueprint/gist for datastore migrations is located at
https://gist.github.com/amcrn/dfd493200fcdfdb61a23.


Auston

---

From:  Craig Vyvial cp16...@gmail.com
Reply-To:  OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Date:  Wednesday, December 11, 2013 8:52 AM
To:  OpenStack Development Mailing List
openstack-dev@lists.openstack.org
Subject:  [openstack-dev] [trove] configuration groups and
datastores type/versions


Configuration Groups is currently developed to associate the datastore
version with a configuration that is created. If a datastore version is
not provided it will use the default, similar to the way instances are
created now. This looks like a way of associating the configuration with
a datastore, because an instance has this same association.

Depending on how you set up your datastore types and versions this might
not be ideal.
Example:
Datastore Type | Version
------------------------
Mysql          | 5.1
Mysql          | 5.5
Percona        | 5.5
------------------------

Configuration      | datastore_version
--------------------------------------
mysql-5.5-config   | mysql 5.5
percona-5.5-config | percona 5.5
--------------------------------------

or 

Datastore Type | Version
------------------------
Mysql 5.1      | 5.1.12
Mysql 5.1      | 5.1.13
Mysql          | 5.5.32
Percona        | 5.5.44
------------------------

Configuration      | datastore_version
--------------------------------------
mysql-5.1-config   | mysql 5.5
percona-5.5-config | percona 5.5
--------------------------------------



Notice that if you associate the configuration with a datastore version,
then in the latter example you will not be able to use the same
configurations across different minor versions of the datastore.

Something that we should consider is allowing a configuration to be
associated with just a datastore type (e.g. Mysql 5.1) so that any
version of 5.1 allows the same configuration to be applied.
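
That type-level association could be sketched as follows; the function and field names are illustrative assumptions only:

```python
def configurations_for(datastore_type, version, configs):
    """Return configuration names applicable to a datastore type/version.

    configs is a list of dicts with 'name', 'datastore_type', and an
    optional 'datastore_version'; a config with no version pinned is
    assumed to apply to every version of its type, which is the
    additive behavior suggested above.
    """
    matches = []
    for cfg in configs:
        if cfg["datastore_type"] != datastore_type:
            continue
        pinned = cfg.get("datastore_version")
        if pinned is None or pinned == version:
            matches.append(cfg["name"])
    return matches
```

Under this scheme a config tied only to "Mysql 5.1" would match both 5.1.12 and 5.1.13, while a version-pinned config stays restricted to that exact version.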

I do not view this as a change that needs to happen before the current
code is merged but more as an additive feature of configurations.


*snippet from Morris and me talking about this*
Given the nature of how the datastore / types code has been implemented, in
that it is highly configurable, I believe that we need to adjust the
way in which we are associating configuration groups with datastore types
and versions.  The main use case that I am considering here is that as a
user of the API, I want to be able to associate configurations with a
specific datastore type so that I can easily return a list of the
configurations that are valid for that database type (Example: Get me a
list of configurations for MySQL 5.6).  We know that configurations will
vary across types (MySQL vs. Redis) as well as across major versions
(MySQL 5.1 vs MySQL 5.6).  Presently,