[openstack-dev] [Cue] PTL Candidacy

2015-09-16 Thread Vipul Sabhaya
Hello,

This will be the first official PTL election for Cue, and I would be
honored to serve another term leading the project for the M Cycle.

Cue is a relatively new project in OpenStack, and was approved into the Big
Tent during Liberty.  I have been involved with Cue from the beginning,
from the initial POC to having a product that is production worthy.  In a
past life, I was a member of the Trove core team.

During the Liberty Cycle, Cue has become a solid product with a control
plane that can manage per-tenant RabbitMQ Clusters.  We spent a lot of time
beefing up our tests, and boast >90% unit test coverage.  We also added
Tempest tests, and Rally tests, including gating jobs for both.  Our
documentation has also been revamped considerably, and allows new
contributors to ramp up quickly with the project.

During the M cycle, I would like to focus on building a community and
getting additional contributors to Cue.  I would also like to focus the
team on multi-broker support, including adding Kafka as a broker that is
managed by Cue.

I believe Cue has come a long way in a short time, and going forward we have
the opportunity to accelerate its growth in terms of features, quality, and
adoption.

Thanks for your consideration!
-Vipul
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][TC] 'team:danger-not-diverse tag' and my concerns

2015-09-11 Thread Vipul Sabhaya
Thanks for starting this thread Josh.

On Fri, Sep 11, 2015 at 12:26 PM, Joshua Harlow wrote:

> Hi all,
>
> I was reading over the TC IRC logs for this week (my weekly reading) and I
> just wanted to let my thoughts and comments be known on:
>
>
> http://eavesdrop.openstack.org/meetings/tc/2015/tc.2015-09-08-20.01.log.html#l-309
>
> I feel it's very important to send a positive note for new/upcoming
> projects and libraries... (and for everyone to remember that most projects
> do start off with a small set of backers). So I just wanted to try to
> ensure that we send a positive note with any tag like this that gets
> created and applied, and that we all (especially the TC) really
> consider the negative connotations of applying that tag to a project (it
> may effectively ~kill~ that project).
>
Completely agree.  Projects that don’t automatically fit into the
‘starter-kit’ type of tag (e.g. Cue) are going to take longer to really
build a community.  It doesn’t mean that the project isn’t active, or that
the team is not willing to fix bugs, or that operators should be afraid to
run it.


> I would really appreciate that instead of just applying this tag (or other
> similarly named tag to projects) that instead the TC try to actually help
> out projects with those potential tags in the first place (say perhaps by
> actively listing projects that may need more contributors from a variety of
> companies on the openstack blog under say a 'HELP WANTED' page or
> something). I'd much rather have that vs. any said tags, because the latter
> actually tries to help projects, vs just stamping them with a 'you are bad,
> figure out how to fix yourself, because you are not diverse' tag.
>
>
+1.  If the TC can play a role in helping projects build their community, a
lot more of the smaller projects would be much more successful.


> I believe it is the TC job (in part) to help make the community better,
> and not via tags like this that IMHO actually make it worse; I really hope
> that folks on the TC can look back at their own projects they may have
> created and ask how would their own project have turned out if they were
> stamped with a similar tag...
>
> - Josh
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tc][Cue] Add Cue to OpenStack

2015-06-12 Thread Vipul Sabhaya
Hello OpenStack TC and stackers!

We’ve submitted a patch to add Cue to the OpenStack project list.

https://review.openstack.org/#/c/191173/

For those not familiar, Cue is a Message Broker Provisioning and Lifecycle
Management service for OpenStack.  We’ve focused initially on RabbitMQ, and
are starting to look into adding Kafka clusters.

We’ve already made lots of progress:
- v1 API for cluster management
- DevStack integration
- Tempest tests and gate
- Rally scenarios and gate
- Extensive developer and deployment documentation [1]
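To make the provisioning model above concrete, here is a hypothetical sketch
of assembling a v1 cluster-create request body in Python.  The field names
(`name`, `flavor`, `size`, `network_id`) and the payload nesting are
assumptions for illustration only; the authoritative schema is in Cue's
documentation linked below.

```python
import json


def build_cluster_request(name, flavor, size, network_id):
    """Assemble a hypothetical v1 cluster-create request body.

    All field names here are illustrative assumptions, not taken from
    Cue's actual API schema.
    """
    if size < 1:
        raise ValueError("a cluster needs at least one broker node")
    return {
        "cluster": {
            "name": name,              # display name for the tenant's cluster
            "flavor": flavor,          # compute flavor for each broker node
            "size": size,              # number of RabbitMQ nodes in the cluster
            "network_id": network_id,  # tenant network to attach brokers to
        }
    }


# Example: a 3-node RabbitMQ cluster on a (made-up) tenant network.
body = build_cluster_request("orders-broker", "m1.small", 3,
                             "11111111-2222-3333-4444-555555555555")
print(json.dumps(body, indent=2))
```

The point of a control plane like this is that the tenant describes the
desired cluster declaratively and Cue handles node provisioning and
lifecycle behind the scenes.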

Please reach out if there are any questions.  You can also find us on
#openstack-cue.

Thanks!
-Vipul

[1]: http://cue.readthedocs.org/en/latest/
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Stepping Down from Trove Core

2015-05-25 Thread Vipul Sabhaya
Having not been very active in Trove for the past few months, it’s time to
step down from the core team.

I’ll be focusing primarily on Cue going forward, getting it included into
the Big Tent, and making it production worthy.

From RedDwarf to Trove to Onwards and Upwards!

-Vipul
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Cue] contributors meetup

2015-05-22 Thread Vipul Sabhaya
The Cue team will be crashing the Designate contributors meetup Friday at
1:20pm, in room 214.

Come find us there!

-vipul
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Cue] Design Session

2015-05-21 Thread Vipul Sabhaya
Hi!

Cue is holding a design session to talk about some of the priorities for
Liberty, Thursday from 3:20pm to 4:00pm at the couches outside of room 220.

https://etherpad.openstack.org/p/liberty-cue-design

Come and join us to learn more!
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Zaqar] Call for adoption (or exclusion?)

2015-04-20 Thread Vipul Sabhaya
On Mon, Apr 20, 2015 at 12:07 PM, Fox, Kevin M kevin@pnnl.gov wrote:

 Another parallel is Manila vs Swift. Both provide something like a share
 for users to store files.

 The former is a multitenant API to provision non-multitenant file shares.
 The latter is a multitenant API to provide file sharing.

 Cue is a multitenant API to provision non-multitenant queues.
 Zaqar is an API for a multitenant queueing system.

 They are complementary services.


Agreed, it’s not an either/or; there is room for both.  While Cue could
provision Zaqar, it doesn’t make sense to, since Zaqar is already
multi-tenant.  As has been said, Cue’s goal is to bring non-multi-tenant
message brokers to the cloud.

On the question of adoption, what confuses me is why the measure of a
project’s success is whether other OpenStack services are integrating with
it.  Zaqar exposes an API that seems best suited for application workloads
running on an OpenStack cloud.  The question should be raised to operators
as to what’s preventing them from running Zaqar in their public cloud,
distro, or whatever.

Looking at other services that we consider to be successful, such as Trove,
we did not attempt to integrate with other OpenStack projects.  Rather, we
solved the concerns that operators had.



 Thanks,
 Kevin
 
 From: Ryan Brown [rybr...@redhat.com]
 Sent: Monday, April 20, 2015 11:38 AM
 To: openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [Zaqar] Call for adoption (or exclusion?)

 On 04/20/2015 02:22 PM, Michael Krotscheck wrote:
  What's the difference between openstack/zaqar and stackforge/cue?
  Looking at the projects, it seems like zaqar is a ground-up
  implementation of a queueing system, while cue is a provisioning api for
  queuing systems that could include zaqar, but could also include rabbit,
  zmq, etc...
 
  If my understanding of the projects is correct, the latter is far more
  versatile, and more in line with similar openstack approaches like
  trove. Is there a use case nuance I'm not aware of that warrants
  duplicating efforts? Because if not, one of the two should be retired
  and development focused on the other.
 
  Note: I do not have a horse in this race. I just feel it's strange that
  we're building a thing that can be provisioned by the other thing.
 

 Well, with Trove you can provision databases, but the MagnetoDB project
 still provides functionality that trove won't.


 The Trove : MagnetoDB and Cue : Zaqar comparison fits well.

 Trove provisions one instance of X (some database) per tenant, where
 MagnetoDB is one instance (collection of hosts to do database things)
 that serves many tenants.

 Cue's goal is "I have a not-very-multitenant message bus (rabbit, or
 whatever)", and it makes that multitenant by provisioning one per tenant,
 while Zaqar has a single install (of as many machines as needed) to
 support messaging for all cloud tenants. This enables great stuff like
 cross-tenant messaging, better physical resource utilization in
 sparse-tenant cases, etc.

 As someone who wants to adopt Zaqar, I'd really like to see it continue
 as a project because it provides things other message broker approaches
 don't.

 --
 Ryan Brown / Software Engineer, Openstack / Red Hat, Inc.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] how to send messages (and events) to our users

2015-04-08 Thread Vipul Sabhaya
On Wed, Apr 8, 2015 at 4:45 PM, Min Pae sputni...@gmail.com wrote:



  an under-the-cloud service? - That is not what I am after here.

  I think the thread went off on a tangent and this point got lost.  A
  user-facing notification system absolutely should use a web-centric
  protocol, as I imagine one of the big consumers of such a system will be
  monitoring dashboards, which are trending more and more toward rich
  client-side “Single Page Applications”.  AMQP would not work well in such
  cases.



 So is the yagi + atom hopper solution something we can point end-users to?
 Is it per-tenant etc...


  While I haven’t seen it yet, if that solution provides a means to expose
  the atom events to end users, it seems like a promising start.  The thing
  that’s required, though, is authentication/authorization that’s tied in to
  keystone, so that notifications regarding a tenant’s resources are
  available only to that tenant.


 Sandy, do you have a write up somewhere on how to set this up so I can
 experiment a bit?

 Maybe this needs to be a part of Cue?


 Sorry, Cue’s goal is to provision Message Queue/Broker services and manage
 them, just like Trove provisions and manages databases.  Cue would be
 ideally used to stand up and scale the RabbitMQ cluster providing messaging
 for an application backend, but it does not provide messaging itself (that
 would be Zaqar).



Agree — I don’t think a multi-tenant notification service (which we seem to
be after here) is the goal of Cue.

That said, Monasca https://wiki.openstack.org/wiki/Monasca seems to have
implemented the collection, aggregation, and notification of these events.
What may be missing in Monasca is a mechanism for the tenant to consume
these events via something other than AMQP.



 - Min



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Oslo][TaskFlow] Proposal for new core reviewer (min pae)

2015-03-23 Thread Vipul Sabhaya
+1

Congrats Min!

On Mon, Mar 23, 2015 at 10:40 AM, Joshua Harlow harlo...@outlook.com
wrote:

 Greetings all stackers,

 I propose that we add Min Pae[1] to the taskflow-core team[2].

 Min has been actively contributing to taskflow for a while now, both in
 helping prove taskflow out (by being a user via the project that he is
 using it in @ https://wiki.openstack.org/wiki/Cue) and helping with the
 review load when he can. He has provided quality reviews and is doing an
 awesome job with the various taskflow concepts and helping make taskflow
 the best library it can be!

 Overall I think he would make a great addition to the core review team.

 Please respond with +1/-1.

 Thanks much!

 --

 Joshua Harlow

 It's openstack, relax... | harlo...@yahoo-inc.com

 [1] https://launchpad.net/~sputnik13
 [2] https://wiki.openstack.org/wiki/TaskFlow/CoreTeam


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Trove] Core reviewer update

2015-02-05 Thread Vipul Sabhaya
+1 to all the nominations.  Many thanks to the departing cores for their 
contributions and bringing Trove to where it is today.


 On Feb 5, 2015, at 9:02 AM, Craig Vyvial cp16...@gmail.com wrote:
 
 +1 +1 +1
 I think these nominations will help grow the trove community. 
 
 -Craig
 
 On Thu, Feb 5, 2015 at 10:48 AM, Amrith Kumar amr...@tesora.com wrote:
 Nikhil,
 
  
 
 Regarding your nomination of Victoria, Peter and Edmond to core, here is my 
 vote (here are my votes).
 
  
 
 Victoria: +1
 
 Peter: +1
 
 Edmond: +1
 
  
 
 My thanks to all of you for your contributions to the project thus far, and 
 I look forward to working with all of you moving forward.
 
  
 
 Also, my sincere thanks to Michael (Bas) Basnight and Tim (grapex) Simpson. 
 It has been awesome working with both of you and look forward to working 
 together again!
 
  
 
 Thanks,
 
  
 
 -amrith
 
  
 
  
 
 --
 
  
 
 Amrith Kumar, CTO Tesora (www.tesora.com)
 
  
 
 Twitter: @amrithkumar 
 
 IRC: amrith @freenode
 
  
 
  
 
  
 
  
 
  
 
 From: Nikhil Manchanda [mailto:slick...@gmail.com] 
 Sent: Thursday, February 05, 2015 11:27 AM
 To: OpenStack Development Mailing List
 Subject: [openstack-dev] [Trove] Core reviewer update
 
  
 
 Hello Trove folks:
 
 Keeping in line with other OpenStack projects, and attempting to keep
 the momentum of reviews in Trove going, we need to keep our core-team up
 to date -- folks who are regularly doing good reviews on the code should
 be brought in to core and folks whose involvement is dropping off should
 be considered for removal since they lose context over time, not being
 as involved.
 
 For this update I'm proposing the following changes:
 - Adding Peter Stachowski (peterstac) to trove-core
 - Adding Victoria Martinez De La Cruz (vkmc) to trove-core
 - Adding Edmond Kotowski (edmondk) to trove-core
 - Removing Michael Basnight (hub_cap) from trove-core
 - Removing Tim Simpson (grapex) from trove-core
 
 For context on Trove reviews and who has been active, please see
 Russell's stats for Trove at:
 - http://russellbryant.net/openstack-stats/trove-reviewers-30.txt
 - http://russellbryant.net/openstack-stats/trove-reviewers-90.txt
 
 Trove-core members -- please reply with your vote on each of these
 proposed changes to the core team. Peter, Victoria and Eddie -- please
 let me know of your willingness to be in trove-core. Michael, and Tim --
 if you are planning on being substantially active on Trove in the near
 term, also please do let me know.
 
 Thanks,
 Nikhil
 
 
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Cue] project update

2014-11-26 Thread Vipul Sabhaya
Hello,

Thanks to those who I met personally at the Summit for your feedback on the
project.

For those that don’t know what Cue is, we’re building a Message Broker
Provisioning service for OpenStack.  More info can be found here:
https://wiki.openstack.org/wiki/Cue

Since the summit, we’ve been working full steam ahead on our v1 API.  We
are also now on Stackforge, leveraging OpenStack CI and the Gerrit review
process.

Come talk to us on #openstack-cue.

Useful Links:

V1 API - https://wiki.openstack.org/wiki/Cue/api
RTFD - http://cue.readthedocs.org/en/latest/
Code - https://github.com/stackforge/cue
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Introducing Project Cue

2014-11-06 Thread Vipul Sabhaya
We will be meeting folks that are interested in discussing Cue at *10 AM on
Friday at the Trove Pod* in the “Program Pods” section of the Design Summit.

Looking forward to seeing folks there!
-Vipul
HP

On Tue, Nov 4, 2014 at 10:26 AM, Vipul Sabhaya vip...@gmail.com wrote:

 Hello Everyone,

 I would like to introduce Cue, a new OpenStack project aimed at
 simplifying the application developer responsibilities by providing a
 managed service focused on provisioning and lifecycle management of
 message-oriented middleware services like RabbitMQ.

 Messaging is a common development pattern for building loosely coupled
 distributed systems. Provisioning and supporting Messaging Brokers for an
 individual application can be a time consuming and painful experience. This
 product aims to simplify the provisioning and management of message
 brokers, providing High Availability, management, and auto-healing
 capabilities to the end user, while providing tenant-level isolation.

 More details, including the scope of the project can be found here:
 https://wiki.openstack.org/wiki/Cue

 We’ve started writing code: https://github.com/vipulsabhaya/cue — the
 plan is to make it a Stackforge project in the coming weeks.

 I work for HP, and we’ve built a team within HP to build Cue.  I am in
 Paris for the Summit, and would appreciate feedback either on the mailing
 list or in person.

 If you are interested in helping build Cue, or have any questions/concerns
 around the project vision, I plan to host a meetup in the design summit
 area of the Le Meridien on *Friday morning*.  More details to come.

 Thanks!
 -Vipul Sabhaya
 HP





___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Introducing Project Cue

2014-11-04 Thread Vipul Sabhaya
Hello Everyone,

I would like to introduce Cue, a new OpenStack project aimed at simplifying
the application developer responsibilities by providing a managed service
focused on provisioning and lifecycle management of message-oriented
middleware services like RabbitMQ.

Messaging is a common development pattern for building loosely coupled
distributed systems. Provisioning and supporting Messaging Brokers for an
individual application can be a time consuming and painful experience. This
product aims to simplify the provisioning and management of message
brokers, providing High Availability, management, and auto-healing
capabilities to the end user, while providing tenant-level isolation.

More details, including the scope of the project can be found here:
https://wiki.openstack.org/wiki/Cue

We’ve started writing code: https://github.com/vipulsabhaya/cue — the plan
is to make it a Stackforge project in the coming weeks.

I work for HP, and we’ve built a team within HP to build Cue.  I am in
Paris for the Summit, and would appreciate feedback either on the mailing
list or in person.

If you are interested in helping build Cue, or have any questions/concerns
around the project vision, I plan to host a meetup in the design summit
area of the Le Meridien on *Friday morning*.  More details to come.

Thanks!
-Vipul Sabhaya
HP
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Trove] Proposal to add Iccha Sethi to trove-core

2014-10-30 Thread Vipul Sabhaya
+1


 On Oct 30, 2014, at 1:47 AM, Nikhil Manchanda nik...@manchanda.me wrote:
 
 Hello folks:
 
 I'm proposing to add Iccha Sethi (iccha on IRC) to trove-core.
 
 Iccha has been working with Trove for a while now. She has been a
 very active reviewer, and has provided insightful comments on
 numerous reviews. She has submitted quality code for multiple bug-fixes
 in Trove, and most recently drove the per datastore volume support BP in
 Juno. She was also a crucial part of the team that implemented
 replication in Juno, and helped close out multiple replication related
 issues during Juno-3.
 
 https://review.openstack.org/#/q/reviewer:iccha,n,z
 https://review.openstack.org/#/q/owner:iccha,n,z
 
 Please respond with +1/-1, or any further comments.
 
 Thanks,
 Nikhil
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Zaqar] Zaqar and SQS Properties of Distributed Queues

2014-09-19 Thread Vipul Sabhaya
On Fri, Sep 19, 2014 at 4:23 AM, Eoghan Glynn egl...@redhat.com wrote:



  Hi All,
 
  My understanding of Zaqar is that it's like SQS. SQS uses distributed
 queues,
  which have a few unusual properties [0]:
  Message Order
 
 
  Amazon SQS makes a best effort to preserve order in messages, but due to
 the
  distributed nature of the queue, we cannot guarantee you will receive
  messages in the exact order you sent them. If your system requires that
  order be preserved, we recommend you place sequencing information in each
  message so you can reorder the messages upon receipt.
  At-Least-Once Delivery
 
 
  Amazon SQS stores copies of your messages on multiple servers for
 redundancy
  and high availability. On rare occasions, one of the servers storing a
 copy
  of a message might be unavailable when you receive or delete the
 message. If
  that occurs, the copy of the message will not be deleted on that
 unavailable
  server, and you might get that message copy again when you receive
 messages.
  Because of this, you must design your application to be idempotent
 (i.e., it
  must not be adversely affected if it processes the same message more than
  once).
  Message Sample
 
 
  The behavior of retrieving messages from the queue depends whether you
 are
  using short (standard) polling, the default behavior, or long polling.
 For
  more information about long polling, see Amazon SQS Long Polling .
 
  With short polling, when you retrieve messages from the queue, Amazon SQS
  samples a subset of the servers (based on a weighted random distribution)
  and returns messages from just those servers. This means that a
 particular
  receive request might not return all your messages. Or, if you have a
 small
  number of messages in your queue (less than 1000), it means a particular
  request might not return any of your messages, whereas a subsequent
 request
  will. If you keep retrieving from your queues, Amazon SQS will sample
 all of
  the servers, and you will receive all of your messages.
 
  The following figure shows short polling behavior of messages being
 returned
  after one of your system components makes a receive request. Amazon SQS
  samples several of the servers (in gray) and returns the messages from
 those
  servers (Message A, C, D, and B). Message E is not returned to this
  particular request, but it would be returned to a subsequent request.
   Presumably SQS has these properties because they make the system
   scalable; if so, does Zaqar have the same properties (not just making
   the same guarantees in the API, but actually having these properties in
   the backends)? And if not, why? I looked on the wiki [1] for information
   on this, but couldn't find anything.

 The premise of this thread is flawed I think.

 It seems to be predicated on a direct quote from the public
 documentation of a closed-source system justifying some
 assumptions about the internal architecture and design goals
 of that closed-source system.

 It then proceeds to hold zaqar to account for not making
 the same choices as that closed-source system.

 This puts the zaqar folks in a no-win situation, as it's hard
 to refute such arguments when they have no visibility over
 the innards of that closed-source system.

 Sure, the assumption may well be correct that the designers
 of SQS made the choice to expose applications to out-of-order
  messages as this was the only practical way of achieving their
 scalability goals.

 But since the code isn't on github and the design discussions
 aren't publicly archived, we have no way of validating that.

 Would it be more reasonable to compare against a cloud-scale
 messaging system that folks may have more direct knowledge
 of?

 For example, is HP Cloud Messaging[1] rolled out in full
 production by now?


Unfortunately the HP Cloud Messaging service was decommissioned.


 Is it still cloning the original Marconi API, or has it kept
 up with the evolution of the API? Has the nature of this API
 been seen as the root cause of any scalability issues?


We created a RabbitMQ-backed implementation that aimed to be API-compatible
with Marconi.  This proved difficult given some of the API issues that have
been discussed on this very thread.  Our implementation could never be fully
API-compatible with Marconi (there really isn’t an easy way to map AMQP to
HTTP without losing serious functionality).

We also worked closely with the Marconi team, trying to get upstream to
support AMQP — the Marconi team also came to the same conclusion that their
API was not a good fit for such a backend.

Now we are looking at options.  One that intrigues us has also been
suggested on these threads: building a ‘managed messaging service’ that
could provision various messaging technologies (rabbit, kafka, etc.) and,
at the end of the day, hand off the protocol native to the messaging
technology to the end user.
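Whichever broker such a service hands out, the SQS properties quoted at the
top of this thread (at-least-once delivery, best-effort ordering) hold for
most distributed queues, and they translate into consumer-side obligations.
Here is a minimal sketch, not tied to any real queue API, of a consumer that
deduplicates by message ID and reorders by an application-supplied sequence
number; the `id` and `seq` field names are illustrative assumptions.

```python
import heapq


class ResilientConsumer:
    """Tolerates duplicate and out-of-order delivery.

    Messages are dicts carrying application-supplied 'id' and 'seq'
    fields; both names are illustrative, not part of any real broker API.
    """

    def __init__(self):
        self._seen = set()     # message IDs already accepted (dedup)
        self._pending = []     # min-heap of (seq, id) awaiting their turn
        self._next_seq = 0     # next sequence number to release

    def receive(self, message):
        """Accept one raw delivery; return IDs now safe to process, in order."""
        if message["id"] in self._seen:
            return []          # duplicate redelivery: drop it
        self._seen.add(message["id"])
        heapq.heappush(self._pending, (message["seq"], message["id"]))
        ready = []
        # Release every message whose predecessors have all arrived.
        while self._pending and self._pending[0][0] == self._next_seq:
            _seq, mid = heapq.heappop(self._pending)
            ready.append(mid)
            self._next_seq += 1
        return ready


c = ResilientConsumer()
print(c.receive({"id": "a", "seq": 0}))   # ['a']
print(c.receive({"id": "c", "seq": 2}))   # [] (waiting for seq 1)
print(c.receive({"id": "c", "seq": 2}))   # [] (duplicate, dropped)
print(c.receive({"id": "b", "seq": 1}))   # ['b', 'c']
```

This is exactly the "place sequencing information in each message" and
"design your application to be idempotent" advice from the SQS excerpt,
written out as code.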



 Cheers,
 Eoghan

 [1]
 

Re: [openstack-dev] [Trove] Proposal to add Amrith Kumar to trove-core

2014-08-26 Thread Vipul Sabhaya
+1


On Tue, Aug 26, 2014 at 11:43 AM, Robert Myers myer0...@gmail.com wrote:

 +1


 On Tue, Aug 26, 2014 at 8:54 AM, Tim Simpson tim.simp...@rackspace.com
 wrote:

 +1

 
 From: Sergey Gotliv [sgot...@redhat.com]
 Sent: Tuesday, August 26, 2014 8:11 AM
 To: openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [Trove] Proposal to add Amrith Kumar to
 trove-core

 Strong +1 from me!


  -Original Message-
  From: Nikhil Manchanda [mailto:nik...@manchanda.me]
  Sent: August-26-14 3:48 AM
  To: OpenStack Development Mailing List (not for usage questions)
  Subject: [openstack-dev] [Trove] Proposal to add Amrith Kumar to
 trove-core
 
  Hello folks:
 
  I'm proposing to add Amrith Kumar (amrith on IRC) to trove-core.
 
  Amrith has been working with Trove for a while now. He has been a
  consistently active reviewer, and has provided insightful comments on
  numerous reviews. He has submitted quality code for multiple bug-fixes
 in
  Trove, and most recently drove the audit and clean-up of log messages
 across
  all Trove components.
 
  https://review.openstack.org/#/q/reviewer:amrith,n,z
  https://review.openstack.org/#/q/owner:amrith,n,z
 
  Please respond with +1/-1, or any further comments.
 
  Thanks,
  Nikhil
 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Trove] Proposal to add Craig Vyvial to trove-core

2014-05-06 Thread Vipul Sabhaya
+1


On Tue, May 6, 2014 at 10:09 AM, McReynolds, Auston amcreyno...@ebay.com wrote:

 +1

 On 5/6/14, 2:31 AM, Nikhil Manchanda nik...@manchanda.me wrote:

 
 Hello folks:
 
 I'm proposing to add Craig Vyvial (cp16net) to trove-core.
 
 Craig has been working with Trove for a while now. He has been a
 consistently active reviewer, and has provided insightful comments on
 numerous reviews. He has submitted quality code to multiple features in
 Trove, and most recently drove the implementation of configuration
 groups in Icehouse.
 
 https://review.openstack.org/#/q/reviewer:%22Craig+Vyvial%22,n,z
 https://review.openstack.org/#/q/owner:%22Craig+Vyvial%22,n,z
 
 Please respond with +1/-1, or any further comments.
 
 Thanks,
 Nikhil
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Trove] Managed Instances Feature

2014-04-06 Thread Vipul Sabhaya
On Sun, Apr 6, 2014 at 9:36 AM, Russell Bryant rbry...@redhat.com wrote:

 On 04/06/2014 09:02 AM, Christopher Yeoh wrote:
   On Sun, Apr 6, 2014 at 10:06 AM, Hopper, Justin justin.hop...@hp.com wrote:
 
  Russell,
 
   At this point the guard that Nova needs to provide around the instance
   does not need to be complex.  It would even suffice to keep those
   instances hidden from such operations as “nova list” when invoked
   directly by the user.
 
 
  Are you looking for something to prevent accidental manipulation of an
  instance created by Trove or intentional changes as well? Whilst doing
  some filtering in nova list is simple on the surface, we don't try to
  keep server uuids secret in the API, so its likely that sort of
  information will leak through other parts of the API say through volume
  or networking interfaces. Having to enforce another level of permissions
  throughout the API would be a considerable change. Also it would
  introduce inconsistencies into the information returned by Nova - eg
  does quota/usage information returned to the user include the server
  that Trove created or is that meant to be adjusted as well?
 
  If you need a high level of support from the Nova API to hide servers,
  then if its possible, as Russell suggests to get what you want by
  building on top of the Nova API using additional identities then I think
  that would be the way to go. If you're just looking for a simple way to
  offer to Trove clients a filtered list of servers, then perhaps Trove
  could offer a server list call which is a proxy to Nova and filters out
  the servers which are Trove specific since Trove knows which ones it
  created.

 Yeah, I would *really* prefer to go the route of having trove own all
 instances from the perspective of Nova.  Trove is what is really
 managing these instances, and it already has to keep track of what
 instances are associated with which user.

 Although this approach would work, there are some manageability issues
with it.  When trove is managing 100's of nova instances, then things tend
to break down when looking directly at the Trove tenant through the Nova
API and trying to piece together the associations, what resource failed to
provision, etc.


 It sounds like what you really want is for Trove to own the instances,
 so I think we need to get down to very specifically won't work with that
 approach.

 For example, is it a billing thing?  As it stands, all notifications for
 trove managed instances will have the user's info in them.  Do you not
 want to lose that?  If that's the problem, that seems solvable with a
 much simpler approach.


We have for the most part solved the billing issue since Trove does
maintain the association, and able to send events on-behalf of the correct
user.  We would lose out on the additional layer of checks that Nova
provides, such as Rate Limiting per project, Quotas enforced at the Nova
layer.  The trove tenant would essentially need full access without any
such limits.

Since we'd prefer to keep these checks at the Infrastructure layer intact
for Users that interact with the Trove API, I think the issue goes beyond
just filtering them out from the API.

One idea that we've floated around is possibly introducing a 'shadow'
tenant, that allows Services like Trove to create Nova / Cinder / Neutron
resources on behalf of the actual tenant.  The resources owned by this
shadow tenant would only be visible / manipulated by a higher-level
Service.  This could require some Service token to be provided along with
the original tenant token.

Example: POST /v2/{shadow_tenant_id}/servers
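As a rough sketch of what such a request could look like on the wire: the endpoint, shadow tenant id, and shadow-tenant semantics below are all hypothetical (the `X-Service-Token` header name follows the keystonemiddleware convention, but nothing here is an agreed API):

```python
import json

NOVA_ENDPOINT = "http://nova.example.com:8774"  # hypothetical endpoint
SHADOW_TENANT_ID = "shadow-a1b2c3"              # hypothetical shadow tenant

def build_shadow_boot_request(user_token, service_token, server_body):
    """Assemble a Nova 'create server' call against the shadow tenant.

    Two tokens travel with the request: the original tenant's token,
    so usage can still be attributed to them, and a service token
    proving that a trusted higher-level service (Trove) is the caller.
    """
    return {
        "method": "POST",
        "url": f"{NOVA_ENDPOINT}/v2/{SHADOW_TENANT_ID}/servers",
        "headers": {
            "X-Auth-Token": user_token,        # original tenant token
            "X-Service-Token": service_token,  # Trove's service credential
            "Content-Type": "application/json",
        },
        "body": json.dumps({"server": server_body}),
    }
```

The point of pairing the tokens is that quotas and rate limits could still be enforced against the real tenant even though the resource is owned by the shadow tenant.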


 --
 Russell Bryant



Re: [openstack-dev] MariaDB support

2014-01-27 Thread Vipul Sabhaya
On Mon, Jan 27, 2014 at 1:43 PM, Don Kehn dek...@gmail.com wrote:

 check with the trove folks they might be testing percona.


We don’t test Nova or other OpenStack pieces against Percona.  We do test
Percona as an underlying datastore within Trove, though.



 On Mon, Jan 27, 2014 at 2:34 PM, Michael Still mi...@stillhq.com wrote:

 On Sat, Jan 25, 2014 at 5:32 AM, Tim Bell tim.b...@cern.ch wrote:



 We are reviewing options between MySQL and MariaDB. RHEL 7 beta seems to
 have MariaDB as the default MySQL-like DB.



 Can someone summarise the status of the OpenStack in terms of



  - What MySQL flavor(s) are currently tested in the gate?

 Turbo Hipster currently tests mysql and percona _upgrades_ for every
 commit in nova. We have noted no percona specific problems, except for
 percona being a little bit faster than mysql 5.5. It wouldn't be hard to
 add mariadb to the upgrade cycle if people were interested in that.

 However, we're not currently testing devstack with percona anywhere that
 I am aware of.

 Michael

 --
 Rackspace Australia





 --
 
 Don Kehn
 303-442-0060





Re: [openstack-dev] [trove] Proposal to add Auston McReynolds to trove-core

2013-12-30 Thread Vipul Sabhaya
+1

Sent from my iPhone

 On Dec 30, 2013, at 10:50 AM, Craig Vyvial cp16...@gmail.com wrote:
 
 +1
 
 
 On Mon, Dec 30, 2013 at 12:00 PM, Greg Hill greg.h...@rackspace.com wrote:
 +1
 
 On Dec 27, 2013, at 4:48 PM, Michael Basnight mbasni...@gmail.com wrote:
 
 Howdy,
 
 Im proposing Auston McReynolds (amcrn) to trove-core.
 
 Auston has been working with trove for a while now. He is a great reviewer. 
 He is incredibly thorough, and has caught more than one critical error with 
 reviews and helps connect large features that may overlap (config edits + 
 multi datastores comes to mind). The code he submits is top notch, and we 
 frequently ask for his opinion on architecture / feature / design.
 
 https://review.openstack.org/#/dashboard/8214
 https://review.openstack.org/#/q/owner:8214,n,z
 https://review.openstack.org/#/q/reviewer:8214,n,z
 
 Please respond with +1/-1, or any further comments.


Re: [openstack-dev] [trove] Delivering datastore logs to customers

2013-12-24 Thread Vipul Sabhaya
On Mon, Dec 23, 2013 at 8:59 AM, Daniel Morris
daniel.mor...@rackspace.com wrote:

   Vipul,

  I know we discussed this briefly in the Wednesday meeting but I still
 have a few questions.   I am not bought in to the idea that we do not need
 to maintain the records of saved logs.   I agree that we do not need to
 enable users to download and manipulate the logs themselves via Trove (
 that can be left to Swift), but at a minimum, I believe that the system
 will still need to maintain a mapping of where the logs are stored in
 swift.  This is a simple addition to the list of available logs per
 datastore (an additional field of its swift location – a location exists,
 you know the log has been saved).  If we do not do this, how then does the
 user know where to find the logs they have saved or if they even exist in
 Swift without searching manually?  It may be that this is covered, but I
 don't see this represented in the BP.  Is the assumption that it is some
 known path?  I would expect to see the Swift location retuned on a GET of
 the available logs types for a specific instance (there is currently only a
 top-level GET for logs available per datastore type).

 The Swift location can be returned in the response to the POST/‘save’
operation.  We may consider returning a top-level immutable resource (like
‘flavors’) that when queried, could return the Base path for logs in Swift.


Logs are not meaningful to Trove, since you can’t act on them or perform
other meaningful Trove operations on them.  Thus I don’t believe they
qualify as a resource in Trove.  Multiple ‘save’ operations should not
result in a replace of the previous logs, it should just add to what may
already be there in Swift.


  I am also assuming in this case, and per the BP, that If the user does
 not have the ability to select the storage location in Swift of if this is
 controlled exclusively by the deployer.  And that you would only allow one
 occurrence of the log, per datastore / instance and that the behavior of
 writing a log more than once to the same location is that it will overwrite
 / append, but it is not detailed in the BP.

 The location should be decided by Trove, not the user.  We’ll likely need
to group them in Swift by InstanceID buckets.  I don’t believe we should do
appends/overwrites - new Logs saved would just add to what may already
exist.  If the user chooses they don’t need the logs, they can perform the
delete directly in Swift.



   Thanks,
 Daniel
 From: Vipul Sabhaya vip...@gmail.com
 Reply-To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Date: Friday, December 20, 2013 2:14 AM
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [trove] Delivering datastore logs to
 customers

   Yep agreed, this is a great idea.

  We really only need two API calls to get this going:
 - List available logs to ‘save’
 - Save a log (to swift)

  Some additional points to consider:
  - We don’t need to create a record of every Log ‘saved’ in Trove.  These
 entries, treated as a Trove resource aren’t useful, since you don’t
 actually manipulate that resource.
 - Deletes of Logs shouldn’t be part of the Trove API, if the user wants to
 delete them, just use Swift.
 - A deployer should be able to choose which logs can be ‘saved’ by their
 users


 On Wed, Dec 18, 2013 at 2:02 PM, Michael Basnight mbasni...@gmail.com wrote:

  I think this is a good idea and I support it. In today's meeting [1]
 there were some questions, and I encourage them to get brought up here. My
 only question is in regard to the tail of a file we discussed in IRC.
 After talking about it w/ other trovesters, I think it doesn't make sense to
 tail the log for most datastores. I can't imagine finding anything useful in,
 say, a Java application's last 100 lines (especially if a stack trace was
 present). But I don't want to derail, so let's try to focus on the deliver
 to swift first option.

  [1]
 http://eavesdrop.openstack.org/meetings/trove/2013/trove.2013-12-18-18.13.log.txt

  On Wed, Dec 18, 2013 at 5:24 AM, Denis Makogon dmako...@mirantis.com wrote:

  Greetings, OpenStack DBaaS community.

  I'd like to start discussion around a new feature in Trove. The
 feature I would like to propose covers manipulating  database log files.



  Main idea. Give user an ability to retrieve database log file for
 any purposes.

 Goals to achieve. Suppose we have an application (binary
 application, without source code) which requires a DB connection to perform
 data manipulations, and a user would like to perform development and debugging
 of that application; logs would also be useful for an audit process. Trove
 itself provides access only for CRUD operations inside of database, so the
 user cannot access the instance directly and analyze its log files.
 Therefore, Trove should be able to provide ways to allow a user to download

Re: [openstack-dev] [trove] datastore migration issues

2013-12-20 Thread Vipul Sabhaya
I am fine with requiring the deployer to update default values, if they
don’t make sense for their given deployment.  However, not having any value
for older/existing instances, when the code requires it is not good.  So
let’s create a default datastore of mysql, with a default version, and set
that as the datastore for older instances.  A deployer can then run
trove-manage to update the default record created.
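A back-fill along those lines might look like the following sketch (shown against SQLite for brevity; table and column names are assumptions based on this thread, and the real change would live in an sqlalchemy-migrate script updated afterwards via trove-manage):

```python
import sqlite3

def backfill_default_datastore(conn, default_name="mysql",
                               default_version="5.5"):
    """Create a default datastore-version row and point legacy
    instances at it.

    Assumes instances.datastore_version_id is NULL for rows created
    before datastores existed; only those rows are touched, so
    already-migrated instances are left alone.
    """
    cur = conn.cursor()
    cur.execute(
        "INSERT INTO datastore_versions (id, name, version) "
        "VALUES (?, ?, ?)",
        ("ds-default", default_name, default_version),
    )
    cur.execute(
        "UPDATE instances SET datastore_version_id = ? "
        "WHERE datastore_version_id IS NULL",
        ("ds-default",),
    )
    conn.commit()
```

After the migration, a deployer who doesn't run MySQL 5.5 simply updates the single `ds-default` row rather than touching every instance record.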


On Thu, Dec 19, 2013 at 6:14 PM, Tim Simpson tim.simp...@rackspace.com wrote:

  I second Rob and Greg- we need to not allow the instance table to have
 nulls for the datastore version ID. I can't imagine that as Trove grows and
 evolves, that edge case is something we'll always remember to code and test
 for, so let's cauterize things now by no longer allowing it at all.

  The fact that the migration scripts can't, to my knowledge, accept
 parameters for what the dummy datastore name and version should be isn't
 great, but I think it would be acceptable enough to make the provided
 default values sensible and ask operators who don't like it to manually
 update the database.

  - Tim



  --
 *From:* Robert Myers [myer0...@gmail.com]
 *Sent:* Thursday, December 19, 2013 9:59 AM
 *To:* OpenStack Development Mailing List (not for usage questions)
 *Subject:* Re: [openstack-dev] [trove] datastore migration issues

    I think that we need to be good citizens and at least add dummy data.
 Because it is impossible to know who all is using this, the list you have
 is probably not complete. But Trove has been available for quite some time and
 all these users will not be listening on this thread. Basically, anytime you
 have a database migration that adds a required field, you *have* to alter
 the existing rows. If we don't, we're basically telling everyone who
 upgrades that we, the 'Database as a Service' team, don't care about data
 integrity in our own product :)

  Robert


 On Thu, Dec 19, 2013 at 9:25 AM, Greg Hill greg.h...@rackspace.com wrote:

  We did consider doing that, but decided it wasn't really any different
 from the other options as it required the deployer to know to alter that
 data.  That would require the fewest code changes, though.  It was also my
 understanding that mysql variants were a possibility as well (percona and
 mariadb), which is what brought on the objection to just defaulting in
 code.  Also, we can't derive the version being used, so we *could* fill it
 with a dummy version and assume mysql, but I don't feel like that solves
 the problem or the objections to the earlier solutions.  And then we also
 have bogus data in the database.

   Since there's no perfect solution, I'm really just hoping to gather
 consensus among people who are running existing trove installations and
 have yet to upgrade to the newer code about what would be easiest for them.
  My understanding is that list is basically HP and Rackspace, and maybe
 Ebay?, but the hope was that bringing the issue up on the list might
 confirm or refute that assumption and drive the conversation to a suitable
 workaround for those affected, which hopefully isn't that many
 organizations at this point.

  The options are basically:

  1. Put the onus on the deployer to correct existing records in the
 database.
 2. Have the migration script put dummy data in the database which you
 have to correct.
 3. Put the onus on the deployer to fill out values in the config value

  Greg

  On Dec 18, 2013, at 8:46 PM, Robert Myers myer0...@gmail.com wrote:

  There is the database migration for datastores. We should add a
 function to  back fill the existing data with either a dummy data or set it
 to 'mysql' as that was the only possibility before data stores.
 On Dec 18, 2013 3:23 PM, Greg Hill greg.h...@rackspace.com wrote:

 I've been working on fixing a bug related to migrating existing
 installations to the new datastore code:

  https://bugs.launchpad.net/trove/+bug/1259642

  The basic gist is that existing instances won't have any data in the
 datastore_version_id field in the database unless we somehow populate that
 data during migration, and not having that data populated breaks a lot of
 things (including the ability to list instances or delete or resize old
 instances).  It's impossible to populate that data in an automatic, generic
 way, since it's highly vendor-dependent on what database and version they
 currently support, and there's not enough data in the older schema to
 populate the new tables automatically.

  So far, we've come up with some non-optimal solutions:

  1. The first iteration was to assume 'mysql' as the database manager
 on instances without a datastore set.
 2. The next iteration was to make the default value be configurable in
 trove.conf, but default to 'mysql' if it wasn't set.
 3. It was then proposed that we could just use the 'default_datastore'
 value from the config, which may or may not be set by the operator.

  My problem with any of these approaches 

Re: [openstack-dev] [trove] Delivering datastore logs to customers

2013-12-20 Thread Vipul Sabhaya
Yep agreed, this is a great idea.

We really only need two API calls to get this going:
- List available logs to ‘save’
- Save a log (to swift)

Some additional points to consider:
- We don’t need to create a record of every Log ‘saved’ in Trove.  These
entries, treated as a Trove resource aren’t useful, since you don’t
actually manipulate that resource.
- Deletes of Logs shouldn’t be part of the Trove API, if the user wants to
delete them, just use Swift.
- A deployer should be able to choose which logs can be ‘saved’ by their
users
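As a rough illustration of how small that surface is, the two calls could reduce to something like this — routes, names, and the deployer-controlled whitelist are all hypothetical, not the final Trove API:

```python
# Deployer-controlled whitelist of logs users may save, per datastore.
AVAILABLE_LOGS = {"mysql": ["error", "slow-query", "general"]}

def list_saveable_logs(datastore_type):
    """GET .../instances/{id}/logs -- the logs this deployer allows."""
    return AVAILABLE_LOGS.get(datastore_type, [])

def save_log(instance_id, log_name, datastore_type="mysql"):
    """POST .../instances/{id}/logs -- push the log to Swift.

    Returns the Swift location rather than creating a Trove-side
    resource record, per the points above.
    """
    if log_name not in list_saveable_logs(datastore_type):
        raise ValueError(f"log {log_name!r} not enabled by the deployer")
    # A real implementation would stream the file via the guest agent.
    return {"location": f"swift://trove-logs/{instance_id}/{log_name}"}
```

Deletes deliberately have no route here: once the object is in Swift, the Swift API is the right tool for managing it.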


On Wed, Dec 18, 2013 at 2:02 PM, Michael Basnight mbasni...@gmail.com wrote:

 I think this is a good idea and I support it. In today's meeting [1] there
 were some questions, and I encourage them to get brought up here. My only
 question is in regard to the tail of a file we discussed in IRC. After
 talking about it w/ other trovesters, I think it doesn't make sense to tail
 the log for most datastores. I can't imagine finding anything useful in, say,
 a Java application's last 100 lines (especially if a stack trace was
 present). But I don't want to derail, so let's try to focus on the deliver
 to swift first option.

 [1]
 http://eavesdrop.openstack.org/meetings/trove/2013/trove.2013-12-18-18.13.log.txt

 On Wed, Dec 18, 2013 at 5:24 AM, Denis Makogon dmako...@mirantis.com wrote:

 Greetings, OpenStack DBaaS community.

 I'd like to start discussion around a new feature in Trove. The
 feature I would like to propose covers manipulating  database log files.


 Main idea. Give user an ability to retrieve database log file for any
 purposes.

 Goals to achieve. Suppose we have an application (binary application,
 without source code) which requires a DB connection to perform data
 manipulations, and a user would like to perform development and debugging of that
 application; logs would also be useful for an audit process. Trove itself
 provides access only for CRUD operations inside of database, so the user
 cannot access the instance directly and analyze its log files. Therefore,
 Trove should be able to provide ways to allow a user to download the
 database log for analysis.


 Log manipulations are designed to let the user perform log
 investigations. Since Trove is a PaaS-level project, its user cannot
 interact with the compute instance directly, only with the database through the
 provided API (database operations).

 I would like to propose the following API operations:

    1. Create DBLog entries.
    2. Delete DBLog entries.
    3. List DBLog entries.

 Possible API, models, server, and guest configurations are described at
 wiki page. [1]

 [1] https://wiki.openstack.org/wiki/TroveDBInstanceLogOperation





 --
 Michael Basnight





Re: [openstack-dev] [Trove] How users should specify a datastore type when creating an instance

2013-10-27 Thread Vipul Sabhaya
On Sun, Oct 27, 2013 at 4:29 PM, Ilya Sviridov isviri...@mirantis.com wrote:

 Totally agree; however, the current concept supposes working with type and
 version as different entities, even if they are attributes of one thing -
 the configuration.

 The reason for storing it as separate models can be cases when we are
 going to use them separately.

 Sounds really reasonable to keep it as one model, but another question
 comes to mind.
 How will 'list datastore_type' look? That is important because the API
 should be unambiguous.

 Following OpenStack tenets, each entity exposed via the API has an id and can
 be referenced by it.
 If we store the datastore as one entity, we are not able to query
 versions or types with only their ids.

 But the agreed API is:

 /{tenant_id}/datastore_types
 /{tenant_id}/datastore_types/{datastore_type}/versions
 /{tenant_id}/datastore_types/versions/{id}





I am wondering why we even need the last route.
/{tenant_id}/datastore_types/versions/{id}

If we assume that a datastore_type is the parent resource of versions, we
could change that route to:
/{tenant_id}/datastore_types/{datastore_type}/versions/{id}.
 Although I don’t know if this route is even necessary - since listing all
available versions of a certain type is all users really need.

This will allow us to group the type and version, making version no longer
independent as Nikhil suggests.

So, with the current concept it seems better to keep version and type as
 separate entities in the database.


 With best regards,
 Ilya Sviridov

 http://www.mirantis.ru/


 On Fri, Oct 25, 2013 at 10:25 PM, Nikhil Manchanda nik...@manchanda.me wrote:


 It seems strange to me to treat both the datastore_type and version as
 two separate entities, when they aren't really independent of each
 other. (You can't really deploy a mysql type with a cassandra version,
 and vice-versa, so why have separate datastore-list and version-list
 calls?)

 I think it's a better idea to store in the db (and list) actual
 representations of the datastore type/versions that an image we can
 deploy supports. Any disambiguation could then happen based on what
 entries actually exist here.

 Let me illustrate what I'm trying to get at with a few examples:

 Database has:
 id | type  | version | active
 --
 a  | mysql | 5.6.14  |   1
 b  | mysql | 5.1.0   |   0
 c  | postgres  | 9.3.1   |   1
 d  | redis | 2.6.16  |   1
 e  | redis | 2.6.15  |   1
 f  | cassandra | 2.0.1   |   1
 g  | cassandra | 2.0.0   |   0

 Config specifies:
 default_datastore_id = a

 1. trove-cli instance create ...
 Just works - Since nothing is specified, this uses the
 default_datastore_id from the config (mysql 5.6.14 a) . No need for
 disambiguation.

 2. trove-cli instance create --datastore_id e
 The datastore_id specified always identifies a unique datastore type /
 version so no other information is needed for disambiguation. (In this
 case redis 2.6.15, identified by e)

 3. trove-cli instance create --datastore_type postgres
 The datastore_type in this case uniquely identifies postgres 9.3.1 c,
 so no disambiguation is necessary.

 4. trove-cli instance create --datastore_type cassandra
 In this case, there is only one _active_ datastore with the given
 datastore_type, so no further disambiguation is needed and cassandra
 2.0.1 f is uniquely identified.

 5. trove-cli instance create --datastore_type redis
 In this case, there are _TWO_ active versions of the specified
 datastore_type (2.6.16 and 2.6.15), so the call should return that
 further disambiguation _is_ needed.

 6. trove-cli instance create --datastore_type redis --datastore_version
 2.6.16
 We have both datastore_type and datastore_version, and that uniquely
 identifies redis 2.6.16 e. No further disambiguation is needed.

 7. trove-cli instance create --datastore_type cassandra --version 2.0.0,
 or trove-cli instance create --datastore_id g
 Here, we are attempting to deploy a datastore which is _NOT_ active and
 this call should fail with an appropriate error message.

 Cheers,
 -Nikhil
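Nikhil's seven cases can be captured in a short resolver; the rows below mirror his example table and config, while everything else is an illustrative sketch rather than the Trove implementation:

```python
DATASTORES = [
    # (id, type, version, active) -- Nikhil's example data
    ("a", "mysql", "5.6.14", True),
    ("b", "mysql", "5.1.0", False),
    ("c", "postgres", "9.3.1", True),
    ("d", "redis", "2.6.16", True),
    ("e", "redis", "2.6.15", True),
    ("f", "cassandra", "2.0.1", True),
    ("g", "cassandra", "2.0.0", False),
]
DEFAULT_DATASTORE_ID = "a"  # from config

def resolve(ds_id=None, ds_type=None, ds_version=None):
    """Return the unique (id, type, version), or raise if the request
    is ambiguous or names an inactive datastore."""
    if ds_id is None and ds_type is None:
        ds_id = DEFAULT_DATASTORE_ID              # case 1: config default
    if ds_id is not None:
        row = next(r for r in DATASTORES if r[0] == ds_id)
    else:
        matches = [r for r in DATASTORES if r[1] == ds_type
                   and (ds_version is None or r[2] == ds_version)]
        if ds_version is None:
            matches = [r for r in matches if r[3]]  # only active count
        if len(matches) > 1:
            raise LookupError("version required to disambiguate")  # case 5
        if not matches:
            raise LookupError("no such datastore")
        row = matches[0]
    if not row[3]:
        raise ValueError("datastore is not active")  # case 7
    return row[:3]
```

One design point worth noting: when a version is given explicitly, inactive rows are still matched so the user gets "not active" rather than "not found".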


 Andrey Shestakov writes:

  2. it can be confusing coz not clear to what type version belongs
  (possible add type field in version).
  also if you have default type, then specified version recognizes as
  version of default type (no lookup in version.datastore_type_id)
  but i think we can do lookup in version.datastore_type_id before pick
  default.
 
  4. if default version is need, then it should be specified in db, coz
  switching via versions can be frequent and restart service to reload
  config all times is not good.
 
  On 10/21/2013 05:12 PM, Tim Simpson wrote:
  Thanks for the feedback Andrey.
 
   2. Got this case in irc, and decided to pass type and version
  together to avoid confusing.
  I don't understand how allowing the user to only pass the version
  would confuse anyone. Could you elaborate?
 
   3. Names of types and maybe versions can be good, but in irc

Re: [openstack-dev] [Trove] How users should specify a datastore type when creating an instance

2013-10-21 Thread Vipul Sabhaya
On Mon, Oct 21, 2013 at 2:04 PM, Michael Basnight mbasni...@gmail.com wrote:


 On Oct 21, 2013, at 1:40 PM, Tim Simpson wrote:

  2. I also think a datastore_version alone should be sufficient since
 the associated datastore type will be implied:
 
  When i brought this up it was generally discussed as being confusing.
 Id like to use type and rely on having a default (or active) version behind
 the scenes.
 
  Can't we do both? If a user wants a specific version, most likely they
 had to enumerate all datastore_versions, spot it in a list, and grab the
 guid. Why force them to also specify the datastore_type when we can easily
 determine what that is?

 Fair enough.


It's not intuitive to the user to specify a version alone.
You don't boot a 'version' of something without specifying what that
something is.  I would rather they only specified the datastore_type, and
not have them specify a version at all.


 
  4. Additionally, in the current pull request to implement this it is
 possible to avoid passing a version, but only if no more than one version
 of the datastore_type exists in the database.
 
  I think instead the datastore_type row in the database should also
 have a default_version_id property, that an operator could update to the
 most recent version or whatever other criteria they wish to use, meaning
 the call could become this simple:
 
  Since we have determined from this email thread that we have an active
 status, and that > 1 version can be active, we have to think about the
 precedence of active vs default. My question would be: if we have a
 default_version_id and an active version, what do we choose on behalf of the
 user? If there is > 1 active version and a user does not specify the
 version, the API will error out, unless a default is defined. We also need
 a default_type in the config so the existing APIs can maintain
 compatibility. We can re-discuss this for v2 of the API.
 
  Imagine that an operator sets up Trove and only has one active version.
 They then somehow fumble setting up the default_version, but think they
 succeeded as the API works for users the way they expect anyway. Then they
 go to add another active version and suddenly their users get error
 messages.
 
  If we only use the default_version field of the datastore_type to
 define a default would honor the principle of least surprise.

 Are you saying you must have a default version defined to have > 1 active
 versions?


I think it makes sense to have a 'Active' flag on every version -- and a
default flag for the version that should be used as a default in the event
the user doesn't specify.  It also makes sense to require the deployer to
set this accurately, and if one doesn't exist instance provisioning errors
out.






Re: [openstack-dev] [TROVE] Thoughts on DNS refactoring, Designate integration.

2013-10-01 Thread Vipul Sabhaya


 On Oct 1, 2013, at 3:37 PM, Michael Basnight mbasni...@gmail.com wrote:
 
 On Oct 1, 2013, at 3:06 PM, Ilya Sviridov isviri...@mirantis.com wrote:
 
 
 On Tue, Oct 1, 2013 at 6:45 PM, Tim Simpson tim.simp...@rackspace.com 
 wrote:
 Hi fellow Trove devs,
 
 With the Designate project ramping up, its time to refactor the ancient DNS 
 code that's in Trove to work with Designate.
 
 The good news is since the beginning, it has been possible to add new 
 drivers for DNS in order to use different services. Right now we only have 
 a driver for the Rackspace DNS API, but it should be possible to write one 
 for Designate as well.
 
 How does this correlate with Trove's direction to use Heat for all provisioning 
 and managing of cloud resources? 
 There are BPs for Designate resource 
 (https://blueprints.launchpad.net/heat/+spec/designate-resource) and 
 Rackspace DNS (https://blueprints.launchpad.net/heat/+spec/rax-dns-resource) 
 as well and it looks logically to use the HEAT for that.
 
 Currently Trove has logic for provisioning instances, a DNS driver, and creation 
 of security groups, but with the switch to Heat, we end up with duplication of 
 the same functionality that we have to support.
 
 +1 to using heat for this. However, as people are working on heat support 
 right now to make it more sound, if there is a group that wants/needs DNS 
 refactoring now, I'd say lets add it in. If no one is in need of changing 
 what's existing until we get better heat support, then we should just abandon 
 the review and leave the existing DNS code as is. 
 
 I would prefer, if there is no one in need, to abandon the existing review and 
 add it to Heat support. 
 

I would hate to wait til we have full Heat integration before getting Designate 
support, considering Heat does not yet have Designate support.  My vote is to 
move forward with a DNS driver in trove that can be deprecated once everything 
works with Heat.

As far as supporting only Designate, I would be fine with a driver interface 
that could potentially wrap Designate as well as Rax DNS.  Given that both will 
be somewhat temporary, I don't see a reason why we have to rip out rsdns at 
this point.
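A driver interface able to sit in front of either backend might look like the sketch below; method names are illustrative, not the actual Trove DNS driver contract, and the Designate driver is stubbed with an in-memory dict rather than real API calls:

```python
import abc

class DnsDriver(abc.ABC):
    """The narrow surface Trove needs from any DNS backend."""

    @abc.abstractmethod
    def create_entry(self, entry):
        """Publish a DNS entry (dict with name/type/content)."""

    @abc.abstractmethod
    def delete_entry(self, name, entry_type):
        """Remove a previously published entry."""

class DesignateDriver(DnsDriver):
    """Would call the Designate API; stubbed here for illustration."""

    def __init__(self):
        self.records = {}

    def create_entry(self, entry):
        self.records[entry["name"]] = entry

    def delete_entry(self, name, entry_type):
        self.records.pop(name, None)
```

A Rackspace DNS implementation of the same two methods could coexist behind the interface, which is what makes the deprecate-later path cheap.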

  
 
 However, there's a bigger topic here. In a gist sent to me recently by 
 Dennis M. with his thoughts on how this work should proceed, he included 
 the comment that Trove should *only* support Designate: 
 https://gist.github.com/crazymac/6705456/raw/2a16c7a249e73b3e42d98f5319db167f8d09abe0/gistfile1.txt
 
 I disagree. I have been waiting for a canonical DNS solution such as 
 Designate to enter the Openstack umbrella for years now, and am looking 
 forward to having Trove consume it. However, changing all the code so that 
 nothing else works is premature.
 
 All non mainstream resources like cloud provider specific can be implemented 
 as HEAT plugins (https://wiki.openstack.org/wiki/Heat/Plugins)
  
 
 Instead, let's start work to play well with Designate now, using the open 
 interface that has always existed. In the future after Designate enters 
 integration status we can then make the code closed and only support 
 Designate.
 
 Do we really need to play with Designate and then replace it? I expect the 
 Designate resource will come together with Designate, or even earlier.
 
 With best regards,
 Ilya Sviridov
 
 
 Denis also had some other comments about the DNS code, such as not passing 
 a single object as a parameter because it could be None. I think this is in 
 reference to passing around a DNS entry which gets formed by the DNS 
 instance entry factory. I see how someone might think this is brittle, but 
 in actuality it has worked for several years, so if anything, changing it 
 would introduce bugs. The interface was also written to use a full object 
 in order to be flexible; a full object should make it easier to work with 
 different types of DnsDriver implementations, as well as allowing more 
 options to be set from the DnsInstanceEntryFactory. This latter class 
 creates a DnsEntry from an instance_id. It is possible that two deployments 
 of Trove, even if they are both using Designate, might opt for different 
 DnsInstanceEntryFactory implementations in order to give the DNS entries 
 associated with databases different patterns. If the DNS entry is created at 
 this point, it's easier to further customize and tailor it. This will hold 
 true even when Designate is ready to become the only DNS option we support 
 (if such a thing is desirable).
 
 Thanks,
 
 Tim
 
 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

Re: [openstack-dev] [trove] Configuration API BP

2013-10-01 Thread Vipul Sabhaya


 On Sep 26, 2013, at 8:49 AM, Michael Basnight mbasni...@gmail.com wrote:
 
 On Sep 25, 2013, at 7:16 PM, Craig Vyvial wrote:
 
 So we have a blueprint for this and there are a couple things to point out 
 that have changed since the inception of this BP.
 
 https://blueprints.launchpad.net/trove/+spec/configuration-management
 
 Here is an overview of the API calls: 
 
 POST /configurations - create config
 GET  /configurations - list all configs
 PUT  /configurations/{id} - update all the parameters
 
 GET  /configurations/{id} - get details on a single config
 GET  /configurations/{id}/{key} - get single parameter value that was set 
 for the configuration
 
 PUT  /configurations/{id}/{key} - update/insert a single parameter
 DELETE  /configurations/{id}/{key} - delete a single parameter
 
 GET  /configurations/{id}/instances - list of instances the config is 
 assigned to
 GET  /configurations/parameters - list of all configuration parameters
 
 GET  /configurations/parameters/{key} - get details on a configuration 
 parameter
 
 There has been talk about using the PATCH HTTP method instead of PUT for 
 the update of individual parameter(s).
 
 PUT  /configurations/{id}/{key} - update/insert a single parameter
 and/or
 PATCH  /configurations/{id} - update/insert parameter(s)
 
 
 I am not sold on the idea of using PATCH unless it's widely used in other 
 projects across OpenStack. What does everyone think about this?
 
 If there are any concerns around this please let me know.
 
 I'm a fan of PATCH. I'd rather have a different verb on the same resource than 
 create a new sub-resource just to do the job of what PATCH defines. I'm not 
 sure [1] gives us any value, and I think it's only around because of [2]. 
 I can see PATCH removing the need for [1], simplifying the API, and of course 
 removing the need for [2], since it _is_ the updating of a single k/v pair. 
 I know Keystone and Glance use PATCH for updates in their APIs as well. 
 
 [1]  GET /configurations/{id}/{key} 
 [2] PUT  /configurations/{id}/{key} 
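 The semantic difference being leaned on here -- PUT replaces the whole 
 representation, while PATCH merges in only the supplied keys -- can be 
 sketched with a toy in-memory store (the function names and dict-based 
 store are illustrative assumptions, not Trove code):

```python
def put_configuration(store, config_id, values):
    """PUT semantics: replace the configuration's whole parameter set."""
    store[config_id] = dict(values)


def patch_configuration(store, config_id, values):
    """PATCH semantics: merge in only the supplied keys, keep the rest."""
    store[config_id].update(values)
```

 With PATCH on /configurations/{id}, sending a body containing a single key 
 updates just that parameter, which is why it can subsume the per-key PUT 
 endpoint [2].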

So from this API, I see that a configuration is a standalone resource that 
could be applied to N instances.  It's not clear to me what the API is for 
'applying' a configuration to an existing instance.  Also, if we change a 
single item in a given configuration, does that change propagate to all 
instances the configuration belongs to? 

What about making 'configuration' a sub-resource of /instances?  

Unless we think configurations will be common amongst instances for a given 
tenant, it may not make sense to make them high-level resources. 
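A toy model of that design question -- a standalone configuration resource 
attached to N instances, with updates propagating to every attached one -- 
might look like this (all names here are hypothetical, for illustration only):

```python
class ConfigurationRegistry:
    """Illustrative model of configurations as standalone resources
    that N instances reference."""
    def __init__(self):
        self.configs = {}        # config_id -> parameter dict
        self.attachments = {}    # config_id -> set of instance ids
        self.applied = {}        # instance_id -> parameters last pushed

    def create(self, config_id, params):
        self.configs[config_id] = dict(params)
        self.attachments[config_id] = set()

    def attach(self, config_id, instance_id):
        # The hypothetical "apply" call the thread says is missing,
        # e.g. something like PUT /instances/{id}/configuration.
        self.attachments[config_id].add(instance_id)
        self.applied[instance_id] = dict(self.configs[config_id])

    def update(self, config_id, params):
        self.configs[config_id].update(params)
        # Propagate the change to every instance the config is assigned to.
        for instance_id in self.attachments[config_id]:
            self.applied[instance_id] = dict(self.configs[config_id])
```

If configurations were instead sub-resources of /instances, the attach and 
propagation steps would disappear, but so would the ability to share one 
configuration across many instances.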