Re: [openstack-dev] [oslo][taskflow] Thoughts on moving taskflow out of openstack/oslo

2018-10-10 Thread Greg Hill
> I'm not sure how using pull requests instead of Gerrit changesets would
> help "core reviewers being pulled on to other projects"?
>

The 2 +2 requirement works for larger projects with a lot of contributors.
When you have only 3 regular contributors and 1 of them gets pulled on to a
project and can no longer actively contribute, you have 2 developers who
can +2 each other but nothing can get merged without that 3rd dev finding
time to add another +2. This is what happened with Taskflow a few years
back. Eventually the other 2 gave up and moved on also.


> Is this just about preferring not having a non-human gatekeeper like
> Gerrit+Zuul and being able to just have a couple people merge whatever
> they want to the master HEAD without needing to talk about +2/+W rights?
>

We plan to still have a CI gatekeeper, probably Travis CI, to make sure PRs
pass muster before being merged, so it's not like we're wanting to
circumvent good contribution practices by committing whatever to HEAD. But
the +2/+W rights requirement was a huge PITA with so few contributors, for
sure.

> If it's just about preferring the pull request workflow versus the
> Gerrit rebase workflow, just say so. Same for just preferring the Github
> UI versus Gerrit's UI (which I agree is awful).
>

I mean, yes, I personally prefer the Github UI and workflow, but that was
not a primary consideration. I got used to using gerrit well enough. There's
also a sense that if a project is in the Openstack
umbrella, it's not useful outside Openstack, and Taskflow is designed to be
a general purpose library. The hope is that just making it a regular open
source project might attract more users and contributors. This may or may
not bear out, but as it is, there's no real benefit to staying an openstack
project on this front since nobody is actively working on it within the
community.

Greg
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [oslo][taskflow] Thoughts on moving taskflow out of openstack/oslo

2018-10-10 Thread Greg Hill
I've been out of the openstack loop for a few years, so I hope this reaches
the right folks.

Josh Harlow (original author of taskflow and related libraries) and I have
been discussing the option of moving taskflow out of the openstack umbrella
recently. This move would likely also include the futurist and automaton
libraries that are primarily used by taskflow. The idea would be to just
host them on github and use the regular Github features for Issues, PRs,
wiki, etc, in the hopes that this would spur more development. Taskflow
hasn't had any substantial contributions in several years and it doesn't
really seem that the current openstack devs have a vested interest in
moving it forward. I would like to move it forward, but I don't have an
interest in being bound by the openstack workflow (this is why the project
stagnated as core reviewers were pulled on to other projects and couldn't
keep up with the review backlog, so contributions ground to a halt).

I guess I'm putting it forward to the larger community. Does anyone have
any objections to us doing this? Are there any non-obvious technicalities
that might make such a transition difficult? Who would need to be made
aware so they could adjust their own workflows?

Or would it be preferable to just fork and rename the project so openstack
can continue to use the current taskflow version without worry of us
breaking features?

Greg


[openstack-dev] [taskflow] Begging for reviews

2016-02-02 Thread Greg Hill
Normally I reserve the begging for IRC, but since the other cores aren't always 
on, I'm taking the shotgun approach.  If you aren't a core on taskflow, then 
you can safely skip the rest of this email.

We have a number of open reviews with a single +2 that need another core 
reviewer to sign off.  Included in that is a blocker for the next release:

https://review.openstack.org/#/c/272748/

That fixes a bug, introduced since the last release, in trapping exception
arguments from tasks; it will affect anyone running the worker-based engine.
The case I ran into was in the requests library, where it does something akin to:

raise RetryException(ConnectionError())

That inner exception could not be converted to JSON, so it threw an exception 
and aborted the job.
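A hedged illustration of that failure mode (the class names here are placeholders, not the real taskflow or requests types): an exception whose args contain another exception object cannot pass through `json.dumps`.

```python
import json

# Placeholder for illustration; not the actual requests exception class.
class RetryException(Exception):
    pass

# An exception carrying another exception instance in its args, as in
# the raise RetryException(ConnectionError()) case described above.
exc = RetryException(ValueError("connection refused"))

try:
    json.dumps({"exception_args": exc.args})
except TypeError as err:
    # json has no encoder for exception objects, so serialization fails
    print("cannot serialize:", err)
```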

Also, these are important changes (IMO):

Make the worker stop processing queued tasks when it stops:
https://review.openstack.org/272862

Similarly for the conductor:
https://review.openstack.org/270319

Allow revert methods to have different method signatures from execute and just 
work:
https://review.openstack.org/270853

(depends on https://review.openstack.org/267131 )
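To illustrate the idea behind that revert-signature change (a hand-rolled sketch, not taskflow's implementation; `call_with_accepted_args` and the task class are invented for this example): use introspection to pass a revert method only the arguments it actually declares.

```python
import inspect

def call_with_accepted_args(fn, available):
    """Call fn with only the keyword arguments its signature accepts."""
    params = inspect.signature(fn).parameters
    if any(p.kind is inspect.Parameter.VAR_KEYWORD for p in params.values()):
        return fn(**available)  # **kwargs accepts everything
    accepted = {k: v for k, v in available.items() if k in params}
    return fn(**accepted)

class CreateVolume:
    def execute(self, size, name, result=None):
        return {"id": "vol-1", "size": size, "name": name}

    # revert declares only what it needs, not every execute argument
    def revert(self, result):
        return "deleting %s" % result["id"] if result else "nothing to do"

task = CreateVolume()
out = call_with_accepted_args(task.execute, {"size": 10, "name": "db"})
print(call_with_accepted_args(
    task.revert, {"size": 10, "name": "db", "result": out}))
```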

Don't revert tasks that were never executed (I'm simplifying):
https://review.openstack.org/273731

And more.  I'm just highlighting the ones that most affect me personally, but 
in general, the review queue is really backlogged.  Any help is appreciated.  
Bring it!

https://review.openstack.org/#/q/status:open+project:openstack/taskflow+branch:master

Greg



Re: [openstack-dev] [celery][taskflow] Reg. celery and task-flow

2016-01-08 Thread Greg Hill
I'm opinionated because I work with/on Taskflow and had mostly bad
experiences with Celery, but here's my $0.02.  It's possible that the
codebase I inherited just made bad use of Celery or things have improved a
lot in the last 18 months, but all I can speak from is my own experience.

Taskflow and Celery at a high level can be used for the same thing, but if
anything Taskflow would be considered a replacement for Celery, IMO.  We
migrated a codebase from Celery to Taskflow because Taskflow gives you a
lot more capabilities related to managing the whole process rather than
just a bunch of individual async tasks.  Some still swear by Celery
because they like that they can just throw some decorators on existing
methods and execute them asynchronously, and if that's all you need,
Taskflow is probably overkill.  For us, we wanted a lot more control over
the whole flow, including how to handle task failures, being able to run
many things in parallel and others in serial, etc. Just conceptually
thinking of the process as a series of tasks in a flow makes it a lot
easier to reason about what is going on than having to trace through a bunch
of asynchronous functions. We also liked the resiliency guarantees that
Taskflow's job board and conductor/worker engine provided more than what
Celery offered at the time (it didn't seem to guarantee that tasks were
executed, and we fairly often hit cases where rabbitmq connections would time
out and our process would just stop: no log, no error, the next task was
simply never executed).
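For a rough sense of the difference (plain Python, not taskflow's actual API; the `serial` and `parallel` helpers are invented for this sketch): modeling the process as composable groups of named tasks, rather than loose async calls, makes the overall shape explicit.

```python
from concurrent.futures import ThreadPoolExecutor

# Invented combinators mimicking the serial/parallel flow composition idea.
def serial(*tasks):
    def run(ctx):
        for t in tasks:       # run one after another
            t(ctx)
    return run

def parallel(*tasks):
    def run(ctx):
        with ThreadPoolExecutor() as pool:
            list(pool.map(lambda t: t(ctx), tasks))  # run concurrently
    return run

log = []
flow = serial(
    lambda ctx: log.append("provision"),
    parallel(lambda ctx: log.append("configure-a"),
             lambda ctx: log.append("configure-b")),
    lambda ctx: log.append("finalize"),
)
flow({})
print(log[0], log[-1])
```

The whole process reads top to bottom as one structure, which is the "easier to reason about" property described above.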

Greg

On 1/8/16, 3:19 PM, "Joshua Harlow"  wrote:

>So actually they are quite different, (although similar at some level),
>
>Given that celery isn't really a replacement for taskflow although one
>could say, from what I've heard from others, that taskflow is a
>super-set of what celery is so taskflow likely can replace parts of
>celery (but not vice-versa).
>
>Feel free to jump on #openstack-state-management IRC channel if u want
>to chat in person more about why (it gets into details that might just
>be easier to explain in person).
>
>ESWAR RAO wrote:
>> Hi All,
>>
>> Please let me know whether celery is replacement for taskflow.
>>
>> As per my understanding, task-flow can break jobs into tasks and execute
>> them.
>>
>>  From celery wiki, it also does almost similar behaviour.
>>
>> I guess in most of openstack components taskflow is widely used.
>> Any places where its being replaced with celery ??
>>
>> Celery: https://wiki.openstack.org/wiki/Celery
>> Distributed: https://wiki.openstack.org/wiki/DistributedTaskManagement
>> TaskFlow: https://wiki.openstack.org/wiki/TaskFlow
>>
>> Thanks
>> Eswar
>>




Re: [openstack-dev] [trove] Adding support for HBase in Trove

2016-01-07 Thread Greg Hill
I don't work on Sahara, but I do work on a similar closed-source project.
FWIW, I agree with Kevin here.  Standalone and pseudo-distributed HBase
are only intended for HBase developers to test code without having to spin
up a cluster; they're not meant for operators or users to actually use as a
database. HBase is designed to run on HDFS and relies on Zookeeper for
coordination as well. Unless trove is going to re-implement half of
Sahara, having it there makes no sense, and it will ultimately only lead to
confusion among users who see HBase and think they're getting something
useful when they are in fact not.

My $0.02

Greg

On 1/7/16, 12:19 PM, "Fox, Kevin M"  wrote:

>Oh. And I'd suggest having this conversation with the Sahara team. They
>may have some interesting insight into the issue.
>
>Thanks,
>Kevin
>
>From: Fox, Kevin M
>Sent: Thursday, January 07, 2016 9:44 AM
>To: OpenStack Development Mailing List (not for usage questions)
>Subject: Re: [openstack-dev] [trove] Adding support for HBase in Trove
>
>the whole hadoopish stack is unusual though. I suspect users often want
>to slice and dice all the components that run together on the cluster,
>where HBase is just one component of the shared cluster. I can totally
>envision users walking up to my door saying, I provisioned this HBase
>system with Trove, and now I want to run such and such job on the
>cluster... Building on top of Sahara enables that kind of thing. If trove
>wants to do the clustering all itself, then that's either out of the
>picture, or you end up having to add lots of sahara like functionality in
>the end to get its functionality back up to where users will want it.
>
>Thanks,
>Kevin
>
>From: michael mccune [m...@redhat.com]
>Sent: Thursday, January 07, 2016 8:17 AM
>To: openstack-dev@lists.openstack.org
>Subject: Re: [openstack-dev] [trove] Adding support for HBase in Trove
>
>thanks for bringing this up Amrith,
>
>On 01/06/2016 07:31 PM, Fox, Kevin M wrote:
>> Having a simple plugin that doesn't depend on all of Sahara, for the
>>case a user only wants a single node HBase does make sense. Its much
>>easier for an Op to support that case if thats all their users ever
>>want. But, thats probably as far as that plugin ever should go. If you
>>need scale up/down, etc, then your starting to reimplement large swaths
>>of Sahara, and like the Cinder plugin for Nova, there could be a plugin
>>that works identically to the stand alone one that converts the same api
>>over to a Sahara compatible one. You then farm the work over to Sahara.
>
>i think this sounds reasonable, as long as we are limiting it to
>standalone mode. if the deployments start to take on a larger scope i
>agree it would be useful to leverage sahara for provisioning and scaling.
>
>as the hbase installation grows beyond the standalone mode there will
>necessarily need to be hdfs and zookeeper support to allow for a proper
>production deployment. this also brings up questions of allowing the
>end-users to supply configurations for the hdfs and zookeeper processes,
>not to mention enabling support for high availability hdfs.
>
>i can envision a scenario where trove could use sahara to provision and
>manage the clusters for hbase/hdfs/zk. this does pose some questions as
>we'd have to determine how the trove guest agent would be installed on
>the nodes, if there will need to be custom configurations used by trove,
>and if sahara will need to provide a plugin for bare (meaning no data
>processing framework) hbase/hdfs/zk clusters. but, i think these could
>be solved by either using custom images or a plugin in sahara that would
>install the necessary agents/configurations.
>
>of course, this does add a layer of complexity as operators who wish
>this type of deployment will need to have both trove and sahara, but imo
>this would be easier than replicating the work that sahara has done with
>these technologies.
>
>regards,
>mike
>



[openstack-dev] why do we put a license in every file?

2014-02-05 Thread Greg Hill
I'm new, so I'm sure there's some history I'm missing, but I find it bizarre 
that we have to put the same license into every single file of source code in 
our projects.  In my past experience, a single LICENSE file at the root-level 
of the project has been sufficient to declare the license chosen for a project. 
Github even has the capacity to choose a license and generate that file for 
you; it's neat. 

Greg



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [trove] scheduled tasks redux

2014-01-29 Thread Greg Hill

On Jan 23, 2014, at 3:41 PM, Michael Basnight mbasni...@gmail.com wrote:

 
 Will we be doing more complex things than every day at some time? ie, does 
 the user base see value in configuring backups every 12th day of every other 
 month? I think this is easy to write the schedule code, but i fear that it 
 will be hard to build a smarter scheduler that would only allow X tasks in a 
 given hour for a window. If we limit to daily at X time, it seems easier to 
 estimate how a given window for backup will look for now and into the future 
 given a constant user base :P Plz note, I think its viable to schedule more 
 than 1 per day, in cron * 0,12 or * */12.

 
 Will we be using this as a single task service as well? So if we assume the 
 first paragraph is true, that tasks are scheduled daily, single task services 
 would be scheduled once, and could use the same crontab fields. But at this 
 point, we only really care about the minute, hour, and _frequency_, which is 
 daily or once. Feel free to add 12 scheduled tasks for every 2 hours if you 
 want to back it up that often, or a single task as * 0/2. From the backend, i 
 see that as 12 tasks created, one for each 2 hours.

I hadn't really considered anything but repeated use, so that's a good point.  
I'll have to think on that more.  I do think that the frequency won't only be 
daily or once.  It's not uncommon to have weekly or monthly maintenance 
tasks, which, as I understood it, this spec was meant to cover.  I'll do some 
research to see if there is a suitable standard format besides cron that works 
well for both repeated and singular scheduled tasks.

 But this doesnt take into mind windows, when you say you want a cron style 
 2pm backup, thats really just during some available window. Would it make 
 more sense for an operator to configure a time window, and then let users 
 choose a slot within a time window (and say there are a finite number of 
 slots in a time window). The slotting would be done behind the scenes and a 
 user would only be able to select a window, and if the slots are all taken, 
 it wont be shown in the get available time windows. the available time 
 windows could be smart, in that, your avail time window _could be_ based on 
 the location of the hardware your vm is sitting on (or some other rule…). 
 Think network saturation if everyone on host A is doing a backup to swift.

I don't think having windows will solve as much as we hope it will, and it's a 
tricky problem to get right since the number of tasks that can run per window is 
highly variable.  I'll have to gather my thoughts on this more and post another 
message when I've got something more to say than "my gut says this doesn't feel 
right."

Greg


Re: [openstack-dev] [Trove] how to list available configuration parameters for datastores

2014-01-23 Thread Greg Hill
To be more consistent with other APIs in trove, perhaps:

/datastores/<datastore>/parameters
/datastores/<datastore>/parameters/<parameter>

Greg

On Jan 22, 2014, at 4:52 PM, Kaleb Pomeroy kaleb.pome...@rackspace.com wrote:

I think that may have been a slight oversight. We will likely have the following 
two routes

/datastores/<datastore>/configuration/ would be the collection of all parameters
/datastores/<datastore>/configuration/:parameter would be an individual setting.

- kpom


From: Craig Vyvial [cp16...@gmail.com]
Sent: Wednesday, January 22, 2014 4:11 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Trove] how to list available configuration 
parameters for datastores

Ok with overwhelming support for #3.
What if we modified #3 slightly because looking at it again seems like we could 
shorten the path since /datastores/datastore/configuration doesnt do anything.

instead of
#1
/datastores/<datastore>/configuration/parameters

maybe:
#2
/datastores/<datastore>/parameters

#3
/datastores/<datastore>/configurationparameters




On Wed, Jan 22, 2014 at 2:27 PM, Denis Makogon dmako...@mirantis.com wrote:
Goodday to all.

#3 looks more than acceptable.
/datastores/<datastore>/configuration/parameters.
According to configuration parameters design, a configuration set must be 
associated to exactly one datastore.

Best regards, Denis Makogon.


2014/1/22 Michael Basnight mbasni...@gmail.com
On Jan 22, 2014, at 10:19 AM, Kaleb Pomeroy wrote:

 My thoughts so far:

 /datastores/<datastore>/configuration/parameters (Option Three)
 + configuration set without an associated datastore is meaningless
 + a configuration set must be associated to exactly one datastore
 + each datastore must have 0-1 configuration set
 + All above relationships are immediately apparent
 - Listing all configuration sets becomes more difficult (which I don't think 
 that is a valid concern)

+1 to option 3, given what kaleb and craig have outlined so far. I dont see the 
above minus as a valid concern either, kaleb.




[openstack-dev] [trove] scheduled tasks redux

2014-01-23 Thread Greg Hill
The blueprint is here:

https://wiki.openstack.org/wiki/Trove/scheduled-tasks

I've been working on the REST API portion of this project, and as I was working 
on the client, a part of it didn't sit quite right.  As it is specified, it 
calls for two fields to define when and how often to run the task:

"frequency": "hourly|daily|weekly|monthly",
"time_window": "2012-03-28T22:00Z/2012-03-28T23:00Z",

The concept of combining two datetimes into a single field feels awkward when 
using the API from a client perspective.  I originally thought I'd just split 
it into two date times, but that still felt wrong.  We did some internal 
discussion here at Rackspace, and it was brought up that the date doesn't 
actually matter in this scenario.  All we care about is what time to run the 
task and how frequently to repeat it.  Apparently some of the original 
discussion was more around a crontab style entry, but with a time window rather 
than a fixed time, but that didn't get put into the spec.  For those who might 
be wondering, the purpose of the window rather than a fixed time is to give 
some leeway to the system to not overload things when everyone wants to fire 
off a backup at midnight.  There could be a configurable minimum window size 
that defaulted to 2 hours, so by default we'd only guarantee a task was run 
within a two hour window, which could be adjusted up or down by operators.

So I have basically two questions:

1. Does anyone see a problem with defining the repeating options as a single 
field rather than multiple fields?
2. Should we use the crontab format for this, or is that too terse?  We could go 
with a more fluid style like "Every Wednesday/Friday/Sunday at 12:00PM", but 
that's English-centric and much more difficult to parse programmatically.  I'd 
welcome alternate suggestions.
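As a concrete sketch of the client-side awkwardness in question 1 (hypothetical helper; the field format is taken from the spec fields quoted above): the combined ISO 8601 interval has to be split apart before either end is usable.

```python
from datetime import datetime

def parse_time_window(window):
    """Split the combined "start/end" interval field into two datetimes."""
    start, end = window.split("/")
    fmt = "%Y-%m-%dT%H:%M%z"  # matches the 2012-03-28T22:00Z form in the spec
    return datetime.strptime(start, fmt), datetime.strptime(end, fmt)

start, end = parse_time_window("2012-03-28T22:00Z/2012-03-28T23:00Z")
print((end - start).total_seconds() / 3600)  # 1.0
```

Two plain datetime fields (or, as discussed above, just a time plus a frequency) would spare every client this extra parsing step.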

Thanks in advance.

Greg


[openstack-dev] centralized notifications?

2014-01-14 Thread Greg Hill
Is there any project or proposed project for centralizing notifications from 
openstack services to alert tenants when things go wrong (or right)?  Say, for 
example, a nova instance failed to finish the build process, and the customer 
wants an email alert when that happens, or a trove database fails a backup and 
they want an SMS sent to some cell number when that happens.  

Greg





Re: [openstack-dev] centralized notifications?

2014-01-14 Thread Greg Hill
Thanks.  Is there any more detail about what that is going to look like and how 
far along it might be?

Greg

On Jan 14, 2014, at 9:13 AM, Julien Danjou jul...@danjou.info wrote:

 On Tue, Jan 14 2014, Greg Hill wrote:
 
 Is there any project or proposed project for centralizing notifications from
 openstack services to alert tenants when things go wrong (or right)? Say,
 for example, a nova instance failed to finish the build process, and the
 customer wants an email alert when that happens, or a trove database fails a
 backup and they want an SMS sent to some cell number when that happens.
 
 That should be covered by:
 
  https://blueprints.launchpad.net/ceilometer/+spec/alarm-on-notification
 
 -- 
 Julien Danjou
 ;; Free Software hacker ; independent consultant
 ;; http://julien.danjou.info




Re: [openstack-dev] [nova] [rfc] drop XML from v3 API entirely

2014-01-13 Thread Greg Hill
I'm not really an active nova contributor as of yet, but I'll +1 this if nova's 
XML support is anything like what I see in trove (which I believe just cloned 
how nova did it in the first place).  XML without a schema is terrible for a 
serialization format.  In my experience, the only people who actually use XML 
really want SOAP or XMLRPC (mostly .net and Java developers), which both give 
mechanisms for defining the schema of the request/response so data types like 
arrays and dates and booleans are sane to work with.  Doing XML in a generic 
fashion never adequately deals with those problems and is then rarely, if ever, 
actually used.
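A small standard-library illustration of the schema point (the document here is invented, not an actual nova or trove payload): schemaless XML yields strings for everything, while JSON at least preserves booleans and numbers.

```python
import json
import xml.etree.ElementTree as ET

# Invented payload for illustration only.
doc = ET.fromstring("<server><id>1</id><locked>False</locked></server>")
parsed_xml = {child.tag: child.text for child in doc}
parsed_json = json.loads('{"id": 1, "locked": false}')

print(parsed_xml)   # every value is a string; types are guesswork
print(parsed_json)  # int and bool survive the round trip
```

Without an XSD or similar schema, the XML consumer has to guess that "False" is a boolean and "1" is an integer, which is exactly the kind of ambiguity SOAP/XMLRPC schemas exist to resolve.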

Greg

On Jan 13, 2014, at 8:38 AM, Sean Dague s...@dague.net wrote:

 I know we've been here before, but I want to raise this again while there is 
 still time left in icehouse.
 
 I would like to propose that the Nova v3 API removes the XML payload 
 entirely. It adds complexity to the Nova code, and it requires duplicating 
 all our test resources, because we need to do everything onces for JSON and 
 once for XML. Even worse, the dual payload strategy that nova employed leaked 
 out to a lot of other projects, so they now think maintaining 2 payloads is a 
 good thing (which I would argue it is not).
 
 As we started talking about reducing tempest concurrency in the gate, I was 
 starting to think a lot about what we could shed that would let us keep up a 
 high level of testing, but bring our overall time back down. The fact that 
 Nova provides an extremely wide testing surface makes this challenging.
 
 I think it would be a much better situation if the Nova API is a single 
 payload type. The work on the jsonschema validation is also something where I 
 think we could get to a fully discoverable API, which would be huge.
 
 If we never ship v3 API with XML as stable, we can deprecate it entirely, and 
 let it die with v2 ( probably a year out ).
 
   -Sean
 
 -- 
 Sean Dague
 Samsung Research America
 s...@dague.net / sean.da...@samsung.com
 http://dague.net
 


Re: [openstack-dev] [infra] javascript templating library choice for status pages

2014-01-13 Thread Greg Hill
If you're just using it for client-side templates, you should be able to treat 
it like any other js library (jquery, etc) without using npm (node's package 
manager) for installation.  Handlebars, for example, has a single downloadable 
js file that is available on their website:

http://builds.handlebarsjs.com.s3.amazonaws.com/handlebars-v1.3.0.js

I'm coming in to the conversation without a lot of context, though, so I might 
be missing some reason why that won't work.

As for the incremental approach to using one of the larger frameworks, 
templates definitely do seem like an incremental improvement that won't really 
hinder adoption of the larger framework, since most of them are pluggable to 
work with most of the major template engines last I checked.  

Greg

On Jan 13, 2014, at 7:05 AM, Sean Dague s...@dague.net wrote:

 On 01/12/2014 09:56 PM, Michael Krotscheck wrote:
 If all you're looking for is a javascript-based in-browser templating
 system, then handlebars is a fine choice. I'm not certain on how complex
 status.html/status.js is, however if you expect it to grow to something
 more like an application then perhaps looking at angular as a full
 application framework might help you avoid both this growing pain and
 future ones (alternatives: Ember, backbone, etc).
 
 Honestly, I've not done enough large scale js projects to know whether we'd 
 consider status.js to be big or not. I just know it's definitely getting too 
 big for += all the html together and doing document.writes.
 
 I guess the real question I had is is there an incremental path towards any 
 of the other frameworks? I can see how to incrementally bring in templates, 
 but again my personal lack of experience on these others means I don't know.
 
 Quick warning though, a lot of the javascript community out there uses
 tooling that is built on top of Node.js, for which current official
 packages for Centos/Ubuntu don't exist, and therefore infra won't
 support it for openstack. Storyboard is able to get around this because
 it's not actually part of openstack proper, but you might be forced to
 manage your code manually. That's not a deal breaker in my opinion -
 it's just more tedious (though I think it might be less tedious than
 what you're doing right now).
 
 I'd ideally like to be able to function without node, mostly because it's 
 another development environment to have to manager. But I realize that's 
 pushing against the current at this point. So I agree, not a deal breaker.
 
   -Sean
 
 -- 
 Sean Dague
 Samsung Research America
 s...@dague.net / sean.da...@samsung.com
 http://dague.net
 


Re: [openstack-dev] [oslo] Common SSH

2014-01-13 Thread Greg Hill
Trove doesn't use ssh afaik.  It has an agent that runs in the guest that is 
communicated with via our normal RPC messaging options.

Greg

On Jan 13, 2014, at 11:10 AM, Doug Hellmann doug.hellm...@dreamhost.com wrote:




On Mon, Jan 13, 2014 at 11:34 AM, Bhuvan Arumugam bhu...@apache.org wrote:

On Mon, Jan 13, 2014 at 7:02 AM, Doug Hellmann doug.hellm...@dreamhost.com wrote:



On Mon, Jan 13, 2014 at 7:32 AM, Bhuvan Arumugam bhu...@apache.org wrote:
On Fri, Jan 10, 2014 at 11:24 PM, Sergey Skripnick sskripn...@mirantis.com wrote:

I appreciate that we want to fix the ssh client. I'm not certain that writing 
our own is the best answer.

I was supposed to fix oslo.processutils.ssh with this class, but it may
be fixed without it, not big deal.




In his comments on your pull request, the paramiko author recommended looking 
at Fabric. I know that Fabric has a long history in production. Does it 
provide the required features?


Fabric is too much for just command execution on a remote server. Spur seems like
a good choice for this.

I'd go with Fabric. It supports several remote server operations, file 
upload/download among them. We could just import the methods we are interested 
in. It in turn uses paramiko, supporting most ssh client options. If we begin 
using fabric for file upload/download, it'll open the door for more remote server 
operations. Bringing in fabric as part of oslo would be cool.

Where are we doing those sorts of operations?

Currently, we don't upload/download files to remote servers through ssh/scp. We 
do execute commands, and pipe multiple commands, in a few tempest tests when ssh 
is enabled. With oslo/fabric, we may develop common ground for dealing with 
remote servers, be it executing commands or dealing with files.

Are we using ssh to run commands anywhere else in OpenStack? Maybe in one of 
the orchestration layers like heat or trove?

Doug



 --
Regards,
Bhuvan Arumugam
www.livecipher.com



[openstack-dev] coding standards question

2014-01-07 Thread Greg Hill
I got a -1 on a review for a standards violation that isn't caught by the 
automated checks, so I was wondering why the automated check doesn't catch it.  
The violation was:

from X import Y, Z

According to the coding standards page on the openstack wiki, the coding 
standards are PEP8 (they just link to the PEP8 docs): 
https://wiki.openstack.org/wiki/CodingStandards and PEP8 explicitly says this 
format is allowed.

It was pointed out that there's an additional wiki page I had missed, 
http://docs.openstack.org/developer/hacking/ which specifies this rule.  So now 
that I see it is a rule, it comes back to my original question, why is it not 
enforced by the checker?  Apparently there's not a flake8 rule for this either 
https://flake8.readthedocs.org/en/2.0/warnings.html

So, two questions:

1. Is this really the rule or just a vaguely worded repeat of the PEP8 rule 
about import X, Y?
2. If it is the rule, what's involved in getting the pep8 tests to check for it?

My own personal frustration aside, this would be helpful for other newcomers, I 
imagine.  We have some pretty rigid and extensive coding standards, so it's not 
reasonable to expect new contributors to remember them all.  It's also much 
nicer to have an automated tool tell you that you violated a coding standard than 
to think you were OK and then have your code rejected two weeks later because of 
it.

Thanks,
Greg

P.S. I can fix the wiki to point to the right page after the discussion.


Re: [openstack-dev] coding standards question

2014-01-07 Thread Greg Hill
Thanks, Sean.  I'll work on adding this one to the hacking repo.  That brings up 
another question, though: what are the implications of suddenly enforcing a 
rule that wasn't previously enforced?  I know there are at least 30 other 
violations of this rule just within trove, and I imagine larger projects 
probably have more.  I'd hate to be the target of all the ire that sudden 
rejections of every commit would cause.  Do we have a way to make it off by 
default for some period, to let the projects all clean themselves up, then turn 
it on by default after that?
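For context on what adding the check involves: a hacking/flake8 check is just a generator function that is called with each logical line and yields (offset, message) pairs for violations. A rough sketch of the rule in question follows; the rule number and exact registration details are illustrative, not the real hacking code:

```python
import re

# Matches "from X import Y, Z" (more than one name in a from-import).
# Parenthesized multi-line imports are deliberately not flagged here.
MULTI_FROM_IMPORT = re.compile(r"^\s*from\s+\S+\s+import\s+[^(]*,")


def hacking_no_multi_from_import(logical_line):
    """H3xx: do not import multiple names in one 'from X import' line.

    hacking checks are generator functions invoked by flake8 with each
    logical line; yielding (offset, message) reports a violation.
    """
    if MULTI_FROM_IMPORT.match(logical_line):
        yield 0, "H3xx: one name per 'from X import' line"
```

Registering it would then be a matter of adding the function to hacking's plugin entry points so flake8 picks it up.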

Or we could just loosen the coding standards, but that's just crazy talk :D

Greg

On Jan 7, 2014, at 8:46 AM, Sean Dague s...@dague.net wrote:

 On 01/07/2014 09:26 AM, Greg Hill wrote:
 I got a -1 on a review for a standards violation that isn't caught by
 the automated checks, so I was wondering why the automated check doesn't
 catch it.  The violation was:
 
 from X import Y, Z
 
 According to the coding standards page on the openstack wiki, the coding
 standards are PEP8 (they just link to the PEP8 docs):
 https://wiki.openstack.org/wiki/CodingStandards and PEP8 explicitly says
 this format is allowed.
 
 It was pointed out that there's an additional wiki page I had missed,
 http://docs.openstack.org/developer/hacking/ which specifies this rule.
  So now that I see it is a rule, it comes back to my original question,
 why is it not enforced by the checker?  Apparently there's not a flake8
 rule for this either https://flake8.readthedocs.org/en/2.0/warnings.html
 
 So, two questions:
 
 1. Is this really the rule or just a vaguely worded repeat of the PEP8
 rule about import X, Y?
 2. If it is the rule, what's involved in getting the pep8 tests to check
 for it?
 
 Writing the hacking test to support it - 
 https://github.com/openstack-dev/hacking
 
 The policy leads the automatic enforcement scripts, so there are plenty of 
 rules in the policy that are not in automatic enforcement. Contributions to 
 fill in the gaps are welcomed.
 
 My own personal frustration aside, this would be helpful for other
 newcomers I imagine.  We have some pretty rigid and extensive coding
 standards, so its not reasonable to expect new contributors to remember
 them all.  It's also much nicer to have an automated tool tell you you
 violated some coding standard than to think you were ok and then have
 your code rejected 2 weeks later because of it.
 
 Thanks,
 Greg
 
 P.S. I can fix the wiki to point to the right page after the discussion.
 
 Agreed, it's all about bandwidth. Contributors on hacking to help fill it out 
 are appreciated. Right now it's mostly just Joe with a few others throwing in 
 when they can.
 
   -Sean
 
 -- 
 Sean Dague
 Samsung Research America
 s...@dague.net / sean.da...@samsung.com
 http://dague.net
 


Re: [openstack-dev] coding standards question

2014-01-07 Thread Greg Hill
So it turns out that trove just has this rule disabled.  At least I now know 
more about how this stuff works, I guess.  Sorry for the confusion.

Greg

On Jan 7, 2014, at 9:54 AM, Sean Dague s...@dague.net wrote:

 On 01/07/2014 10:19 AM, Greg Hill wrote:
 Thanks Sean.  I'll work on adding this one to the hacking repo.  That brings 
 up another question, though, what are the implications of suddenly enforcing 
 a rule that wasn't previously enforced?  I know there are at least 30 other 
 violations of this rule just within trove, and I imagine larger projects 
 probably have more.  I'd hate to be the target of all the ire that sudden 
 rejections of every commit would cause.  Do we have a way to make it off by 
 default for some period to let the projects all clean themselves up then 
 turn it on by default after that?
 
 New rules only get released as part of new semver bumps on hacking, and
 all the projects are pinned on their upper bound on hacking. i.e.
 
 hacking>=0.8.0,<0.9
 
 So new rules would be going into the 0.9.x release stream at this point.
 Once 0.9.0 is released, we'll up the global requirements. Then projects
 should update their pins, and either address the issues, or add an
 ignore for the rules they do not want to enforce (either by policy, or
 because now is not a good time to fix them).
 
 So it is minimally disruptive.
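 Concretely, the pin and the opt-out might look like the following (the file
 layout is shown for illustration; H301 is hacking's one-import-per-line
 code, but check the hacking docs for the exact number before relying on it):

```ini
# test-requirements.txt: cap hacking so new rules only arrive
# when the project bumps the pin deliberately.
hacking>=0.8.0,<0.9

# tox.ini: temporarily ignore a newly enforced rule until the
# existing violations are cleaned up.
[flake8]
ignore = H301
```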
 
   -Sean
 
 -- 
 Sean Dague
 http://dague.net
 


Re: [openstack-dev] [trove][mistral] scheduled tasks

2013-12-31 Thread Greg Hill
I guess this isn't a new discussion.  I did some more digging and apparently 
this is what came out of the last discussion:

https://wiki.openstack.org/wiki/EventScheduler

That definitely seems like it would be something simple we could use, since it 
only provides scheduling and that's all we need, but it doesn't appear that 
it's had any traction since that time.  I guess that got absorbed along with 
Convection into Mistral (is that right?).

I tend to agree with the sentiment that the scheduling component should live 
outside the workflow service, especially for use cases like this where we just 
need scheduling and not the workflow portions as we're just scheduling things 
that are already achievable via single API calls (to my knowledge).

It seems like we basically have 4 options at this point:

1. Wait for/help finish the scheduling component of mistral and use that
2. Build Qonos workers to do the trove needful and integrate with that
3. Build this proposed EventScheduler thing and have trove use that
4. Build something simple internal to trove for now and revisit when things 
have matured more

Despite my initial enthusiasm about Qonos after it was mentioned earlier today, 
the more I look into it, the more it seems like the wrong fit.  It could 
definitely do it, but the way it's structured, it appears that we'd have to add 
code to qonos/worker for each action we wanted to schedule, which just seems 
like a pain for what amounts to make this API call to trove.

My gut says working on EventScheduler is probably the best/most ideal option, 
but time constraints and what-not make build it into trove the most likely 
course of action for now.
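For option 4, the core of a simple internal scheduler is small: keep a heap of (next-run-time, task) entries and periodically run whatever is due, rescheduling each task after it fires. A toy sketch follows; the names are hypothetical, and a real version would persist schedules in trove's database and call the trove API rather than plain callables:

```python
import heapq
import itertools


class SimpleScheduler:
    """Toy interval scheduler: run each callable every `interval` seconds.

    Entries are (next_run, seq, interval, fn); seq is a tiebreaker so the
    heap never has to compare two callables directly.
    """

    def __init__(self):
        self._heap = []
        self._seq = itertools.count()

    def add(self, interval, fn, now=0):
        heapq.heappush(self._heap, (now + interval, next(self._seq), interval, fn))

    def run_due(self, now):
        """Run everything due at time `now`, reschedule it, and return it."""
        ran = []
        while self._heap and self._heap[0][0] <= now:
            _, _, interval, fn = heapq.heappop(self._heap)
            fn()
            ran.append(fn)
            heapq.heappush(self._heap, (now + interval, next(self._seq), interval, fn))
        return ran
```

A periodic worker (or an eventlet loop, in trove's case) would simply call `run_due(time.time())` on a tick; cron-expression support could later replace the fixed interval.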

Greg

On Dec 31, 2013, at 10:06 AM, Joshua Harlow 
harlo...@yahoo-inc.com wrote:

Agreed, taskflow doesn't currently provide scheduling; the thinking was that 
reliable execution that can be restarted and resumed is the foundation on top of 
which someone using taskflow can easily provide scheduling. Better, IMHO, to 
have a project do that foundation well (since openstack would benefit from 
it) than to try to pack in so many features that it does none of them well (that 
kind of kitchen-sink approach seems to happen more often than not, sadly).

But in reality it's all about compromises and finding the solution that makes 
sense and works, and happy new year!! :P

Sent from my really tiny device...

On Dec 30, 2013, at 9:03 PM, Renat Akhmerov 
rakhme...@mirantis.com wrote:

Greg,

Georgy is right. We're now actively working on the PoC and we've already 
implemented the functionality we initially planned, including cron-based 
scheduling. You can take a look at our repo and evaluate what we've done; we'd 
be very glad to hear feedback from anyone potentially interested in 
Mistral. We were supposed to deliver the PoC at the end of December; however, we 
decided not to rush and to include several really cool things that we came up with 
while working on it. They should demonstrate the whole idea of Mistral much 
better and expose functionality for more potential use cases. A couple of days 
ago I sent out information about additional changes in the DSL that we want to 
implement (etherpad: https://etherpad.openstack.org/p/mistral-poc), so if you'd 
like, please join the discussion and let us know how we can evolve the project 
to better fit your needs. In fact, even though we call it a PoC, it's already in 
good shape and pretty soon (~1.5 months) is going to be mature enough to use 
as a dependency for other projects.

As far as security goes, we thought about this and have a vision of how it 
could be implemented. Generally, later on we're planning to implement a sort of 
Role-Based Access Control (RBAC) to, first of all, isolate user workbooks 
(definitions of tasks, actions, events) from each other and to deal with access 
patterns to OpenStack services. We would encourage you to file a BP with a 
description of what Trove would need in that regard.

I looked at https://wiki.openstack.org/wiki/Trove/scheduled-tasks and at 
first glance Mistral looks like a good fit here, especially if you're interested in 
a standalone REST service with capabilities like execution monitoring, 
history, language independence, and HA (i.e. you schedule backups via Mistral 
and Trove itself shouldn't have to care about the availability of any 
scheduling-related functionality). TaskFlow may also be helpful in case your 
scheduled jobs are representable as flows using one of the TaskFlow patterns. 
However, in my understanding you'll have to implement scheduling yourself, since 
TaskFlow does not support it now; at least I didn't find anything like that 
(Joshua can provide more details on that).

Thanks.

Renat Akhmerov
@Mirantis Inc.

Re: [openstack-dev] [trove] Proposal to add Auston McReynolds to trove-core

2013-12-30 Thread Greg Hill
+1

On Dec 27, 2013, at 4:48 PM, Michael Basnight 
mbasni...@gmail.com wrote:

Howdy,

Im proposing Auston McReynolds (amcrn) to trove-core.

Auston has been working with trove for a while now. He is a great reviewer. He 
is incredibly thorough, and has caught more than one critical error with 
reviews and helps connect large features that may overlap (config edits + multi 
datastores comes to mind). The code he submits is top notch, and we frequently 
ask for his opinion on architecture / feature / design.

https://review.openstack.org/#/dashboard/8214
https://review.openstack.org/#/q/owner:8214,n,z
https://review.openstack.org/#/q/reviewer:8214,n,z

Please respond with +1/-1, or any further comments.


Re: [openstack-dev] [trove][mistral] scheduled tasks

2013-12-30 Thread Greg Hill
I accidentally sent this reply to Josh directly.

Greg

On Dec 30, 2013, at 12:17 PM, Greg Hill 
greg.h...@rackspace.com wrote:

Taskflow seems like it would be a good fit for implementing or 
re-implementing some of the tasks we hope to automate, but the first set 
of desired tasks is already built.  We're simply building the scheduling logic 
to automate them, so I don't see what taskflow would buy us there.

Greg

On Dec 30, 2013, at 12:14 PM, Joshua Harlow 
harlo...@yahoo-inc.com wrote:

Any reason to not use taskflow (https://wiki.openstack.org/wiki/TaskFlow) to 
help u here??

I think it could be easily adapted to do what u want, and would save u from 
having to recreate the same task execution logic that everyone seems to rebuild…
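The task-execution logic Josh is referring to (run tasks in order, and revert completed ones in reverse if a later task fails) is roughly the following pattern. This is a toy pure-Python illustration of the idea, not the actual taskflow API:

```python
class Task:
    """One unit of work with an optional compensating action."""

    def execute(self):
        raise NotImplementedError

    def revert(self):
        pass  # undo execute(); default is a no-op


def run_linear_flow(tasks):
    """Execute tasks in order; on failure, revert completed ones in reverse."""
    done = []
    try:
        for t in tasks:
            t.execute()
            done.append(t)
    except Exception:
        for t in reversed(done):
            t.revert()
        raise
```

Taskflow builds on this shape with persistence, so a crashed flow can be resumed from its last recorded state instead of rerun from scratch.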

-Josh

From: Greg Hill greg.h...@rackspace.com
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date: Monday, December 30, 2013 at 9:59 AM
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Subject: [openstack-dev] [trove][mistral] scheduled tasks

I've begun working on the scheduled tasks feature that will allow automated 
backups (and other things) in trove.

Here's the blueprint:
https://wiki.openstack.org/wiki/Trove/scheduled-tasks

I've heard some mention that mistral might be an option rather than building 
something into trove.  I did some research and it seems like it *might* be a 
good fit, but it also seems like a bit of overkill for something that could be 
built in to trove itself pretty easily.  There's also the security concern of 
having to give mistral access to the trove management API in order to allow it 
to fire off backups and other tasks on behalf of users, but maybe that's just 
my personal paranoia and it's really not much of a concern.

My current plan is to not use mistral, at least for the original 
implementation, because it's not yet ready and we have a fairly urgent need for 
the functionality.  We could make it an optional feature later for people who 
are running mistral and want to use it for this purpose.

I'd appreciate any and all feedback before I get too far along.

Greg




Re: [openstack-dev] [trove] datastore migration issues

2013-12-19 Thread Greg Hill
We did consider doing that, but decided it wasn't really any different from the 
other options, as it requires the deployer to know to alter that data.  It 
would require the fewest code changes, though.  It was also my understanding 
that mysql variants (percona and mariadb) were a possibility as well, which is 
what brought on the objection to just defaulting in code.  Also, we can't 
derive the version being used, so we *could* fill in a dummy version and 
assume mysql, but I don't feel that solves the problem or the objections 
to the earlier solutions.  And then we'd also have bogus data in the database.

Since there's no perfect solution, I'm really just hoping to gather consensus 
among people who are running existing trove installations and have yet to 
upgrade to the newer code about what would be easiest for them.  My 
understanding is that list is basically HP and Rackspace, and maybe Ebay?, but 
the hope was that bringing the issue up on the list might confirm or refute 
that assumption and drive the conversation to a suitable workaround for those 
affected, which hopefully isn't that many organizations at this point.

The options are basically:

1. Put the onus on the deployer to correct existing records in the database.
2. Have the migration script put dummy data in the database which you have to 
correct.
3. Put the onus on the deployer to fill out the values in the config file.
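For what option 2 (or a deployer doing option 1 by hand) amounts to, the backfill is a single UPDATE once a placeholder datastore-version row exists. Sketched here against sqlite3 purely for illustration; a real trove migration would be a sqlalchemy-migrate script against the actual schema, and the 'mysql-5.5-dummy' id is made up:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE datastore_versions (id TEXT PRIMARY KEY, name TEXT);
    CREATE TABLE instances (id TEXT PRIMARY KEY, datastore_version_id TEXT);
    -- pre-datastore instances have a NULL datastore_version_id
    INSERT INTO instances VALUES ('inst-1', NULL), ('inst-2', NULL);
""")

# 1. Create the placeholder version the deployer will later correct.
conn.execute("INSERT INTO datastore_versions VALUES ('mysql-5.5-dummy', 'mysql')")

# 2. Backfill every legacy instance to point at it.
conn.execute(
    "UPDATE instances SET datastore_version_id = ? "
    "WHERE datastore_version_id IS NULL",
    ("mysql-5.5-dummy",),
)
```

Either way, the deployer still has to know the dummy row exists and swap in the real datastore versions afterward, which is the crux of the objection above.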

Greg

On Dec 18, 2013, at 8:46 PM, Robert Myers 
myer0...@gmail.com wrote:


There is the database migration for datastores. We should add a function to 
backfill the existing data, either with dummy data or set to 'mysql', as 
that was the only possibility before datastores.

On Dec 18, 2013 3:23 PM, Greg Hill 
greg.h...@rackspace.com wrote:
I've been working on fixing a bug related to migrating existing installations 
to the new datastore code:

https://bugs.launchpad.net/trove/+bug/1259642

The basic gist is that existing instances won't have any data in the 
datastore_version_id field in the database unless we somehow populate that data 
during migration, and not having that data populated breaks a lot of things 
(including the ability to list instances or delete or resize old instances).  
It's impossible to populate that data in an automatic, generic way, since it's 
highly vendor-dependent on what database and version they currently support, 
and there's not enough data in the older schema to populate the new tables 
automatically.

So far, we've come up with some non-optimal solutions:

1. The first iteration was to assume 'mysql' as the database manager on 
instances without a datastore set.
2. The next iteration was to make the default value be configurable in 
trove.conf, but default to 'mysql' if it wasn't set.
3. It was then proposed that we could just use the 'default_datastore' value 
from the config, which may or may not be set by the operator.

My problem with any of these approaches beyond the first is that requiring 
people to populate config values in order to successfully migrate to the newer 
code is really no different than requiring them to populate the new database 
tables with appropriate data and updating the existing instances with the 
appropriate values.  Either way, it's now highly dependent on people deploying 
the upgrade to know about this change and react accordingly.

Does anyone have a better solution that we aren't considering?  Is this even 
worth the effort given that trove has so few current deployments that we can 
just make sure everyone is populating the new tables as part of their upgrade 
path and not bother fixing the code to deal with the legacy data?

Greg



[openstack-dev] [trove] datastore migration issues

2013-12-18 Thread Greg Hill
I've been working on fixing a bug related to migrating existing installations 
to the new datastore code:

https://bugs.launchpad.net/trove/+bug/1259642

The basic gist is that existing instances won't have any data in the 
datastore_version_id field in the database unless we somehow populate that data 
during migration, and not having that data populated breaks a lot of things 
(including the ability to list instances or delete or resize old instances).  
It's impossible to populate that data in an automatic, generic way, since it's 
highly vendor-dependent on what database and version they currently support, 
and there's not enough data in the older schema to populate the new tables 
automatically.

So far, we've come up with some non-optimal solutions:

1. The first iteration was to assume 'mysql' as the database manager on 
instances without a datastore set.
2. The next iteration was to make the default value be configurable in 
trove.conf, but default to 'mysql' if it wasn't set.
3. It was then proposed that we could just use the 'default_datastore' value 
from the config, which may or may not be set by the operator.

My problem with any of these approaches beyond the first is that requiring 
people to populate config values in order to successfully migrate to the newer 
code is really no different than requiring them to populate the new database 
tables with appropriate data and updating the existing instances with the 
appropriate values.  Either way, it's now highly dependent on people deploying 
the upgrade to know about this change and react accordingly.

Does anyone have a better solution that we aren't considering?  Is this even 
worth the effort given that trove has so few current deployments that we can 
just make sure everyone is populating the new tables as part of their upgrade 
path and not bother fixing the code to deal with the legacy data?

Greg