Re: [openstack-dev] [TC][Keystone] Rehashing the Pecan/Falcon/other WSGI debate

2015-05-04 Thread Kurt Griffiths
Hi all, 

To be clear, both Pecan and Falcon are actively maintained and have
healthy communities. In any case, I tend to point OpenStack projects
toward Pecan as the default choice, since that lets you take advantage of
all the benefits standardization has to offer. Of course, you need to
quantify and balance those benefits against Keystone’s specific
requirements, but Pecan should be able to do what you need.
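
For folks who haven’t tried either framework, minimal “hello world” apps look
something like this (sketches from memory of each project’s docs, untested;
either app runs under any WSGI container such as uWSGI or gunicorn):

    # Pecan: object-dispatch routing (sketch)
    from pecan import expose, make_app

    class RootController(object):
        @expose('json')
        def index(self):
            return {'status': 'ok'}

    application = make_app(RootController())

    # Falcon: explicit resources and routes (sketch)
    import falcon

    class StatusResource(object):
        def on_get(self, req, resp):
            resp.body = '{"status": "ok"}'

    application = falcon.API()
    application.add_route('/status', StatusResource())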

-Kurt



[openstack-dev] [horizon] [zaqar] Need some help designing push notifications

2014-10-30 Thread Kurt Griffiths
Hi everyone, 

In the past we’ve discussed how Zaqar could be leveraged by Horizon to get
status notifications and such without having to poll backend APIs. In this
way, the user experience could be improved while also removing some load
from those APIs.

During Kilo the team is finally going to get to work on supporting this
use case (yay!). We would love it if some folks from the Horizon team
could join us to plan the details in our first design session on Tuesday
at 11:15:

http://kilodesignsummit.sched.org/event/c41d0532de6c2fafbe32ac559f85a24a

Hope to see you there!

--Kurt



Re: [openstack-dev] [zaqar] Juno Performance Testing (Round 2)

2014-09-17 Thread Kurt Griffiths
Great question. So, some use cases, like the guest agent, would like to see
something around ~20ms if the agent needs to respond to requests from a
control surface/panel while a user clicks around. I also spoke with a social
media company that was interested in low latency simply because they have a
big volume of messages to slog through in a timely manner or they will get
behind (long-polling or websocket support was something they would like to
see).

Other use cases should be fine with, say, 100ms. I want to say Heat’s needs 
probably fall into that latter category, but I’m only speculating.

Some other feedback we got a while back was that people would like a knob to 
tweak queue attributes. E.g., the tradeoff between durability and performance. 
That led to work on queue “flavors”, which Flavio has been working on this past 
cycle, so I’ll let him chime in on that.
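
To make the idea concrete, the knob could end up looking something like the
sketch below. This is purely hypothetical: the endpoints and field names are
my own placeholders, since the flavors work was still in review at the time.

    # Hypothetical sketch of queue "flavors": an operator maps a flavor to a
    # pool with certain durability/performance tradeoffs, and a user requests
    # that flavor when creating a queue. Paths and fields are placeholders.
    import json
    import requests

    ZAQAR = 'http://localhost:8888/v1.1'  # assumed local endpoint
    HEADERS = {'Content-Type': 'application/json'}

    # Operator: register a flavor backed by a durable (disk-backed) pool
    requests.put(ZAQAR + '/flavors/durable', headers=HEADERS,
                 data=json.dumps({'pool': 'mongo-pool-1'}))

    # User: create a queue that requests the durable flavor
    requests.put(ZAQAR + '/queues/billing-events', headers=HEADERS,
                 data=json.dumps({'_flavor': 'durable'}))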

From: Joe Gordon <joe.gord...@gmail.com>
Reply-To: OpenStack Dev <openstack-dev@lists.openstack.org>
Date: Wednesday, September 17, 2014 at 2:32 PM
To: OpenStack Dev <openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [zaqar] Juno Performance Testing (Round 2)

Can you further quantify what you would consider too slow? Is 100ms too slow?


Re: [openstack-dev] [zaqar] Juno Performance Testing (Round 3)

2014-09-16 Thread Kurt Griffiths
All results have been posted for the 2x web head + 2x redis tests (C3). I
also rearranged the wiki page to make it easier to compare the impact of
adding more capacity on the backend.

In C3, the load generator’s CPUs were fully saturated, while there was
still plenty of headroom on the web heads (~60% remaining CPU capacity)
and redis server instances (50% remaining across 2 procs), based on load
average. Therefore, I think this setup could actually handle a lot more
load than I attempted, and that would be a good followup test.

Keeping in mind that I haven’t spent much time tuning the systems and the
Redis driver still has several opportunities for further optimization, I
think these results so far are a pretty good sign we are on the right
track. I was happy to see that all requests were succeeding and I didn’t
find any race conditions during the tests, so kudos to the team for their
careful coding and reviews!

On 9/16/14, 2:43 PM, "Kurt Griffiths"  wrote:

>Results are now posted for all workloads for 2x web heads and 1x Redis
>proc (Configuration 2). Stats are also available for the write-heavy
>workload with 2x webheads and 2x redis procs (Configuration 3). The latter
>results look promising, and I suspect the setup could easily handle a
>significantly higher load. Multiple load generator boxes would need to be
>used (noted on the etherpad).
>
>On 9/16/14, 10:23 AM, "Kurt Griffiths" 
>wrote:
>
>>Hi crew, as promised I’ve continued to work through the performance test
>>plan. I’ve started a wiki page for the next batch of tests and results:
>>
>>https://wiki.openstack.org/wiki/Zaqar/Performance/PubSub/Redis
>>
>>I am currently running the same tests again with 2x web heads, and will
>>update the wiki page with them when they finish (it takes a couple hours
>>to run each batch of tests). Then, I plan to add an additional Redis proc
>>and run everything again.
>



Re: [openstack-dev] [zaqar] Juno Performance Testing (Round 3)

2014-09-16 Thread Kurt Griffiths
Results are now posted for all workloads for 2x web heads and 1x Redis
proc (Configuration 2). Stats are also available for the write-heavy
workload with 2x webheads and 2x redis procs (Configuration 3). The latter
results look promising, and I suspect the setup could easily handle a
significantly higher load. Multiple load generator boxes would need to be
used (noted on the etherpad).

On 9/16/14, 10:23 AM, "Kurt Griffiths" 
wrote:

>Hi crew, as promised I’ve continued to work through the performance test
>plan. I’ve started a wiki page for the next batch of tests and results:
>
>https://wiki.openstack.org/wiki/Zaqar/Performance/PubSub/Redis
>
>I am currently running the same tests again with 2x web heads, and will
>update the wiki page with them when they finish (it takes a couple hours
>to run each batch of tests). Then, I plan to add an additional Redis proc
>and run everything again.



Re: [openstack-dev] [zaqar] Juno Performance Testing (Round 3)

2014-09-16 Thread Kurt Griffiths
Thanks for the reminder! I’ll make note of that. In these tests the
clients are hitting Nginx (which is acting as a load balancer) so I could
try disabling keep-alive there and seeing what happens. So far I just used
the default that was written into the conf when the package was installed
("keepalive_timeout 65;”).

FWIW, I started an etherpad to track a shortlist of ideas:

https://etherpad.openstack.org/p/zaqar-performance-testing


On 9/16/14, 11:58 AM, "Jay Pipes"  wrote:

>On 09/16/2014 11:23 AM, Kurt Griffiths wrote:
>> Hi crew, as promised I’ve continued to work through the performance test
>> plan. I’ve started a wiki page for the next batch of tests and results:
>>
>> https://wiki.openstack.org/wiki/Zaqar/Performance/PubSub/Redis
>>
>> I am currently running the same tests again with 2x web heads, and will
>> update the wiki page with them when they finish (it takes a couple hours
>> to run each batch of tests). Then, I plan to add an additional Redis
>>proc
>> and run everything again. After that, there are a few other things that
>>we
>> could do, depending on what everyone wants to see next.
>>
>> * Run all these tests for producer-consumer (task distribution)
>>workloads
>> * Tune Redis, uWSGI and see if we can't improve latency, stdev, etc.
>> * Do a few runs with varying message sizes
>> * Continue increasing load and adding additional web heads
>> * Continue increasing load and adding additional redis procs
>> * Vary number of queues
>> * Vary number of project-ids
>> * Vary message batch sizes on post/get/claim
>
>Don't forget my request to identify the effect of keepalive settings in
>uWSGI :)
>
>Thanks!
>-jay
>
>



[openstack-dev] [zaqar] Juno Performance Testing (Round 3)

2014-09-16 Thread Kurt Griffiths
Hi crew, as promised I’ve continued to work through the performance test
plan. I’ve started a wiki page for the next batch of tests and results:

https://wiki.openstack.org/wiki/Zaqar/Performance/PubSub/Redis

I am currently running the same tests again with 2x web heads, and will
update the wiki page with them when they finish (it takes a couple hours
to run each batch of tests). Then, I plan to add an additional Redis proc
and run everything again. After that, there are a few other things that we
could do, depending on what everyone wants to see next.

* Run all these tests for producer-consumer (task distribution) workloads
* Tune Redis, uWSGI and see if we can't improve latency, stdev, etc.
* Do a few runs with varying message sizes
* Continue increasing load and adding additional web heads
* Continue increasing load and adding additional redis procs
* Vary number of queues
* Vary number of project-ids
* Vary message batch sizes on post/get/claim

P.S. As mentioned in yesterday’s team meeting, we are starting to work
on integration with Rally as our long-term performance testing solution. I
encourage everyone to get involved and contribute to that project; your
contributions benefit not only Zaqar, but other OS projects as well who
are starting to use Rally to keep an eye on their own services’
performance. Kudos to Boris et al. for all their hard work there.

--Kurt



Re: [openstack-dev] [zaqar] Juno Performance Testing (Round 2)

2014-09-16 Thread Kurt Griffiths
Right, graphing those sorts of variables has always been part of our test plan. 
What I’ve done so far was just some pilot tests, and I realize now that I 
wasn’t very clear on that point. I wanted to get a rough idea of where the 
Redis driver sat in case there were any obvious bug fixes that needed to be 
taken care of before performing more extensive testing. As it turns out, I did 
find one bug that has since been fixed.

Regarding latency, saying that it "is not important” is an exaggeration; it is 
definitely important, just not the only thing that is important. I have spoken 
with a lot of prospective Zaqar users since the inception of the project, and 
one of the common threads was that latency needed to be reasonable. For the use 
cases where they see Zaqar delivering a lot of value, requests don't need to be 
as fast as, say, ZMQ, but they do need something that isn’t horribly slow, 
either. They also want HTTP, multi-tenant, auth, durability, etc. The goal is 
to find a reasonable amount of latency given our constraints and also, 
obviously, be able to deliver all that at scale.

In any case, I’ve continued working through the test plan and will be
publishing further test results shortly.

> graph latency versus number of concurrent active tenants

By tenants do you mean in the sense of OpenStack tenants/project IDs, or in
the sense of “clients/workers”? For the latter case, the pilot tests I’ve done
so far used multiple clients (though not graphed), but in the former case only
one “project” was used.

From: Joe Gordon <joe.gord...@gmail.com>
Reply-To: OpenStack Dev <openstack-dev@lists.openstack.org>
Date: Friday, September 12, 2014 at 1:45 PM
To: OpenStack Dev <openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [zaqar] Juno Performance Testing (Round 2)

If zaqar is like amazon SQS, then the latency for a single message and the
throughput for a single tenant is not important. I wouldn't expect anyone who
has latency-sensitive workloads or needs massive throughput to use zaqar, as
these people wouldn't use SQS either. The consistency of the latency (it
shouldn't change under load) and zaqar's ability to scale horizontally matter
much more. What would be great is to see some other things benchmarked
instead:

* graph latency versus number of concurrent active tenants
* graph latency versus message size
* How throughput scales as you scale up the number of assorted zaqar
components. If one of the benefits of zaqar is its horizontal scalability,
let's see it.
* How does this change with message batching?


Re: [openstack-dev] [zaqar] Juno Performance Testing (Round 2)

2014-09-11 Thread Kurt Griffiths
On 9/11/14, 2:11 PM, "Devananda van der Veen" 
wrote:

>OK - those resource usages sound better. At least you generated enough
>load to saturate the uWSGI process CPU, which is a good point to look
>at performance of the system.
>
>At that peak, what was the:
>- average msgs/sec
>- min/max/avg/stdev time to [post|get|delete] a message

To be honest, it was a quick test and I didn’t note the exact metrics
other than eyeballing them to see that they were similar to the results
that I published for the scenarios that used the same load options (e.g.,
I just re-ran some of the same test scenarios).

Some of the metrics you mention aren’t currently reported by zaqar-bench,
but could be added easily enough. In any case, I think zaqar-bench is
going to end up being mostly useful to track relative performance gains or
losses on a patch-by-patch basis, and also as an easy way to smoke-test
both python-marconiclient and the service. For large-scale testing and
detailed metrics, other tools (e.g., Tsung, JMeter) are better for the
job, so I’ve been considering using them in future rounds.
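
For example, deriving the min/max/avg/stdev stats you mention from a list of
per-request latencies is only a few lines (a sketch of what could be added;
zaqar-bench may end up structuring it differently):

    # Sketch: summary stats over per-request latencies (in seconds)
    import math

    def latency_stats(samples):
        n = len(samples)
        mean = sum(samples) / n
        variance = sum((s - mean) ** 2 for s in samples) / n
        return {'min': min(samples), 'max': max(samples),
                'avg': mean, 'stdev': math.sqrt(variance)}

    print(latency_stats([0.0021, 0.0018, 0.0105, 0.0019]))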

>Is that 2,181 msg/sec total, or per-producer?

That metric was a combined average rate for all producers.

>
>I'd really like to see the total throughput and latency graphed as #
>of clients increases. Or if graphing isn't your thing, even just post
>a .csv of the raw numbers and I will be happy to graph it.
>
>It would also be great to see how that scales as you add more Redis
>instances until all the available CPU cores on your Redis host are in
>use.

Yep, I’ve got a long list of things like this that I’d like to see in
future rounds of performance testing (and I welcome anyone in the
community with an interest to join in), but I have to balance that effort
with a lot of other things that are on my plate right now.

Speaking generally, I’d like to see the project bake this in over time as
part of the CI process. It’s definitely useful information not just for
the developers but also for operators in terms of capacity planning. We’ve
talked as a team about doing this with Rally (and in fact, some work has
been started there), but it may be useful to also run a large-scale test
on a regular basis (at least per milestone). Regardless, I think it would
be great for the Zaqar team to connect with other projects (at the
summit?) who are working on perf testing to swap ideas, collaborate on
code/tools, etc.

--KG




Re: [openstack-dev] [zaqar] Juno Performance Testing (Round 2)

2014-09-10 Thread Kurt Griffiths
On 9/10/14, 3:58 PM, "Devananda van der Veen" 
wrote:

>I'm going to assume that, for these benchmarks, you configured all the
>services optimally.

Sorry for any confusion; I am not trying to hide anything about the setup.
I thought I was pretty transparent about the way uWSGI, MongoDB, and Redis
were configured. I tried to stick to mostly default settings to keep
things simple, making it easier for others to reproduce/verify the results.

Is there further information about the setup that you were curious about
that I could provide? Was there a particular optimization that you didn’t
see that you would recommend?

>I'm not going to question why you didn't run tests
>with tens or hundreds of concurrent clients,

If you review the different tests, you will note that a couple of them
used at least 100 workers. That being said, I think we ought to try higher
loads in future rounds of testing.

>or why you only ran the
>tests for 10 seconds.

In Round 1 I did mention that I wanted to do a follow-up with a longer
duration. However, as I alluded to in the preamble for Round 2, I kept
things the same for the redis tests to compare with the mongo ones done
previously.

We’ll increase the duration in the next round of testing.

>Instead, I'm actually going to question how it is that, even with
>relatively beefy dedicated hardware (128 GB RAM in your storage
>nodes), Zaqar peaked at around 1,200 messages per second.

I went back and ran some of the tests and never saw memory go over ~20M
(as observed with redis-top) so these same results should be obtainable on
a box with a lot less RAM. Furthermore, the tests only used 1 CPU on the
Redis host, so again, similar results should be achievable on a much more
modest box.

FWIW, I went back and ran a couple scenarios to get some more data points.
First, I did one with 50 producers and 50 observers. In that case, the
single CPU on which the OS scheduled the Redis process peaked at 30%. The
second test I did was with 50 producers + 5 observers + 50 consumers
(which claim messages and delete them rather than simply page through
them). This time, Redis used 78% of its CPU. I suppose this should not be
surprising because the consumers do a lot more work than the observers.
Meanwhile, load on the web head was fairly high: around 80% for all 20
CPUs. This tells me that python and/or uWSGI are working pretty hard to
serve these requests, and there may be some opportunities to optimize that
layer. I suspect there are also some opportunities to reduce the number of
Redis operations and roundtrips required to claim a batch of messages.
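
To sketch the kind of roundtrip reduction I mean (illustrative only; this is
not the actual driver code, and the key names are made up), several commands
can be batched into a single roundtrip with a pipeline:

    # Illustrative: batch N Redis reads into one network roundtrip
    import redis

    r = redis.StrictRedis(host='localhost', port=6379)

    pipe = r.pipeline(transaction=False)
    for msg_id in ('msg1', 'msg2', 'msg3'):   # hypothetical message IDs
        pipe.hgetall('zaqar:msg:' + msg_id)   # hypothetical key schema
    messages = pipe.execute()                 # one roundtrip, three replies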

The other thing to consider is that in these first two rounds I did not
test increasing amounts of load (number of clients performing concurrent
requests) and graph that against latency and throughput. Out of curiosity,
I just now did a quick test to compare the messages enqueued with 50
producers + 5 observers + 50 consumers vs. adding another 50 producer
clients and found that the producers were able to post 2,181 messages per
second while giving up only 0.3 ms.

--KG



Re: [openstack-dev] [zaqar] Juno Performance Testing (Round 2)

2014-09-10 Thread Kurt Griffiths
Thanks! Looks good. Only thing I noticed was that footnotes were still
referenced, but did not appear at the bottom of the page.

On 9/10/14, 6:16 AM, "Flavio Percoco"  wrote:

>I've collected the information from both performance tests and put it in
>the project's wiki[0]. Please, double check :D



[openstack-dev] [zaqar] Juno Performance Testing (Round 2)

2014-09-09 Thread Kurt Griffiths
Hi folks,

In this second round of performance testing, I benchmarked the new Redis
driver. I used the same setup and tests as in Round 1 to make it easier to
compare the two drivers. I did not test Redis in master-slave mode, but
that likely would not make a significant difference in the results since
Redis replication is asynchronous[1].

As always, the usual benchmarking disclaimers apply (i.e., take these
numbers with a grain of salt; they are only intended to provide a ballpark
reference; you should perform your own tests, simulating your specific
scenarios and using your own hardware; etc.).

## Setup ##

Rather than VMs, I provisioned some Rackspace OnMetal[3] servers to mitigate
noisy-neighbor effects when running the performance tests:

* 1x Load Generator
    * Hardware
        * 1x Intel Xeon E5-2680 v2 2.8 GHz
        * 32 GB RAM
        * 10Gbps NIC
        * 32GB SATADOM
    * Software
        * Debian Wheezy
        * Python 2.7.3
        * zaqar-bench
* 1x Web Head
    * Hardware
        * 1x Intel Xeon E5-2680 v2 2.8 GHz
        * 32 GB RAM
        * 10Gbps NIC
        * 32GB SATADOM
    * Software
        * Debian Wheezy
        * Python 2.7.3
        * zaqar server
            * storage=mongodb
            * partitions=4
            * MongoDB URI configured with w=majority
        * uWSGI + gevent
            * config: http://paste.openstack.org/show/100592/
            * app.py: http://paste.openstack.org/show/100593/
* 3x MongoDB Nodes
    * Hardware
        * 2x Intel Xeon E5-2680 v2 2.8 GHz
        * 128 GB RAM
        * 10Gbps NIC
        * 2x LSI Nytro WarpDrive BLP4-1600[2]
    * Software
        * Debian Wheezy
        * mongod 2.6.4
            * Default config, except setting replSet and enabling
              periodic logging of CPU and I/O
            * Journaling enabled
            * Profiling on message DBs enabled for requests over 10ms
* 1x Redis Node
    * Hardware
        * 2x Intel Xeon E5-2680 v2 2.8 GHz
        * 128 GB RAM
        * 10Gbps NIC
        * 2x LSI Nytro WarpDrive BLP4-1600[2]
    * Software
        * Debian Wheezy
        * Redis 2.4.14
            * Default config (snapshotting and AOF enabled)
            * One process

As in Round 1, Keystone auth is disabled and requests go over HTTP, not
HTTPS. The latency introduced by enabling these is outside the control of
Zaqar, but should be quite minimal (speaking anecdotally, I would expect
an additional 1-3ms for cached tokens and assuming an optimized TLS
termination setup).

For generating the load, I again used the zaqar-bench tool. I would like
to see the team complete a large-scale Tsung test as well (including a
full HA deployment with Keystone and HTTPS enabled), but decided not to
wait for that before publishing the results for the Redis driver using
zaqar-bench.

CPU usage on the Redis node peaked at around 75% for the one process. To
better utilize the hardware, a production deployment would need to run
multiple Redis processes and use Zaqar's backend pooling feature to
distribute queues across the various instances.
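
As a rough sketch of what that would look like (the pools admin API was still
settling at the time, so take the path and field names below as illustrative
rather than definitive), each Redis process gets registered as a weighted
pool:

    # Illustrative: register two Redis processes as weighted pools so that
    # Zaqar distributes queues between them. Endpoint/fields are assumed.
    import json
    import requests

    ZAQAR = 'http://localhost:8888/v1.1'
    HEADERS = {'Content-Type': 'application/json'}

    for name, port in (('redis-a', 6379), ('redis-b', 6380)):
        requests.put(ZAQAR + '/pools/' + name, headers=HEADERS,
                     data=json.dumps({'uri': 'redis://localhost:%d' % port,
                                      'weight': 100}))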

Several different messaging patterns were tested, taking inspiration
from: https://wiki.openstack.org/wiki/Use_Cases_(Zaqar)

Each test was executed three times and the best time recorded.

A ~1K sample message (1398 bytes) was used for all tests.

## Results ##

### Event Broadcasting (Read-Heavy) ###

OK, so let's say you have a somewhat low-volume source, but tons of event
observers. In this case, the observers easily outpace the producer, making
this a read-heavy workload.

Options
* 1 producer process with 5 gevent workers
* 1 message posted per request
* 2 observer processes with 25 gevent workers each
* 5 messages listed per request by the observers
* Load distributed across 4[6] queues
* 10-second duration

Results
* Redis
    * Producer: 1.7 ms/req,  585 req/sec
    * Observer: 1.5 ms/req, 1254 req/sec
* Mongo
    * Producer: 2.2 ms/req,  454 req/sec
    * Observer: 1.5 ms/req, 1224 req/sec

### Event Broadcasting (Balanced) ###

This test uses the same number of producers and consumers, but note that
the observers are still listing (up to) 5 messages at a time[4], so they
still outpace the producers, but not as quickly as before.

Options
* 2 producer processes with 25 gevent workers each
* 1 message posted per request
* 2 observer processes with 25 gevent workers each
* 5 messages listed per request by the observers
* Load distributed across 4 queues
* 10-second duration

Results
* Redis
    * Producer: 1.4 ms/req, 1374 req/sec
    * Observer: 1.6 ms/req, 1178 req/sec
* Mongo
    * Producer: 2.2 ms/req, 883 req/sec
    * Observer: 2.8 ms/req, 348 req/sec

### Point-to-Point Messaging ###

In this scenario I simulated one client sending messages directly to a
different client. Only one queue is required in this case[5].

Options
* 1 producer process with 1 gevent worker

Re: [openstack-dev] [Zaqar] Comments on the concerns arose during the TC meeting

2014-09-04 Thread Kurt Griffiths
Thanks for your comments Gordon. I appreciate where you are coming from
and I think we are actually in agreement on a lot of things.

I just want to make it clear that from the very beginning of the project
the team has tried to communicate (but perhaps could have done a better
job at it) that we aren’t trying to displace other messaging systems that
are clearly delivering a lot of value today.

In fact, I personally have long been a proponent of using the best tool
for the job. The Zaqar project was kicked off at an unconference session
several summits ago because the community saw a need that was not covered
by other messaging systems. Does that mean those other systems are “bad”
or “wrong”? Of course not. It simply means that there are some cases where
those other systems aren’t the best tool for the job, and another tool is
needed (and vice versa).

Does that other tool look *exactly* like Zaqar? Probably not. But a lot of
people have told us Zaqar--in its current form--already delivers a lot of
value that they can’t get from other messaging systems that are available
to them. Zaqar, like any open source project, is a manifestation of lots
of people's ideas, and will evolve over time to meet the needs of the
community. 

Does a Qpid/Rabbit/Kafka provisioning service make sense? Probably. Would
such a service totally overlap in terms of use-cases with Zaqar? Community
feedback suggests otherwise. Will there be some other kind of thing that
comes out of the woodwork? Possibly. (Heck, if something better comes
along I for one have no qualms in shifting resources to the more elegant
solution--again, use the best tool for the job.) This process happens all
the time in the broader open-source world. But this process takes a
healthy amount of time, plus broad exposure and usage, which is something
that you simply don’t get as a non-integrated project in the OpenStack
ecosystem.

In any case, it’s pretty clear to me that Zaqar graduating should not be
viewed as making it "the officially blessed messaging service for the
cloud” and nobody is allowed to have any other ideas, ever. If that
happens, it’s only a symptom of a deeper perception/process problem that
is far from unique to Zaqar. In fact, I think it touches on all
non-integrated projects, and many integrated ones as well.

--Kurt

On 9/4/14, 12:38 PM, "Gordon Sim"  wrote:
>
>This is not intended as criticism of Zaqar in anyway. In my opinion,
>'reinvention' is not necessarily a bad thing. It's just another way of
>saying innovative approach and/or original thinking, both of which are
>good and both of which I think are important in the context of
>communicating in the cloud.



Re: [openstack-dev] [Zaqar] FFE request for API v1.1 Response Document Changes

2014-09-04 Thread Kurt Griffiths
Sounds OK to me, assuming we can get this done in the next week so the
team still has time for comprehensive testing (and we commit thereto)
before the RC.

On 9/4/14, 1:39 PM, "Flavio Percoco"  wrote:

>On 09/04/2014 06:01 PM, Victoria Martínez de la Cruz wrote:
>> Hi all,
>> 
>> I would like to request a FFE for 3 change sets to complete the
>> API v1.1 Response Document Changes blueprint.
>> 
>> Topic on Gerrit:
>> https://review.openstack.org/#/q/topic:bp/api-v1,n,z
>> 
>> Blueprint on Launchpad:
>> 
>>https://blueprints.launchpad.net/zaqar/+spec/api-v1.1-response-document-c
>>hanges
>> 
>> The required changes to close this blueprint are trivial, therefore the
>> risk is low.
>
>Hi Victoria,
>
>The above sounds fair to me. If other folks agree, I think we can go
>ahead and complete this work now.
>
>Thanks for sending the email,
>Flavio
>
>-- 
>@flaper87
>Flavio Percoco
>



Re: [openstack-dev] [Zaqar] Early proposals for design summit sessions

2014-09-02 Thread Kurt Griffiths
Thanks Flavio, I added a few thoughts.

On 8/28/14, 3:27 AM, "Flavio Percoco"  wrote:

>Greetings,
>
>I'd like to join the early coordination effort for design sessions. I've
>shamelessly copied Doug's template for Oslo into a new etherpad so we
>can start proposing sessions there.
>
>https://etherpad.openstack.org/p/kilo-zaqar-summit-topics
>
>Flavio
>
>-- 
>@flaper87
>Flavio Percoco
>



Re: [openstack-dev] [zaqar] [marconi] Juno Performance Testing (Round 1)

2014-09-02 Thread Kurt Griffiths
Sure thing, I’ll add that to my list of things to try in “Round 2” (coming
later this week).

On 8/28/14, 9:05 AM, "Jay Pipes"  wrote:

>On 08/26/2014 05:41 PM, Kurt Griffiths wrote:
>>  * uWSGI + gevent
>>  * config: http://paste.openstack.org/show/100592/
>>  * app.py: http://paste.openstack.org/show/100593/
>
>Hi Kurt!
>
>Thanks for posting the benchmark configuration and results. Good stuff :)
>
>I'm curious about what effect removing http-keepalive from the uWSGI
>config would make. AIUI, for systems that need to support lots and lots
>of random reads/writes from lots of tenants, using keepalive sessions
>would cause congestion for incoming new connections, and may not be
>appropriate for such systems.
>
>Totally not a big deal; really, just curious if you'd run one or more of
>the benchmarks with keepalive turned off and what results you saw.
>
>Best,
>-jay
>



Re: [openstack-dev] [zaqar] [marconi] Removing GET message by ID in v1.1 (Redux)

2014-09-02 Thread Kurt Griffiths
Thanks everyone for your feedback. I think we have a consensus that this
sort of change would be best left to v2 of the API. We can start planning
v2 of the API at the Paris summit, and target some kind of “community
preview” of it to be released as part of Kilo.

On 8/29/14, 11:02 AM, "Everett Toews"  wrote:

>On Aug 28, 2014, at 3:08 AM, Flavio Percoco  wrote:
>
>> Unfortunately, as Nataliia mentioned, we can't just get rid of it in
>> v1.1 because that implies a major change in the API, which would require
>> a major release. What we can do, though, is start working on a spec for
>> the V2 of the API.
>
>+1
>
>Please don’t make breaking changes in minor version releases. v2 would be
>the place for this change.
>
>Thanks,
>Everett
>
>



Re: [openstack-dev] [Keystone][Marconi][Heat] Creating accounts in Keystone

2014-08-27 Thread Kurt Griffiths
On 8/25/14, 9:50 AM, "Ryan Brown"  wrote:

>I'm actually quite partial to roles because, in my experience, service
>accounts rarely have their credentials rotated more than once per eon.
>Having the ability to let instances grab tokens would certainly help
>Heat, especially if we start using Zaqar (the artist formerly known as
>marconi).
>

According to AWS docs, IAM Roles allow you to "Define which API actions
and resources the application can use after assuming the role.” What would
it take to implement this in OpenStack? Currently, Keystone roles seem to
be more oriented toward cloud operators, not end users. This quote from
the Keystone docs[1] is telling:

If you wish to restrict users from performing operations in, say,
the Compute service, you need to create a role in the Identity
Service and then modify /etc/nova/policy.json so that this role is
required for Compute operations.

On 8/25/14, 9:49 AM, "Zane Bitter"  wrote:

>In particular, even if a service like Zaqar or Heat implements their own
>authorisation (e.g. the user creating a Zaqar queue supplies lists of
>the accounts that are allowed to read or write to it, respectively), how
>does the user ensure that the service accounts they create will not have
>access to other OpenStack APIs? IIRC the default policy.json files
>supplied by the various projects allow non-admin operations from any
>account with a role in the project.
>

It seems like end users need to be able to define custom roles and
policies.

Some example use cases for the sake of discussion:

1. App developer sends a request to Zaqar to create a queue named
   “customer-orders"
2. Zaqar creates a queue named "customer-orders"
3. App developer sends a request to Keystone to create a role, "role-x",
   for App Component X
4. Keystone creates role-x
5. App developer sends requests to Keystone to create a service user,
   “user-x” and associate it with role-x
6. Keystone creates user-x and gives it role-x
7. App developer sends a request to Zaqar to create a policy,
   “customer-orders-observer”, and associate that policy with role-x. The
   policy only allows GETing (listing) messages from the customer-orders
   queue
8. Zaqar creates customer-orders-observer and notes that it is associated
   with role-x

Later on...

1. App Component X sends a request to Zaqar, including an auth token
2. Zaqar sends a request to Keystone asking for roles associated with the
   given token
3. Keystone returns one or more roles, including role-x
4. Zaqar checks for any user-defined policies associated with the roles,
   including role-x, and finds customer-orders-observer
5. Zaqar verifies that the requested operation is allowed according to
   customer-orders-observer

We should also compare and contrast this with signed URLs ala Swift’s
tempurl. For example, service accounts do not have to be created or
managed in the case of tempurl.
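
For reference, a tempurl-style signature is just an HMAC over the method,
expiry, and object path (a sketch based on Swift’s tempurl middleware, using
Python 2 to match the stack above; the key and path are made up):

    # Sketch: computing a Swift-style temporary URL signature
    import hmac
    import time
    from hashlib import sha1

    key = 'MYSECRETKEY'                # set via X-Account-Meta-Temp-URL-Key
    method = 'GET'
    expires = int(time.time() + 3600)  # link is valid for one hour
    path = '/v1/AUTH_demo/container/object'

    sig = hmac.new(key, '%s\n%s\n%s' % (method, expires, path),
                   sha1).hexdigest()
    url = ('https://swift.example.com%s?temp_url_sig=%s&temp_url_expires=%s'
           % (path, sig, expires))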

--Kurt

[1]: http://goo.gl/5UBMwR [http://docs.openstack.org]



[openstack-dev] [zaqar] [marconi] Removing GET message by ID in v1.1 (Redux)

2014-08-27 Thread Kurt Griffiths
Crew, as we continue implementing v1.1 in anticipation of a “public preview”
at the summit, I’ve started to wonder again about removing the ability to GET
a message by ID from the API. Previously, I was concerned that it might be too
disruptive a change and should wait for 2.0. But consider this: in order to
GET a message by ID, you already have to have either listed or claimed that
message, in which case you already have the message. Therefore, this operation
appears to have no practical purpose, and so probably won’t be missed by users
if we remove it.
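
For example, claiming messages already returns each message in full, along
with its href (an abridged sketch from memory of the v1 wire format):

    POST /v1/queues/fizbit/claims

    [
        {
            "href": "/v1/queues/fizbit/messages/50b68a50d6?claim_id=a28ee9",
            "ttl": 300,
            "age": 12,
            "body": {"event": "BackupStarted"}
        }
    ]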

Am I missing something? What does everyone think about removing GET message
by ID in v1.1?

--Kurt


Re: [openstack-dev] [zaqar] [marconi] Juno Performance Testing (Round 1)

2014-08-26 Thread Kurt Griffiths
Correction: there were 25 workers per producer process, not 10.

On 8/26/14, 4:41 PM, "Kurt Griffiths"  wrote:

>### Event Broadcasting (Balanced) ###
>
>This test uses the same number of producers and consumers, but note that
>the observers are still listing (up to) 5 messages at a time[5], so they
>still outpace the producers, but not as quickly as before.
>
>Options
>* 2 producer processes with 10 gevent workers each
>* 1 message posted per request
>* 2 observer processes with 25 gevent workers each
>* 5 messages listed per request by the observers
>* Load distributed across 4 queues
>* 10-second duration
>



[openstack-dev] [zaqar] [marconi] Juno Performance Testing (Round 1)

2014-08-26 Thread Kurt Griffiths
Hi folks,

I ran some rough benchmarks to get an idea of where Zaqar currently stands
re latency and throughput for Juno. These results are by no means
conclusive, but I wanted to publish what I had so far for the sake of
discussion.

Note that these tests do not include results for our new Redis driver, but
I hope to make those available soon.

As always, the usual disclaimers apply (i.e., benchmarks mostly amount to
lies; these numbers are only intended to provide a ballpark reference; you
should perform your own tests, simulating your specific scenarios and
using your own hardware; etc.).

## Setup ##

Rather than VMs, I provisioned some Rackspace OnMetal[8] servers to mitigate
noisy-neighbor effects when running the performance tests:

* 1x Load Generator
    * Hardware
        * 1x Intel Xeon E5-2680 v2 2.8 GHz
        * 32 GB RAM
        * 10Gbps NIC
        * 32GB SATADOM
    * Software
        * Debian Wheezy
        * Python 2.7.3
        * zaqar-bench from trunk with some extra patches[1]
* 1x Web Head
    * Hardware
        * 1x Intel Xeon E5-2680 v2 2.8 GHz
        * 32 GB RAM
        * 10Gbps NIC
        * 32GB SATADOM
    * Software
        * Debian Wheezy
        * Python 2.7.3
        * zaqar server from trunk @47e07cad
            * storage=mongodb
            * partitions=4
            * MongoDB URI configured with w=majority
        * uWSGI + gevent
            * config: http://paste.openstack.org/show/100592/
            * app.py: http://paste.openstack.org/show/100593/
* 3x MongoDB Nodes
    * Hardware
        * 2x Intel Xeon E5-2680 v2 2.8 GHz
        * 128 GB RAM
        * 10Gbps NIC
        * 2x LSI Nytro WarpDrive BLP4-1600[2]
    * Software
        * Debian Wheezy
        * mongod 2.6.4
            * Default config, except setting replSet and enabling
              periodic logging of CPU and I/O
            * Journaling enabled
            * Profiling on message DBs enabled for requests over 10ms

For generating the load, I used the zaqar-bench tool we created during
Juno as a stepping stone toward integration with Rally. Although the tool
is still fairly rough, I thought it good enough to provide some useful
data[3]. The tool uses the python-zaqarclient library.
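
For anyone who wants to poke at the service the same way the tool does, basic
client usage looks roughly like this (a sketch; the client API was still
evolving, so check its docs for the exact signatures):

    # Rough sketch of posting and listing messages via python-zaqarclient
    from zaqarclient.queues.v1 import client

    cli = client.Client('http://localhost:8888')
    queue = cli.queue('benchmark-queue')

    queue.post({'ttl': 300, 'body': {'event': 'ping'}})  # enqueue a message
    for msg in queue.messages(echo=True):                # list what we posted
        print(msg.body)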

Note that I didn’t push the servers particularly hard for these tests; web
head CPUs averaged around 20%, while the mongod primary’s CPU usage peaked
at around 10% with DB locking peaking at 5%.

Several different messaging patterns were tested, taking inspiration
from: https://wiki.openstack.org/wiki/Use_Cases_(Zaqar)

Each test was executed three times and the best time recorded.

A ~1K sample message (1398 bytes) was used for all tests.

## Results ##

### Event Broadcasting (Read-Heavy) ###

OK, so let's say you have a somewhat low-volume source, but tons of event
observers. In this case, the observers easily outpace the producer, making
this a read-heavy workload.

Options
* 1 producer process with 5 gevent workers
* 1 message posted per request
* 2 observer processes with 25 gevent workers each
* 5 messages listed per request by the observers
* Load distributed across 4[7] queues
* 10-second duration[4]

Results
* Producer: 2.2 ms/req,  454 req/sec
* Observer: 1.5 ms/req, 1224 req/sec

### Event Broadcasting (Balanced) ###

This test uses the same number of producers and consumers, but note that
the observers are still listing (up to) 5 messages at a time[5], so they
still outpace the producers, but not as quickly as before.

Options
* 2 producer processes with 10 gevent workers each
* 1 message posted per request
* 2 observer processes with 25 gevent workers each
* 5 messages listed per request by the observers
* Load distributed across 4 queues
* 10-second duration

Results
* Producer: 2.2 ms/req, 883 req/sec
* Observer: 2.8 ms/req, 348 req/sec

### Point-to-Point Messaging ###

In this scenario I simulated one client sending messages directly to a
different client. Only one queue is required in this case[6].

Note the higher latency. While running the test there were 1-2 message
posts that skewed the average by taking much longer (~100ms) than the
others to complete. Such outliers are probably present in the other tests
as well, and further investigation is needed to discover the root cause.

Options
* 1 producer process with 1 gevent worker
* 1 message posted per request
* 1 observer process with 1 gevent worker
* 1 message listed per request
* All load sent to a single queue
* 10-second duration

Results
* Producer: 5.5 ms/req, 179 req/sec
* Observer: 3.5 ms/req, 278 req/sec

### Task Distribution ###

This test uses several producers and consumers in order to simulate
distributing tasks to a worker pool. In contrast to the observer worker
type, consumers claim and delete messages in such a way that each message
is processed once and only once.

Options
* 2 producer processes with 25 gevent worker

Re: [openstack-dev] [Marconi] All-hands documentation day

2014-08-01 Thread Kurt Griffiths
I’m game for Thursday. Love to help out.

On 8/1/14, 2:26 AM, "Flavio Percoco"  wrote:

>On 07/31/2014 09:57 PM, Victoria Martínez de la Cruz wrote:
>> Hi everyone,
>> 
>> Earlier today I went through the documentation requirements for
>> graduation [0] and it looks like there is some work to do.
>> 
>> The structure we should follow is detailed
>> in https://etherpad.openstack.org/p/marconi-graduation.
>> 
>> It would be nice to do an all-hands documentation day next week to make
>> this happen.
>> 
>> Can you join us? When is it better for you?
>
>Hey Vicky,
>
>Awesome work, thanks for putting this together.
>
>I'd propose doing it on Thursday since, hopefully, some other patches
>will land during that week that will require documentation too.
>
>Flavio,
>
>> 
>> My best,
>> 
>> Victoria
>> 
>> [0] 
>>https://github.com/openstack/governance/blob/master/reference/incubation-
>>integration-requirements.rst#documentation--user-support-1
>
>
>-- 
>@flaper87
>Flavio Percoco
>



Re: [openstack-dev] [marconi] Proposal to add Victoria Martínez de la Cruz as a core reviewer

2014-08-01 Thread Kurt Griffiths
Ah, oops, somehow missed that. :p

From: Malini Kamalambal <malini.kamalam...@rackspace.com>
Reply-To: OpenStack Dev <openstack-dev@lists.openstack.org>
Date: Friday, August 1, 2014 at 12:55 PM
To: OpenStack Dev <openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [marconi] Proposal to add Victoria Martínez de la Cruz as a core reviewer

There is another thread going on for the same 
http://www.mail-archive.com/openstack-dev@lists.openstack.org/msg30897.html
We sure need vkmc in the core :)

From: Kurt Griffiths <kurt.griffi...@rackspace.com>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev@lists.openstack.org>
Date: Friday, August 1, 2014 1:17 PM
To: OpenStack Dev <openstack-dev@lists.openstack.org>
Subject: [openstack-dev] [marconi] Proposal to add Victoria Martínez de la Cruz as a core reviewer

Hi crew, I’d like to propose Vicky (vkmc) be added to Marconi’s core reviewer 
team. She is a regular contributor in terms of both code and reviews, is an 
insightful and regular participant in team discussions, and leads by example in 
terms of quality of work, treating others as friends and family, and being 
honest and constructive in her community participation.

Marconi core reviewers, please respond with +1 or -1 per your vote on adding 
Vicky.

--
Kurt G. (kgriffs)


[openstack-dev] [marconi] Proposal to add Victoria Martínez de la Cruz as a core reviewer

2014-08-01 Thread Kurt Griffiths
Hi crew, I’d like to propose Vicky (vkmc) be added to Marconi’s core reviewer 
team. She is a regular contributor in terms of both code and reviews, is an 
insightful and regular participant in team discussions, and leads by example in 
terms of quality of work, treating others as friends and family, and being 
honest and constructive in her community participation.

Marconi core reviewers, please respond with +1 or -1 per your vote on adding 
Vicky.

--
Kurt G. (kgriffs)


[openstack-dev] [marconi] Juno / Graduation Planning Session Today

2014-07-30 Thread Kurt Griffiths
Hi everyone, sorry for the short notice, but we are going to hold a special
roadmap planning meeting today. Everyone is welcome to attend, but I
especially need core reviewers to attend:

When: 2100 UTC
Where: #openstack-marconi
Agenda: https://etherpad.openstack.org/p/marconi-scratch

Hope to see you there!

—
Kurt


[openstack-dev] [marconi] New name for the project

2014-07-30 Thread Kurt Griffiths
Hi everyone, we have discussed a few new names for the project to avoid 
trademark issues. Previously, we had chosen “Naav” but several people weren’t 
feeling great about that name. So, we discussed this today in
#openstack-marconi and got consensus to rename Marconi to Zaqar. If anyone has
any vehement objections, let me know right away; otherwise, I’d like to move
forward with the new name.

Thanks,
Kurt


Re: [openstack-dev] [marconi] Meeting time change

2014-07-28 Thread Kurt Griffiths
Oops, sorry folks, one correction. We will be meeting on Mondays starting
next week, rather than Tuesdays as we have been. This week we will not
hold our regular meeting, but may do a special j-3 planning session (stay
tuned for details).

On 7/28/14, 10:43 AM, "Kurt Griffiths" 
wrote:

>Everyone, after discussing further with the team on IRC, we’ve decided to
>do the following:
>
>Starting this week, we will meet in a different channel,
>#openstack-meeting-3. We will alternate between 2100 UTC and 1500 UTC,
>starting with 2100 UTC this week.
>
>I will update the wiki.
>
>On 7/23/14, 11:06 AM, "Kurt Griffiths" 
>wrote:
>
>>OK, I just checked and 1400 and 1500 are already taken, unless we want to
>>move our meetings to #openstack-meeting-3. If we want to stick with
>>#openstack-meeting-alt, it will have to be 1300 UTC.
>>
>>On 7/22/14, 5:28 PM, "Flavio Percoco"  wrote:
>>
>>>On 07/22/2014 06:08 PM, Kurt Griffiths wrote:
>>>> FYI, we chatted about this in #openstack-marconi today and decided to
>>>>try
>>>> 2100 UTC for tomorrow. If we would like to alternate at an earlier
>>>>time
>>>> every other week, is 1900 UTC good, or shall we do something more like
>>>> 1400 UTC?
>>>
>>>
>>>We can keep the same time we're using, if possible. That is, 15UTC. If
>>>that slot is taken, then 14UTC sounds good.
>>>
>>>Cheers,
>>>Flavio
>>



Re: [openstack-dev] [marconi] Meeting time change

2014-07-28 Thread Kurt Griffiths
Everyone, after discussing further with the team on IRC, we’ve decided to
do the following:

Starting this week, we will meet in a different channel,
#openstack-meeting-3. We will alternate between 2100 UTC and 1500 UTC,
starting with 2100 UTC this week.

I will update the wiki.

On 7/23/14, 11:06 AM, "Kurt Griffiths" 
wrote:

>OK, I just checked and 1400 and 1500 are already taken, unless we want to
>move our meetings to #openstack-meeting-3. If we want to stick with
>#openstack-meeting-alt, it will have to be 1300 UTC.
>
>On 7/22/14, 5:28 PM, "Flavio Percoco"  wrote:
>
>>On 07/22/2014 06:08 PM, Kurt Griffiths wrote:
>>> FYI, we chatted about this in #openstack-marconi today and decided to
>>>try
>>> 2100 UTC for tomorrow. If we would like to alternate at an earlier time
>>> every other week, is 1900 UTC good, or shall we do something more like
>>> 1400 UTC?
>>
>>
>>We can keep the same time we're using, if possible. That is, 15UTC. If
>>that slot is taken, then 14UTC sounds good.
>>
>>Cheers,
>>Flavio
>



Re: [openstack-dev] [marconi] Meeting time change

2014-07-23 Thread Kurt Griffiths
OK, I just checked and 1400 and 1500 are already taken, unless we want to
move our meetings to #openstack-meeting-3. If we want to stick with
#openstack-meeting-alt, it will have to be 1300 UTC.

On 7/22/14, 5:28 PM, "Flavio Percoco"  wrote:

>On 07/22/2014 06:08 PM, Kurt Griffiths wrote:
>> FYI, we chatted about this in #openstack-marconi today and decided to
>>try
>> 2100 UTC for tomorrow. If we would like to alternate at an earlier time
>> every other week, is 1900 UTC good, or shall we do something more like
>> 1400 UTC?
>
>
>We can keep the same time we're using, if possible. That is, 15UTC. If
>that slot is taken, then 14UTC sounds good.
>
>Cheers,
>Flavio



Re: [openstack-dev] [marconi] Meeting time change

2014-07-22 Thread Kurt Griffiths
FYI, we chatted about this in #openstack-marconi today and decided to try
2100 UTC for tomorrow. If we would like to alternate at an earlier time
every other week, is 1900 UTC good, or shall we do something more like
1400 UTC?

On 7/21/14, 11:21 AM, "Kurt Griffiths" 
wrote:

>I think Wednesday would be best. That way we can get an update on all the
>bugs and blueprints before the weekly 1:1 project status meetings with
>Thierry on Thursday. Mondays are often pretty busy with everyone having
>meetings and catchup from the weekend.
>
>If we do 2100 UTC, that is 9am NZT. Shall we alternate between 1900 and
>2100 UTC on Wednesdays?
>
>Also, when will we meet this week? Perhaps we should keep things the same
>one more time while we get the new schedule finalized here on the ML.
>
>On 7/17/14, 11:27 AM, "Flavio Percoco"  wrote:
>
>>On 07/16/2014 06:31 PM, Malini Kamalambal wrote:
>>> 
>>> On 7/16/14 4:43 AM, "Flavio Percoco"  wrote:
>>> 
>>>> On 07/15/2014 06:20 PM, Kurt Griffiths wrote:
>>>>> Hi folks, we’ve been talking about this in IRC, but I wanted to bring
>>>>>it
>>>>> to the ML to get broader feedback and make sure everyone is aware.
>>>>>We’d
>>>>> like to change our meeting time to better accommodate folks that live
>>>>> around the globe. Proposals:
>>>>>
>>>>> Tuesdays, 1900 UTC
>>>>> Wednesdays, 2000 UTC
>>>>> Wednesdays, 2100 UTC
>>>>>
>>>>> I believe these time slots are free, based
>>>>> on: https://wiki.openstack.org/wiki/Meetings
>>>>>
>>>>> Please respond with ONE of the following:
>>>>>
>>>>> A. None of these times work for me
>>>>> B. An ordered list of the above times, by preference
>>>>> C. I am a robot
>>>>
>>>> I don't like the idea of switching days :/
>>>>
>>>> Since the reason we're using Wednesday is because we don't want the
>>>> meeting to overlap with the TC and projects meeting, what if we change
>>>> the day of both meeting times in order to keep them on the same day
>>>>(and
>>>> perhaps also channel) but on different times?
>>>>
>>>> I think changing day and time will be more confusing than just
>>>>changing
>>>> the time.
>>> 
>>> If we can find an agreeable time on a non Tuesday, I take the ownership
>>>of
>>> pinging & getting you to #openstack-meeting-alt ;)
>>> 
>>>>From a quick look, #openstack-meeting-alt is free on Wednesdays on both
>>>> times: 15 UTC and 21 UTC. Does this sound like a good day/time/idea to
>>>> folks?
>>> 
>>> 1500 UTC might still be too early for our NZ folks - I thought we
>>>wanted
>>> to have the meeting at/after 1900 UTC.
>>> That being said, I will be able to attend only part of the meeting any
>>> time after 1900 UTC - unless it is @ Thursday 1900 UTC
>>> Sorry for making this a puzzle :(
>>
>>We'll have 2 times. The idea is to keep the current time and have a
>>second time slot that is good for NZ folks. What I'm proposing is to
>>pick a day in the week that is good for both times and just rotate on
>>the time instead of time+day_of_the_week.
>>
>>Again, the proposal is not to have 1 time but just 1 day and alternate
>>times on that day. For example, Glance meetings are *always* on
>>Thursdays and time is alternated each other week. We can do the same for
>>Marconi on Mondays, Wednesdays or Fridays.
>>
>>Thoughts?
>>
>>
>>Flavio
>>
>>-- 
>>@flaper87
>>Flavio Percoco
>>



Re: [openstack-dev] [marconi] Meeting time change

2014-07-21 Thread Kurt Griffiths
I think Wednesday would be best. That way we can get an update on all the
bugs and blueprints before the weekly 1:1 project status meetings with
Thierry on Thursday. Mondays are often pretty busy with everyone having
meetings and catchup from the weekend.

If we do 2100 UTC, that is 9am NZT. Shall we alternate between 1900 and
2100 UTC on Wednesdays?

Also, when will we meet this week? Perhaps we should keep things the same
one more time while we get the new schedule finalized here on the ML.

On 7/17/14, 11:27 AM, "Flavio Percoco"  wrote:

>On 07/16/2014 06:31 PM, Malini Kamalambal wrote:
>> 
>> On 7/16/14 4:43 AM, "Flavio Percoco"  wrote:
>> 
>>> On 07/15/2014 06:20 PM, Kurt Griffiths wrote:
>>>> Hi folks, we’ve been talking about this in IRC, but I wanted to bring
>>>>it
>>>> to the ML to get broader feedback and make sure everyone is aware.
>>>>We’d
>>>> like to change our meeting time to better accommodate folks that live
>>>> around the globe. Proposals:
>>>>
>>>> Tuesdays, 1900 UTC
>>>> Wednesdays, 2000 UTC
>>>> Wednesdays, 2100 UTC
>>>>
>>>> I believe these time slots are free, based
>>>> on: https://wiki.openstack.org/wiki/Meetings
>>>>
>>>> Please respond with ONE of the following:
>>>>
>>>> A. None of these times work for me
>>>> B. An ordered list of the above times, by preference
>>>> C. I am a robot
>>>
>>> I don't like the idea of switching days :/
>>>
>>> Since the reason we're using Wednesday is because we don't want the
>>> meeting to overlap with the TC and projects meeting, what if we change
>>> the day of both meeting times in order to keep them on the same day
>>>(and
>>> perhaps also channel) but on different times?
>>>
>>> I think changing day and time will be more confusing than just changing
>>> the time.
>> 
>> If we can find an agreeable time on a non Tuesday, I take the ownership
>>of
>> pinging & getting you to #openstack-meeting-alt ;)
>> 
>>>From a quick look, #openstack-meeting-alt is free on Wednesdays on both
>>> times: 15 UTC and 21 UTC. Does this sound like a good day/time/idea to
>>> folks?
>> 
>> 1500 UTC might still be too early for our NZ folks - I thought we wanted
>> to have the meeting at/after 1900 UTC.
>> That being said, I will be able to attend only part of the meeting any
>> time after 1900 UTC - unless it is @ Thursday 1900 UTC
>> Sorry for making this a puzzle :(
>
>We'll have 2 times. The idea is to keep the current time and have a
>second time slot that is good for NZ folks. What I'm proposing is to
>pick a day in the week that is good for both times and just rotate on
>the time instead of time+day_of_the_week.
>
>Again, the proposal is not to have 1 time but just 1 day and alternate
>times on that day. For example, Glance meetings are *always* on
>Thursdays and time is alternated each other week. We can do the same for
>Marconi on Mondays, Wednesdays or Fridays.
>
>Thoughts?
>
>
>Flavio
>
>-- 
>@flaper87
>Flavio Percoco
>

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [marconi] Meeting time change

2014-07-15 Thread Kurt Griffiths
Hi folks, we’ve been talking about this in IRC, but I wanted to bring it to the 
ML to get broader feedback and make sure everyone is aware. We’d like to change 
our meeting time to better accommodate folks that live around the globe. 
Proposals:

Tuesdays, 1900 UTC
Wednesdays, 2000 UTC
Wednesdays, 2100 UTC

I believe these time slots are free, based on: 
https://wiki.openstack.org/wiki/Meetings

Please respond with ONE of the following:

A. None of these times work for me
B. An ordered list of the above times, by preference
C. I am a robot

Cheers,
Kurt
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [marconi] Async I/O for Message Store Drivers

2014-06-27 Thread Kurt Griffiths
Folks, now that we have Py3K support, we need to start thinking about things 
that we need to do to tune performance when deploying on 3.3 or 3.4. I created 
a blueprint (we are still working on getting our spec process set up) to start 
the discussion here:

https://blueprints.launchpad.net/marconi/+spec/async-backend-drivers

I’d love to get your thoughts on this; it would be great if we could get some 
of this work done during Juno, although I suspect we may not have enough 
bandwidth to get everything where we want it to be until the K cycle.
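
For a taste of what this might look like, here is a minimal sketch of an
async storage controller method, using asyncio coroutines as they exist
on Python 3.4 (the class and backend are hypothetical, not the actual
Marconi interfaces):

    import asyncio

    class AsyncMessageController(object):
        """Hypothetical message controller with a non-blocking backend."""

        def __init__(self, backend):
            self._backend = backend  # e.g., an async MongoDB/Redis client

        @asyncio.coroutine
        def post(self, queue, messages):
            # The insert no longer blocks the event loop, so a single
            # process can keep many requests in flight while waiting
            # on the database.
            ids = yield from self._backend.insert(queue, messages)
            return ids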

--
Kurt Griffiths (kgriffs)
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [marconi] Python 3.3 Gate is Passing!

2014-06-26 Thread Kurt Griffiths
Hi everyone, I just wanted to to congratulate Nataliia on making Marconi one of 
the first OS project to pass the py33 gate!

https://pbs.twimg.com/media/BrEQrZiCMAAbfEX.png:large

Now, let’s make that gate voting. :D

---
Kurt Griffiths (kgriffs)
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [marconi] Decoupling backend drivers

2014-06-26 Thread Kurt Griffiths
Crew, I’d like to propose the following:

  1.  Decouple pool management from data storage (two separate drivers)
  2.  Keep pool management driver for sqla, but drop the sqla data storage 
driver
  3.  Provide a non-AGPL alternative to MongoDB that has feature parity and is 
at least as performant

Decoupling will make configuration less confusing, while allowing us to 
maintain drivers separately and give us the flexibility to choose the best tool 
for the job (BTFJ). Once that work is done, we can drop support for sqla as a 
message store backend, since it isn’t a viable non-AGPL alternative to MongoDB. 
Instead, we can look into some other backends that offer a good mix of 
durability and performance.
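
To make the proposal concrete, here is a rough sketch of the split, with
illustrative names (these are not the actual Marconi driver interfaces):

    import abc

    class ControlDriver(abc.ABC):
        """Pool management: the catalog of pools and queue placement."""

        @abc.abstractmethod
        def register_pool(self, name, uri, weight):
            """Add a storage pool, e.g. a MongoDB or Redis cluster."""

        @abc.abstractmethod
        def lookup(self, queue, project):
            """Return the data driver for the pool owning this queue."""

    class DataDriver(abc.ABC):
        """Message store: where queues, messages, and claims live."""

        @abc.abstractmethod
        def post(self, queue, messages, project):
            """Persist a batch of messages."""

        @abc.abstractmethod
        def claim(self, queue, ttl, grace, limit, project):
            """Atomically claim up to `limit` messages."""

Under this split, the sqla control driver sticks around for pool
management, while each data backend can be maintained and tuned
independently for message storage.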

What does everyone think about this strategy?

--
Kurt Griffiths (kgriffs)
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [marconi] Juno-1 development milestone available

2014-06-12 Thread Kurt Griffiths
Hi folks,

Marconi’s first Juno milestone release is now available. It includes
several bug fixes, plus adds support for caching frequent DB queries as
part of the team's focus on performance tuning during the Juno cycle. This
release also includes an important refactoring of our API tests that will
allow us to deliver version 1.1 for the Juno-2 milestone release.

You can download the juno-1 release and review the changes here:

https://launchpad.net/marconi/+milestone/juno-1

Thanks to everyone who contributed to this first milestone! It’s great to
see all the new contributors.

--
Kurt Griffiths (kgriffs)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [marconi] ATL summit easel pad pix

2014-06-11 Thread Kurt Griffiths
Hey crew, as promised, I digitized the pages from the easel pad at our program 
pod in Atlanta. They are now available on the wiki along with a few notes to 
help everyone remember what all the scribbling is supposed to mean. :D

https://wiki.openstack.org/wiki/Juno/Notes/Marconi

Thanks everyone for the constructive conversations!

-KG
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [marconi] Python Client 0.0.2 released

2014-06-10 Thread Kurt Griffiths
This is a minor bug-fix release. Deleting claimed messages now works as
expected. You can get the latest client from PyPI, and a tarball is also
available:

https://pypi.python.org/pypi/python-marconiclient/
http://tarballs.openstack.org/python-marconiclient/

NOTE: Yes, the installation instructions on the README are out of date.
They will be be fixed in the next release. :p

Cheers!

-KG


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [marconi] Reconsidering the unified API model

2014-06-10 Thread Kurt Griffiths
> Will Marconi only support HTTP as a transport, or will it add other
>protocols as well?

We are focusing on HTTP for Juno, but are considering adding a
lower-level, persistent transport (perhaps based on WebSocket) in the K
cycle.

> Can anyone describe what is unique about the Marconi design with respect
>to scalability?

TBH, I don’t know that there is anything terribly “unique” about it. :)

First of all, since Marconi uses HTTP and follows the REST architectural
style, you get all the associated scaling benefits from that.

Regarding the backend, Marconi has a notion of “pools”, across which
queues can be sharded. Messages for an individual queue may not be
sharded across multiple pools, but a single queue may be sharded
within a given pool, depending on whether the driver supports it. In
any case, you can imagine each pool as encapsulating a single DB or
broker cluster. Once you reach the limits of scalability within your
initial pool (due to networking, hard limitations in the given backend,
etc.), you can provision other pools as needed.
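
As a concrete sketch, here is how an operator might grow capacity using
the pools admin API as currently drafted for v1.1 (the endpoint is
hypothetical, and the exact paths and fields may change):

    import requests

    BASE = 'http://localhost:8888'  # hypothetical admin endpoint

    # Each pool encapsulates its own DB or broker cluster; queues are
    # distributed across pools according to weight.
    requests.put(BASE + '/v1.1/pools/mongo-east',
                 json={'uri': 'mongodb://east.example.com:27017',
                       'weight': 100})

    # When mongo-east nears its limits, provision another pool:
    requests.put(BASE + '/v1.1/pools/redis-west',
                 json={'uri': 'redis://west.example.com:6379',
                       'weight': 100})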

> in what way is Marconi different to 'traditional message brokers' (which
>after all have been providing 'a production queuing service' for some
>time)?

That’s a great question. As I have said before, I think there is certainly
room for some kind of broker-as-a-service in the OpenStack ecosystem that
is more along the lines of Trove. Such a service would provision
single-tenant instances of a given broker and provide a framework for
adding value-add management features for said broker.

For some use cases, such a service could be a cost-effective solution, but
I don’t think it is the right answer for everyone. Not only due to cost,
but also because many developers (and sys admins) actually prefer an
HTTP-based API. Marconi’s multi-tenant, REST API was designed to serve
this market. 

> I understand that having HTTP as the protocol used by clients is of
>central importance. However many 'traditional message brokers’ have
>offered that as well.

Great point. In fact, I’d love to get more info on brokers that offer
support for HTTP. What are some examples? Do they support multi-tenancy?

-KG


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [marconi] Reconsidering the unified API model

2014-06-10 Thread Kurt Griffiths
> What are 'message feeds' in the Marconi context, in more detail? And
>what aspect of them is it that message brokers don't support?

Great question. When I say “feeds” I mean a “feed” in the sense of RSS or
Atom. People do, in fact, use Atom to implement certain messaging
patterns. You can think of Marconi’s current API design as taking the idea
of message syndication and including the SQS-like semantics around
claiming a batch of messages for a period of time, after which the
messages return to the “pool” unless they are deleted in the interim.

I think the crux of the issue is that Marconi follows the REST
architectural style. As such, the client must track the state of where it
is in the queue it is consuming (to keep the server stateless). So, it
must be given some kind of marker, allowing it to page through messages in
the queue. 
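
For example, a consumer can page through a queue’s feed like this (a
sketch against the v1 API; the endpoint and handler are hypothetical):

    import uuid
    import requests

    BASE = 'http://localhost:8888'
    HEADERS = {'Client-ID': str(uuid.uuid4())}  # v1 expects a client UUID

    url = BASE + '/v1/queues/demo/messages?limit=10&echo=true'
    while True:
        resp = requests.get(url, headers=HEADERS)
        if resp.status_code == 204:
            break  # caught up for now; poll again later

        body = resp.json()
        for msg in body['messages']:
            handle(msg['body'])  # hypothetical handler

        # The 'next' link embeds an opaque marker, so the client owns
        # the paging state and the server stays stateless.
        next_href = next(link['href'] for link in body['links']
                         if link['rel'] == 'next')
        url = BASE + next_href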

Also noteworthy is that simply reading a message does not also delete
it, which affords the pub-sub messaging pattern. One could imagine
taking a more prescriptive approach to pub-sub by introducing some
sort of “exchange” resource, but the REST style generally encourages
working at the level of affordances (not to say we couldn’t stray if
need be; I am describing the API as it stands today).

To my knowledge, this API can’t be mapped directly to AMQP. Perhaps
there are other types of brokers that can do it?

On 6/10/14, 7:17 AM, "Gordon Sim"  wrote:

>On 06/09/2014 08:31 PM, Kurt Griffiths wrote:
>> Lately we have been talking about writing drivers for traditional
>> message brokers that will not be able to support the message feeds part
>> of the API.
>
>Could you elaborate a little on this point? In some sense of the term at
>least, handling message feeds is what 'traditional' message brokers are
>all about. What are 'message feeds' in the Marconi context, in more
>detail? And what aspect of them is it that message brokers don't support?
>
>> I’ve started to think that having a huge part of the API
>> that may or may not “work”, depending on how Marconi is deployed, is not
>> a good story for users
>
>I agree, that certainly doesn't sound good.
>
>--Gordon.
>
>
>___
>OpenStack-dev mailing list
>OpenStack-dev@lists.openstack.org
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [marconi] Reconsidering the unified API model

2014-06-09 Thread Kurt Griffiths
Folks, this may be a bit of a bombshell, but I think we have been dancing 
around the issue for a while now and we need to address it head on. Let me 
start with some background.

Back when we started designing the Marconi API, we knew that we wanted to 
support several messaging patterns. We could do that using a unified queue 
resource, combining both task distribution and feed semantics. Or we could 
create disjoint resources in the API, or even create two separate services 
altogether, one each for the two semantic groups.

The decision was made to go with a unified API for these reasons:

  *   It would afford hybrid patterns, such as auditing or diagnosing a task 
distribution queue
  *   Once you implement guaranteed delivery for a message feed over HTTP, 
implementing task distribution is a relatively straightforward addition. If you 
want both types of semantics, you don’t necessarily gain anything by 
implementing them separately.

Lately we have been talking about writing drivers for traditional message 
brokers that will not be able to support the message feeds part of the API. 
I’ve started to think that having a huge part of the API that may or may not 
“work”, depending on how Marconi is deployed, is not a good story for users, 
esp. in light of the push to make different clouds more interoperable.

Therefore, I think we have a very big decision to make here as a team and a 
community. I see three options right now. I’ve listed several—but by no means 
conclusive—pros and cons for each, as well as some counterpoints, based on past 
discussions.

Option A. Allow drivers to only implement part of the API

For:

  *   Allows for a wider variety of backends. (counter: may create subtle 
differences in behavior between deployments)
  *   May provide opportunities for tuning deployments for specific workloads

Against:

  *   Makes it hard for users to create applications that work across multiple 
clouds, since critical functionality may or may not be available in a given 
deployment. (counter: how many users need cross-cloud compatibility? Can they 
degrade gracefully?)

Option B. Split the service in two. Different APIs, different services. One 
would be message feeds, while the other would be something akin to Amazon’s SQS.

For:

  *   Same as Option A, plus creates a clean line of functionality for 
deployment (deploy one service or the other, or both, with clear expectations 
of what messaging patterns are supported in any case).

Against:

  *   Removes support for hybrid messaging patterns (counter: how useful are 
such patterns in the first place?)
  *   Operators now have two services to deploy and support, rather than just 
one (counter: can scale them independently, perhaps leading to gains in 
efficiency)

Option C. Require every backend to support the entirety of the API as it now 
stands.

For:

  *   Least disruptive in terms of the current API design and implementation
  *   Affords a wider variety of messaging patterns (counter: YAGNI?)
  *   Reuses code in drivers and API between feed and task distribution 
operations (counter: there may be ways to continue sharing some code if the API 
is split)

Against:

  *   Requires operators to deploy a NoSQL cluster (counter: many operators are 
comfortable with NoSQL today)
  *   Currently requires MongoDB, which is AGPL (counter: a Redis driver is 
under development)
  *   A unified API is hard to tune for performance (counter: Redis driver 
should be able to handle high-throughput use cases, TBD)

I’d love to get everyone’s thoughts on these options; let's brainstorm for a 
bit, then we can home in on the option that makes the most sense. We may need 
to do some POCs or experiments to get enough information to make a good 
decision.

@kgriffs
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Marconi] Adopt Spec

2014-06-05 Thread Kurt Griffiths
I just learned that some projects are thinking about having the specs process 
be the channel for submitting new feature ideas, rather than registering 
blueprints. I must admit, that would be kind of nice because it would provide 
some much-needed structure around the triaging process.

I wonder if we can get some benefit out of the spec process while still keeping 
it light? The temptation will be to start documenting everything in 
excruciating detail, but we can mitigate that by codifying some guidelines on 
our wiki and baking it into the team culture.

What does everyone think?

From: Kurt Griffiths <kurt.griffi...@rackspace.com>
Date: Tuesday, June 3, 2014 at 9:34 AM
To: OpenStack Dev <openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [Marconi] Adopt Spec

I think it becomes more useful the larger your team. With a smaller team it is 
easier to keep everyone on the same page just through the mailing list and IRC. 
As for where to document design decisions, the trick there is more one of being 
diligent about capturing and recording the why of every decision made in 
discussions and such; gerrit review history can help with that, but it isn’t 
free.

If we’d like to give the specs process a try, I think we could do an experiment 
in j-2 with a single bp. Depending on how that goes, we may do more in the K 
cycle. What does everyone think?

From: Malini Kamalambal <malini.kamalam...@rackspace.com>
Reply-To: OpenStack Dev <openstack-dev@lists.openstack.org>
Date: Monday, June 2, 2014 at 2:45 PM
To: OpenStack Dev <openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [Marconi] Adopt Spec

+1 – Requiring specs for every blueprint is going to make the development 
process very cumbersome, and will take us back to waterfall days.
I like how the Marconi team operates now, with design decisions being made in 
IRC/ team meetings.
So Spec might become more of an overhead than add value, given how our team 
functions.

'If' we agree to use Specs, we should use that only for the blue prints that 
make sense.
For example, the unit test decoupling that we are working on now – this one 
will be a good candidate to use specs, since there is a lot of back and forth 
going on how to do this.
On the other hand something like Tempest Integration for Marconi will not 
warrant a spec, since it is pretty straightforward what needs to be done.
In the past we have had discussions around where to document certain design 
decisions (e.g. Which endpoint/verb is the best fit for pop operation?)
Maybe spec is the place for these?

We should leave it to the implementor to decide, if the bp warrants a spec or 
not & what should be in the spec.


From: Kurt Griffiths <kurt.griffi...@rackspace.com>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev@lists.openstack.org>
Date: Monday, June 2, 2014 1:33 PM
To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [Marconi] Adopt Spec

I’ve been in roles where enormous amounts of time were spent on writing specs, 
and in roles where specs where non-existent. Like most things, I’ve become 
convinced that success lies in moderation between the two extremes.

I think it would make sense for big specs, but I want to be careful we use it 
judiciously so that we don’t simply apply more process for the sake of more 
process. It is tempting to spend too much time recording every little detail in 
a spec, when that time could be better spent in regular communication between 
team members and with customers, and on iterating the code (short iterations 
between demo/testing, so you ensure you are on staying on track and can address 
design problems early, often).

IMO, specs are best used more as summaries, containing useful big-picture 
ideas, diagrams, and specific “memory pegs” to help us remember what was 
discussed and decided, and calling out specific “promises” for future 
conversations where certain design points are TBD.

From: Malini Kamalambal <malini.kamalam...@rackspace.com>
Reply-To: OpenStack Dev <openstack-dev@lists.openstack.org>
Date: Monday, June 2, 2014 at 9:51 AM
To: OpenStack Dev <openstack-dev@lists.openstack.org>
Subject: [openstack-dev] [Marconi] Adopt Spec

Hello all,

We are seeing more & more design questions in #openstack-marconi.
It will be a good idea to formalize our design process a bit more & start using 
spec.
We are kind of late to the party –so we already have a lot of precedent ahead 
of us.

Thoughts?

Malini

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Marconi] Adopt Spec

2014-06-03 Thread Kurt Griffiths
I think it becomes more useful the larger your team. With a smaller team it is 
easier to keep everyone on the same page just through the mailing list and IRC. 
As for where to document design decisions, the trick there is more one of being 
diligent about capturing and recording the why of every decision made in 
discussions and such; gerrit review history can help with that, but it isn’t 
free.

If we’d like to give the specs process a try, I think we could do an experiment 
in j-2 with a single bp. Depending on how that goes, we may do more in the K 
cycle. What does everyone think?

From: Malini Kamalambal <malini.kamalam...@rackspace.com>
Reply-To: OpenStack Dev <openstack-dev@lists.openstack.org>
Date: Monday, June 2, 2014 at 2:45 PM
To: OpenStack Dev <openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [Marconi] Adopt Spec

+1 – Requiring specs for every blueprint is going to make the development 
process very cumbersome, and will take us back to waterfall days.
I like how the Marconi team operates now, with design decisions being made in 
IRC/ team meetings.
So Spec might become more of an overhead than add value, given how our team 
functions.

'If' we agree to use Specs, we should use that only for the blue prints that 
make sense.
For example, the unit test decoupling that we are working on now – this one 
will be a good candidate to use specs, since there is a lot of back and forth 
going on how to do this.
On the other hand something like Tempest Integration for Marconi will not 
warrant a spec, since it is pretty straightforward what needs to be done.
In the past we have had discussions around where to document certain design 
decisions (e.g. Which endpoint/verb is the best fit for pop operation?)
Maybe spec is the place for these?

We should leave it to the implementor to decide, if the bp warrants a spec or 
not & what should be in the spec.


From: Kurt Griffiths <kurt.griffi...@rackspace.com>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev@lists.openstack.org>
Date: Monday, June 2, 2014 1:33 PM
To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [Marconi] Adopt Spec

I’ve been in roles where enormous amounts of time were spent on writing specs, 
and in roles where specs where non-existent. Like most things, I’ve become 
convinced that success lies in moderation between the two extremes.

I think it would make sense for big specs, but I want to be careful we use it 
judiciously so that we don’t simply apply more process for the sake of more 
process. It is tempting to spend too much time recording every little detail in 
a spec, when that time could be better spent in regular communication between 
team members and with customers, and on iterating the code (short iterations 
between demo/testing, so you ensure you are on staying on track and can address 
design problems early, often).

IMO, specs are best used more as summaries, containing useful big-picture 
ideas, diagrams, and specific “memory pegs” to help us remember what was 
discussed and decided, and calling out specific “promises” for future 
conversations where certain design points are TBD.

From: Malini Kamalambal <malini.kamalam...@rackspace.com>
Reply-To: OpenStack Dev <openstack-dev@lists.openstack.org>
Date: Monday, June 2, 2014 at 9:51 AM
To: OpenStack Dev <openstack-dev@lists.openstack.org>
Subject: [openstack-dev] [Marconi] Adopt Spec

Hello all,

We are seeing more & more design questions in #openstack-marconi.
It will be a good idea to formalize our design process a bit more & start using 
spec.
We are kind of late to the party –so we already have a lot of precedent ahead 
of us.

Thoughts?

Malini

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Marconi] Adopt Spec

2014-06-02 Thread Kurt Griffiths
I’ve been in roles where enormous amounts of time were spent on writing specs, 
and in roles where specs where non-existent. Like most things, I’ve become 
convinced that success lies in moderation between the two extremes.

I think it would make sense for big specs, but I want to be careful we use it 
judiciously so that we don’t simply apply more process for the sake of more 
process. It is tempting to spend too much time recording every little detail in 
a spec, when that time could be better spent in regular communication between 
team members and with customers, and on iterating the code (short iterations 
between demo/testing, so you ensure you are on staying on track and can address 
design problems early, often).

IMO, specs are best used more as summaries, containing useful big-picture 
ideas, diagrams, and specific “memory pegs” to help us remember what was 
discussed and decided, and calling out specific “promises” for future 
conversations where certain design points are TBD.

From: Malini Kamalambal <malini.kamalam...@rackspace.com>
Reply-To: OpenStack Dev <openstack-dev@lists.openstack.org>
Date: Monday, June 2, 2014 at 9:51 AM
To: OpenStack Dev <openstack-dev@lists.openstack.org>
Subject: [openstack-dev] [Marconi] Adopt Spec

Hello all,

We are seeing more & more design questions in #openstack-marconi.
It will be a good idea to formalize our design process a bit more & start using 
spec.
We are kind of late to the party –so we already have a lot of precedent ahead 
of us.

Thoughts?

Malini

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [marconi] Removing Get and Delete Messages by ID

2014-05-28 Thread Kurt Griffiths
Crew, as discussed in the last team meeting, I have updated the API v1.1 
spec to remove the 
ability to get one or more messages by ID. This was done to remove unnecessary 
complexity from the API, and to make it easier to support different types of 
message store backends.

However, this now leaves us with asymmetric semantics. On the one hand, we do 
not allow retrieving messages by ID, but we still support deleting them by ID. 
It seems to me that deleting a message only makes sense in the context of a 
claim or pop operation. In the case of a pop, the message is already deleted by 
the time the client receives it, so I don’t see a need for including a message 
ID in the response. When claiming a batch of messages, however, the client 
still needs some way to delete each message after processing it. In this case, 
we either need to allow the client to delete an entire batch of messages using 
the claim ID, or we still need individual message IDs (hrefs) that can be 
DELETEd.

Deleting a batch of messages can be accomplished in V1.0 using “delete multiple 
messages by ID”. Regardless of how it is done, I’ve been wondering if it is 
actually an anti-pattern; if a worker crashes after processing N messages, but 
before deleting those same N messages, the system is left with several messages 
that another worker will pick up and potentially reprocess, although the work 
has already been done. If the work is idempotent, this isn’t a big deal. 
Otherwise, the client will have to have a way to check whether a message has 
already been processed, ignoring it if it has. But whether it is 1 message or N 
messages left in a bad state by the first worker, the other worker has to 
follow the same logic, so perhaps it would make sense after all to simply allow 
deleting entire batches of claimed messages by claim ID, and not worrying about 
providing individual message hrefs/IDs for deletion.
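
To illustrate the failure window, here is a sketch of a worker’s claim
cycle (the helper names are hypothetical):

    def run_worker(queue):
        while True:
            # Claim a batch; unclaimed messages reappear after the TTL
            claim = queue.claim(ttl=60, grace=30, limit=10)

            for msg in claim:
                if already_processed(msg):  # guard for non-idempotent work
                    msg.delete()
                    continue

                process(msg)
                msg.delete()  # a crash before this line => msg is re-claimed

Whether the final delete happens message-by-message or for the whole
batch by claim ID, the window between processing and deleting remains,
so consumers need idempotent processing or a dedupe check either way.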

With all this in mind, I’m starting to wonder if I should revert my changes to 
the spec, and wait to address these changes in the v2.0 API, since it seems 
that to do this right, we need to make some changes that are anything but 
“minor” (for a minor release).

What does everyone think? Should we postpone this work to 2.0?

—Kurt

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Spec repo names

2014-05-27 Thread Kurt Griffiths
> +1 to code names.

Technically, if a program contains multiple projects, it would be more
correct to use the program name, but at this point I think it is pretty
ingrained in our culture (including IRC, mailing list and summits) to
refer to things by their code/project names, so IMO using those names will
be more intuitive to contributors.

--Kurt

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Concerns about the ballooning size of keystone tokens

2014-05-21 Thread Kurt Griffiths
Good to know, thanks for clarifying. One thing I’m still fuzzy on, however, is 
why we want to deprecate use of UUID tokens in the first place? I’m just trying 
to understand the history here...

From: Morgan Fainberg <morgan.fainb...@gmail.com>
Reply-To: OpenStack Dev <openstack-dev@lists.openstack.org>
Date: Wednesday, May 21, 2014 at 1:23 PM
To: OpenStack Dev <openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] Concerns about the ballooning size of keystone 
tokens

This is part of what I was referencing in regards to lightening the data stored 
in the token. Ideally, we would like to see an "ID only" token that only 
contains the basic information to act. Some initial tests show these tokens 
should be able to clock in under 1k in size. However all the details are not 
fully defined yet. Coupled with this data reduction there will be explicit 
definitions of the data that is meant to go into the tokens. Some of the data 
we have now is a result of convenience of accessing the data.

I hope to have this token change available during Juno development cycle.

There is a lot of work to be done to ensure this type of change goes smoothly. 
But this is absolutely on the list of things we would like to address.

Cheers,
Morgan

Sent via mobile

On Wednesday, May 21, 2014, Kurt Griffiths <kurt.griffi...@rackspace.com> wrote:
> adding another ~10kB to each request, just to save a once-a-day call to
>Keystone (ie uuid tokens) seems to be a really high price to pay for not
>much benefit.

I have the same concern with respect to Marconi. I feel like PKI tokens
are fine for control plane APIs, but don’t work so well for high-volume
data APIs where every KB counts.

Just my $0.02...

--Kurt

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Concerns about the ballooning size of keystone tokens

2014-05-21 Thread Kurt Griffiths
> adding another ~10kB to each request, just to save a once-a-day call to
>Keystone (ie uuid tokens) seems to be a really high price to pay for not
>much benefit.

I have the same concern with respect to Marconi. I feel like PKI tokens
are fine for control plane APIs, but don’t work so well for high-volume
data APIs where every KB counts.

Just my $0.02...

--Kurt

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [marconi] Juno Roadmap

2014-05-20 Thread Kurt Griffiths
Hi folks, I took the major work items we discussed at the summit and placed 
them into the three Juno milestones:

https://wiki.openstack.org/wiki/Roadmap_(Marconi)

Let me know what you think over the next few days. We will address any 
remaining questions and concerns at our next team meeting (next Tuesday at 1500 
UTC in #openstack-meeting-alt).

@kgriffs
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [marconi] State of the Union

2014-05-08 Thread Kurt Griffiths
As we look forward to the OpenStack Summit in Atlanta, I wanted to take a 
moment to review the state of the program. With the close of the Icehouse 
cycle, the team has achieved a number of exciting milestones:

  *   Marconi's first official, production-ready "1.0" release is now available 
for download. This first release 
includes a battle-tested MongoDB driver, and production-ready drivers for 
additional backends are in the works.
  *   Marconi's v1.0 API is stable and ready to code against.
  *   Basic user and operator docs are now 
available, and we will be adding 
tons of new content during Juno.
  *   A reference client library (written in Python) is now available on PyPI 
which supports the entire v1.0 API. Support for other languages is available 
through Rackspace-supported SDKs.

Through this program, I’ve had the opportunity to work side-by-side with a 
fantastic group of people. We have an amazing team culture that values 
usability, quality, and community. Latest team stats:

  *   10+ organizations represented
  *   5 core reviewers
  *   6 interns (representing GSoC, GNOME OPW, Rackspace and Red Hat)

If you will be attending the Atlanta summit, please join us in our design 
sessions 
(pads), and at 
the Marconi unconference table. Here are some of the things we’ll be talking 
about:

  *   Integration. We'll be chatting with several programs (swift, horizon, 
heat, et al.) who have approached us about using Marconi to surface events to 
end users. We would also like to meet with the Barbican team to discuss how we 
can work together to implement message signing.
  *   Notifications. How can we leverage Marconi as a notifications platform, 
for pushing queued messages to web hooks, SMTP, SMS, APN/GCM, etc?
  *   Operational Maturity. What do we need to do to make Marconi super 
ops-friendly? We'll be talking about monitoring, logging, security, efficiency, 
and documentation.
  *   Scaling Individual Queues. How should we handle the situation of a queue 
outgrowing a single storage partition? How do we best balance hot queues across 
the partitions?
  *   Queue Flavors. How can we surface Marconi's ability to assign queues to 
heterogeneous backend storage partitions to end users, so that they have the 
freedom to make app-specific tradeoffs such as latency, durability, cost, etc?

Thanks to everyone who has helped the program reach this point!

If you are going to be in town for the summit, please join me and other members 
of the Marconi team in celebrating our first official Marconi release! We'll 
grab some food and hang out in downtown Atlanta for the evening. We are also 
getting a group together for a CNN tour before dinner.

Note: If anyone is interested in taking a tour of CNN 
before dinner, please purchase tickets IN ADVANCE for the 4:20pm tour. Express 
your interest in #openstack-marconi so we can get a group together. Thanks!

RSVP:

http://www.evite.com/event/036857GMBWYCGUKWCEPD2VGBNBA3GM

Cheers,
Kurt

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [marconi] Python client library 0.0.1 released

2014-05-06 Thread Kurt Griffiths
Hi folks, I’m pleased to announce you can now run “pip install
python-marconiclient” and get support for the entire v1.0 API. Although the package will remain 
classified as “beta” while we polish off any rough edges, please take it for a 
spin and let us know what you think.

Kudos goes to Flavio Percoco for taking the lead on the client work, and to 
everyone who helped him get the project to this point. Thanks!

With this release, the juno-1 milestone is now open. Since we will be tracking 
the server releases going forward, I strongly encourage everyone who submits 
API changes from now on to contribute corresponding patches to the client, to 
help us synchronize functionality across both server and client releases. We 
would also love your help in contributing docs to the client project.

Project Home: https://launchpad.net/python-marconiclient
PyPI Package: 
https://pypi.python.org/pypi/python-marconiclient

@kgriffs
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [marconi] ATL Summit Pads - Please Review

2014-05-05 Thread Kurt Griffiths
Hi everyone, I’ve seeded some pads corresponding to the design sessions we have 
scheduled; please review these and help me flesh them out this week, leading up 
to the summit:

  *   Tues 14:50 - Queue 
Flavors<https://etherpad.openstack.org/p/juno-marconi-queue-flavors> — Flavio 
Percoco
  *   Tues 15:40 - Notifications on 
Marconi<https://etherpad.openstack.org/p/juno-marconi-notifications-on-marconi> 
— Balaji Iyer
  *   Tues 16:40 - Marconi Dev/Ops 
Session<https://etherpad.openstack.org/p/ATL-marconi-ops> — TBD (need a 
volunteer to moderate this; let me know if you are interested)
  *   Tues 17:30 - Scaling an Individual 
Queue<https://etherpad.openstack.org/p/juno-marconi-scale-single-queue> — Kurt 
Griffiths

Also, I’ve set up some pads for the unconference sessions (@ the Marconi table) 
I know about:

  *   Signed 
messages<https://etherpad.openstack.org/p/juno-marconi-signed-messages>
  *   Benchmarking<https://etherpad.openstack.org/p/juno-marconi-benchmarking>
  *   Performance 
Tuning<https://etherpad.openstack.org/p/juno-marconi-perf-tuning>

@kgriffs
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] [oslo] Using oslo.cache in keystoneclient.middleware.auth_token

2014-04-04 Thread Kurt Griffiths
> It appears the current version of oslo.cache is going to bring in quite
>a few oslo libraries that we would not want keystone client to depend on
>[1]. Moving the middleware to a separate library would solve that.

I think it makes a lot of sense to separate out the middleware. Would this
be a new project under Identity or would it go to Oslo since it would be a
shared library among the other programs?

Kurt G. | @kgriffs

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Marconi PTL Candidacy

2014-04-03 Thread Kurt Griffiths
Hi folks, I'd like to submit my name for serving during the Juno cycle as
the Queue Service PTL.

During my career I've had the opportunity to work in a wide variety of
roles in fields such as video game development, system utilities,
Internet marketing, and web services. This experience has given me a
holistic, pragmatic view on software development that I have tried to
leverage in my contributions to OpenStack. I believe that the best
software is smart (flexes to work at the user's level), useful (informed
by what users really need, not what we think they need), and pleasant
(optimized for happiness).

I've been heavily involved with Marconi from its inception, leading the
initial unconference session at the Grizzly summit, where we came together
as a community to fill what many saw as an obvious gap in the OpenStack
portfolio. I'd like to give a shout-out to Mark Atwood, Monty Taylor,
Jamie Painter, Allan Metts, Tim Simpson, and Flavio Percoco for their
early involvement in kick-starting the project. Thanks guys!

Marconi is key to enabling the development of web and mobile apps on top
of OpenStack, and we have also been hearing from several other programs
who are interested in using Marconi to surface events to end users (among
other things.)

The Marconi team has taken a pragmatic approach to the design of the API
and its architecture, inviting and valuing feedback from users and
operators all along the way. I think we can learn to do an even better job
at this during the Juno cycle.

A PTL has many responsibilities, but the ones I feel are most important
are these:

1. As a program facilitator, a PTL is responsible for keeping launchpad
groomed and up to date; watching out for logjams and misunderstandings,
working to resolve them quickly as they arise; and, finally, creating and
moderating multiple communication channels between contributors, and
between the team and the broader community.
2. As a culture champion, the PTL is responsible for leading by example
and growing a constructive team culture that values software quality and
application security. A culture where every voice is heard and valued. A
place where everyone feels safe expressing their ideas and concerns,
whatever they may be. A place where every individual feels appreciated and
supported.
3. As a user champion, the PTL is responsible for keeping the program
oriented toward a clear vision that is highly informed by user and
operator feedback.
4. As a senior technologist, the PTL is responsible for ensuring major
implementation decisions are rigorously vetted and revisited over time, as
necessary, to ensure the code is delivering on the program's vision (and
not creating scope creep).
5. As a liaison, the PTL is responsible for keeping their project aligned
with the broader OpenStack, Python and web development communities.

If elected, my priorities during Juno will include:

1. Operational Maturity: Marconi is already production-ready, but we still
have work to do to get to world-class reliability, monitoring, logging,
and efficiency.
2. Documentation: During Icehouse, Marconi made a good start on user and
operator manuals, and I would like to see those docs fleshed out, as well
as reworking the program wiki to make it much more informative and
engaging.
3. Security: During Juno I want to start doing per-milestone threat
modeling, and build out a suite of security tests.
4. Integration: I have heard from several other OpenStack programs who
would like to use Marconi, and so I look forward to working with them to
understand their needs and to assist them however we can.
5. Notifications: Beginning the work on the missing pieces needed to build
a notifications service on top of the Marconi messaging platform, that can
be used to surface events to end-users via SMS, email, web hooks, etc.
6. Graduation: Completing all remaining graduation requirements so that
Marconi can become integrated in the "K" cycle, which will allow other
programs to be more confident about taking dependencies on the service for
features they are planning.
7. Growth: I'd like to welcome several more contributors to the Marconi
core team, continue on-boarding new contributors and interns, and see
several more large deployments of Marconi in production.

---
Kurt Griffiths | @kgriffs


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [keystone] [oslo] Using oslo.cache in keystoneclient.middleware.auth_token

2014-03-31 Thread Kurt Griffiths
Hi folks, has there been any discussion on using oslo.cache within the 
auth_token middleware to allow for using other cache backends besides 
memcached? I didn’t find a Keystone blueprint for it, and was considering 
registering one for Juno if the team thinks this feature makes sense. I’d be 
happy to put some time into the implementation.

Kurt G. | @kgriffs
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [marconi] sample config files should be ignored in git...

2014-03-27 Thread Kurt Griffiths
Ah, that is good to know. Is this documented somewhere?

On 3/26/14, 11:48 PM, "Sergey Lukjanov"  wrote:

>FWIW It's working on OS X, but gnu-getopt should be installed.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [marconi] sample config files should be ignored in git...

2014-03-27 Thread Kurt Griffiths
P.S. - Any particular reason this script wasn’t written in Python? Seems
like that would avoid a lot of cross-platform gotchas.

On 3/26/14, 11:48 PM, "Sergey Lukjanov"  wrote:

>FWIW It's working on OS X, but gnu-getopt should be installed.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Marconi] Backend options [was Re: Why is marconi a queue implementation vs a provisioning API?]

2014-03-27 Thread Kurt Griffiths
Matt Asay wrote:
> We want people using the world's most popular NoSQL database with the
>world's most popular open source cloud (OpenStack). I think our track
>record on this is 100% in the affirmative.

So, I think it is pretty clear that there are lots of people who would
like to use MongoDB and aren’t concerned about the way it is licensed.
However, there are also lots of people who would prefer another
production-ready option.

I think Marconi has room for 2-3 more drivers that are supported by the
team for production deployments. Two of the most promising candidates are
Redis and AMQP (specific broker TBD). Cassandra has also been proposed in
the past, but I don't think it’s a viable option due to the way deletes
are implemented[1].

If anyone has some other options that you think could be a good fit,
please make some suggestions and help us determine the future of
Marconi.

---
Kurt G. | @kgriffs

[1]: http://goo.gl/k7Bbv1

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [marconi] sample config files should be ignored in git...

2014-03-26 Thread Kurt Griffiths
Team, what do you think about doing this for Marconi? It looks like we
indeed have a sample checked in:

https://review.openstack.org/#/c/83006/1/etc/marconi.conf.sample


Personally, I think we should keep the sample until generate_sample.sh
works on OS X (we could even volunteer to fix it); otherwise, people with
MBPs will be in a bit of a bind.

---
Kurt G. | @kgriffs

On 3/26/14, 1:15 PM, "Russell Bryant"  wrote:

>On 03/26/2014 02:10 PM, Clint Byrum wrote:
>> This is an issue that affects all of our git repos. If you are using
>> oslo.config, you will likely also be using the sample config generator.
>> 
>> However, for some reason we are all checking this generated file in.
>> This makes no sense, as we humans are not editting it, and it often
>> picks up config files from other things like libraries (keystoneclient
>> in particular). This has lead to breakage in the gate a few times for
>> Heat, perhaps for others as well.
>> 
>> I move that we all rm this file from our git trees, and start generating
>> it as part of the install/dist process (I have no idea how to do
>> this..). This would require:
>> 
>> - rm sample files and add them to .gitignore in all trees
>> - Removing check_uptodate.sh from all trees/tox.ini's
>> - Generating file during dist/install process.
>> 
>> Does anyone disagree?
>
>This has been done in Nova, except we don't have it generated during
>install.  We just have instructions and a tox target that will do it if
>you choose to.
>
>https://git.openstack.org/cgit/openstack/nova/tree/etc/nova/README-nova.co
>nf.txt
>
>Related, adding instructions to generate without tox:
>https://review.openstack.org/#/c/82533/
>
>-- 
>Russell Bryant
>
>___
>OpenStack-dev mailing list
>OpenStack-dev@lists.openstack.org
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Swift] Swift storage policies in Icehouse

2014-03-25 Thread Kurt Griffiths
> As a quick review, storage policies allow objects to be stored across a
>particular subset of hardware...and with a particular storage algorithm

Having worked on backup software in the past, this sounds interesting. :D

What is the scope of these policies? Are they per-object, per-container,
and/or per-project? Or do they not work like that?

---
Kurt G. | @kgriffs

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Marconi] Proposal to add Malini Kamalambal to the marconi-core team

2014-03-21 Thread Kurt Griffiths
+1 million. I’ve been super impressed by Malini’s work and thoughtful
comments on multiple occasions.

On 3/21/14, 10:35 AM, "Amit Gandhi"  wrote:

>+1
>
>On 3/21/14, 11:17 AM, "Flavio Percoco"  wrote:
>
>>Greetings,
>>
>>I'd like to propose adding Malini Kamalambal to Marconi's core. Malini
>>has been an outstanding contributor for a long time. She's taken care
>>of Marconi's tests, benchmarks, gate integration, tempest support and
>>way more other things. She's also actively participated in the mailing
>>list discussions, she's contributed with thoughtful reviews and
>>participated in the project's meeting since she first joined the
>>project.
>>
>>Folks in favor or against please explicitly +1 / -1 the proposal.
>>
>>Thanks Malini, it's an honor to have you in the team.
>>
>>-- 
>>@flaper87
>>Flavio Percoco
>
>
>___
>OpenStack-dev mailing list
>OpenStack-dev@lists.openstack.org
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Marconi][TC] Withdraw graduation request

2014-03-20 Thread Kurt Griffiths
> I'd also like to thank the team and the overall community. The team
> for its hard work during the last cycle and the community for being there
> and providing such important feedback in this process.

+1, thanks again everyone for participating in the discussions and
driving towards a constructive outcome.

Kurt

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [legal-discuss] [Marconi] Why is marconi a queue implementation vs a provisioning API?

2014-03-20 Thread Kurt Griffiths
> The incorporation of AGPLv3 code into OpenStack projects is a
>significant decision

To be clear, Marconi does not incorporate any AGPL code itself; pymongo is
Apache2 licensed.

Concerns over AGPL were raised when Marconi was incubated, and I totally
respect that some folks are not comfortable with deploying something like
MongoDB that is AGPL-licensed. Those discussions precipitated the work we
have been doing on SQLAlchemy and Redis drivers. In fact, the sqla driver
was one of the graduation requirements put in place but the TC. Now, if
people want to use something else than what the Marconi team is already
working on, we are more than happy to have them contribute and help us
evolve the driver interface as needed.

On the subject of minimizing the number of different backends that
operators need to manage, some relief may be found by making the backends
for projects more customizable. For example, as we move more projects to
using the oslo caching library, that will give operators an opportunity to
migrate from memcached to, say, Redis. And if OpenStack Service X
(Marconi, for example) supports Redis as a backing store, now the operator
can reuse their Redis infrastructure and know-how.

The software industry has been moving towards hybrid NoSQL+SQL
architectures for a long time now, in order to create best-fit solutions;
I think we’ll see more OpenStack projects following this model in the
future, not less, and so we need to work out a happy path for supporting
these kinds of operating environments.

Kurt

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Pecan Evaluation for Marconi

2014-03-19 Thread Kurt Griffiths
> Only one project is using swob, and it is unlikely that will change.

That raises the question: *why* is that unlikely to change? Is it because
there are fundamental needs that are not met by Pecan? If I understand the
original charter for Oslo, it was to consolidate code already in use by
projects to facilitate sharing. It would seem to me that if swob has a
compelling reason to exist, and other data plane projects see value in it
(and I’m starting to think Marconi would be on that list), it would be a
good candidate for extraction to a standalone library. I personally see a
lot of alignment between swob and Falcon, and convergence between the two
libraries could be a productive path to explore.

Kurt

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Pecan Evaluation for Marconi

2014-03-19 Thread Kurt Griffiths
Thierry Carrez wrote:

> There was historically a lot of deviation, but as we add more projects
>that deviation is becoming more costly.

I totally understand the benefits of reducing the variance between
projects, and to be sure, I am not suggesting we have 10 different
libraries to do X.  However, as more projects are added, the variety of
requirements also increases, and it becomes very difficult for a single
library to meet all the projects' needs without some projects having to
make non-trivial compromises.

One approach to this that I’ve seen work well in other communities is to
define a small set of options that cover the major use cases.

> My question would be, can Pecan be improved to also cover Marconi's use
>case ? Could we have the best of both worlds (an appropriate tool *and*
>convergence) ?

That would certainly be ideal, but as always, the devil is in the details.

Pecan performance has been improving, so on that front there may be an
opportunity for convergence (assuming webob also improves in performance).
However, with respect to code paths and dependencies, I am not clear on
the path forward. Some dependencies could be removed by creating some kind
of “pecan-light” library, but that would need to be done in a way that
does not break projects that rely on those extra features. That would
still leave webob, which is an often-used selling point for Pecan. I am
not confident that webob can be modified to address Marconi and Swift's
needs without making backwards-incompatible changes to the library which
would obviously not be acceptable to the broader Python community.


Kurt

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Pecan Evaluation for Marconi

2014-03-18 Thread Kurt Griffiths
After reviewing the report below, I would recommend that Marconi continue using 
Falcon for the v1.1 API and then re-evaluate Pecan for v2.0 or possibly look at 
using swob.

I wanted to post my recommendation to the general list, because my request to 
continue using Falcon speaks to a broader issue. I think the community can 
agree that different projects have different needs. A good craftsman has more 
than one type of hammer in his toolbox.

Most of the OpenStack APIs to date have been control-plane APIs, not data 
plane. Swift is a notable exception. Falcon was created to address the needs of 
a high-traffic data-plane API, while Pecan’s history would suggest that it was 
conceived as a solution for building web apps and control plane APIs. Two major 
differentiators between these types of APIs:

  1.  Performance (i.e., latency, throughput, efficiency). With a data plane 
API, every ms counts, esp. when it comes to running a large cloud.
  2.  Diagnostics. When your service is piping a huge number of requests/sec, 
you become very susceptible to edge cases. Also, the amount of downtime your 
users will tolerate is quite low, since even a small hiccup means a whole lot 
of work can’t get done. Having a smaller code base, minimizing dependencies, 
and making the code that is there as straightforward, predictable and 
debuggable as possible becomes very important in situations like these.

Falcon and swob were created to address these needs, hence their use in data 
plane APIs.
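
As a point of reference, here is roughly what a minimal resource looks
like in Falcon (illustrative only, using the Falcon API of that era; not
Marconi’s actual code). Note how the request flows through a small,
explicit code path with no routing magic in between:

    import json

    import falcon

    class MessagesResource(object):
        def on_get(self, req, resp, queue_name):
            limit = req.get_param_as_int('limit') or 10
            # ...fetch up to `limit` messages for queue_name from storage...
            resp.body = json.dumps({'queue': queue_name, 'messages': []})
            resp.status = falcon.HTTP_200

    app = falcon.API()
    app.add_route('/v1/queues/{queue_name}/messages', MessagesResource())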

I’d love to get everyone’s thoughts on the requirements of data plane APIs they 
have been involved with (not necessarily OpenStack projects).

Kurt

From: Balaji Iyer <balaji.i...@rackspace.com>
Reply-To: OpenStack Dev <openstack-dev@lists.openstack.org>
Date: Tuesday, March 18, 2014 at 11:55 AM
To: OpenStack Dev <openstack-dev@lists.openstack.org>
Subject: [openstack-dev] [Marconi] Pecan Evaluation for Marconi

I work for Rackspace and I'm fairly new to the OpenStack ecosystem. Recently, I came 
across an opportunity to evaluate Pecan for Marconi and produce a comprehensive 
report. I have not worked with Pecan or Falcon prior to this evaluation, and 
have no vested interest in these two frameworks.

Evaluating frameworks is not always easy, but I have strived to cover as many 
details as applicable.  I have evaluated Pecan and Falcon only on how it fits 
Marconi and this should not be treated as a general evaluation for all 
products. It is always recommended to evaluate frameworks based on your 
product's requirements and its workload.

Benchmarking is not always easy, hence I have spent a good amount of time 
benchmarking these two frameworks using different tools and under different 
network and load conditions with Marconi. Some of the experiences I have 
mentioned in the report are subjective and reflect my own; you may have 
had a different experience with these frameworks, which is totally acceptable.

Full evaluation report is available here - 
https://wiki.openstack.org/wiki/Marconi/pecan-evaluation

Thought of sharing this with the community in the hope that someone may find 
this useful.

Thanks,
Balaji Iyer
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Marconi] Why is marconi a queue implementation vs a provisioning API?

2014-03-18 Thread Kurt Griffiths
I think we can agree that a data-plane API only makes sense if it is
useful to a large number of web and mobile developers deploying their apps
on OpenStack. Also, it only makes sense if it is cost-effective and
scalable for operators who wish to deploy such a service.

Marconi was born of practical experience and direct interaction with
prospective users. When Marconi was kicked off a few summits ago, the
community was looking for a multi-tenant messaging service to round out
the OpenStack portfolio. Users were asking operators for something easier
to work with and more web-friendly than established options such as AMQP.

To that end, we started drafting an HTTP-based API specification that
would afford several different messaging patterns, in order to support the
use cases that users were bringing to the table. We did this completely in
the open, and received lots of input from prospective users familiar with
a variety of message broker solutions, including more “cloudy” ones like
SQS and Iron.io.

The resulting design was a hybrid that supported what you might call
“claim-based” semantics ala SQS and feed-based semantics ala RSS.
Application developers liked the idea of being able to use one or the
other, or combine them to come up with new patterns according to their
needs. For example:

1. A video app can use Marconi to feed a worker pool of transcoders. When
a video is uploaded, it is stored in Swift and a job message is posted to
Marconi. Then, a worker claims the job and begins work on it. If the
worker crashes, the claim expires and the message becomes available to be
claimed by a different worker. Once the worker is finished with the job,
it deletes the message so that another worker will not process it, and
claims another message. Note that workers never “list” messages in this
use case; those endpoints in the API are simply ignored.

2. A backup service can use Marconi to communicate with hundreds of
thousands of backup agents running on customers' machines. Since Marconi
queues are extremely light-weight, the service can create a different
queue for each agent, and additional queues to broadcast messages to all
the agents associated with a single customer. In this last scenario, the
service would post a message to a single queue and the agents would simply
list the messages on that queue, and everyone would get the same message.
This messaging pattern is emergent, and requires no special routing setup
in advance from one queue to another.

3. A metering service for an Internet application can use Marconi to
aggregate usage data from a number of web heads. Each web head collects
several minutes of data, then posts it to Marconi. A worker periodically
claims the messages off the queue, performs the final aggregation and
processing, and stores the results in a DB. So far, this messaging pattern
is very much like example #1, above. However, since Marconi’s API also
affords the observer pattern via listing semantics, the metering service
could run an auditor that logs the messages as they go through the queue
in order to provide extremely valuable data for diagnosing problems in the
aggregated data.
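
To make these patterns concrete, here is a rough sketch of a producer, a
worker, and an observer talking to the v1 HTTP API via the requests library.
It is illustrative only: the endpoint, queue name, and payloads are invented,
and the exact field names should be double-checked against the v1 spec.

    import json
    import uuid

    import requests

    BASE = 'http://localhost:8888/v1/queues/jobs'  # invented endpoint
    HEADERS = {'Client-ID': str(uuid.uuid4()),
               'Content-Type': 'application/json'}

    # Producer: post a transcoding job (example #1).
    requests.post(BASE + '/messages', headers=HEADERS,
                  data=json.dumps([{'ttl': 300, 'body': {'video': 'abc'}}]))

    # Worker: claim some messages; each claimed message carries an href
    # embedding the claim ID, so deleting that href releases the work item.
    resp = requests.post(BASE + '/claims', headers=HEADERS,
                         data=json.dumps({'ttl': 300, 'grace': 60}))
    if resp.status_code == 201:
        for msg in resp.json():
            # ... transcode msg['body'] here ...
            requests.delete('http://localhost:8888' + msg['href'],
                            headers=HEADERS)

    # Observer (examples #2 and #3): list messages without claiming them.
    feed = requests.get(BASE + '/messages?echo=true', headers=HEADERS)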

Users are excited about what Marconi offers today, and we are continuing
to evolve the API based on their feedback.

Of course, app developers aren’t the only audience Marconi needs to serve.
Operators want something that is cost-effective, scales, and is
customizable for the unique needs of their target market.

While Marconi has plenty of room to improve (who doesn’t?), here is where
the project currently stands in these areas:

1. Customizable. Marconi transport and storage drivers can be swapped out,
and messages can be manipulated in-flight with custom filter drivers.
Currently we have MongoDB and SQLAlchemy drivers, and are exploring Redis
and AMQP brokers. Now, the v1.0 API does impose some constraints on the
backend in order to support the use cases mentioned earlier. For example,
an AMQP backend would only be able to support a subset of the current API.
Operators occasionally ask about AMQP broker support, in particular, and
we are exploring ways to evolve the API in order to support that.

2. Scalable. Operators can use Marconi’s HTTP transport to leverage their
existing infrastructure and expertise in scaling out web heads. When it
comes to the backend, for small deployments with minimal throughput needs,
we are providing a SQLAlchemy driver as a non-AGPL alternative to MongoDB.
For large-scale production deployments, we currently provide the MongoDB
driver and will likely add Redis as another option (there is already a POC
driver). And, of course, operators can provide drivers for NewSQL
databases, such as VelocityDB, that are very fast and scale extremely
well. In Marconi, every queue can be associated with a different backend
cluster. This allows operators to scale both up and out, according to what
is most cost-effective for them. Marconi's app-level sharding is currently
done using a lo

Re: [openstack-dev] [Marconi] Pecan Evaluation for Marconi

2014-03-18 Thread Kurt Griffiths
Kudos to Balaji for working so hard on this. I really appreciate his candid 
feedback on both frameworks.

After reviewing his report, I would recommend that Marconi continue using 
Falcon for the v1.1 API and then re-evaluate Pecan for v2.0. Pecan will 
continue to improve over time. We should also look for opportunities to 
contribute to the Pecan ecosystem.

Kurt G. | @kgriffs | Marconi PTL

From: Balaji Iyer mailto:balaji.i...@rackspace.com>>
Reply-To: OpenStack Dev 
mailto:openstack-dev@lists.openstack.org>>
Date: Tuesday, March 18, 2014 at 11:55 AM
To: OpenStack Dev 
mailto:openstack-dev@lists.openstack.org>>
Subject: [openstack-dev] [Marconi] Pecan Evaluation for Marconi

I work for Rackspace and I'm fairly new to the OpenStack ecosystem. Recently, I came 
across an opportunity to evaluate Pecan for Marconi and produce a comprehensive 
report. I have not worked with Pecan or Falcon prior to this evaluation, and 
have no vested interest in these two frameworks.

Evaluating frameworks is not always easy, but I have strived to cover as many 
details as applicable.  I have evaluated Pecan and Falcon only on how it fits 
Marconi and this should not be treated as a general evaluation for all 
products. It is always recommended to evaluate frameworks based on your 
product's requirements and its workload.

Benchmarking is not always easy, hence I have spent a good amount of time 
benchmarking these two frameworks using different tools and under different 
network and load conditions with Marconi. Some of the experiences I have 
mentioned in the report are quite subjective and reflect my own experience; you 
may have had a different experience with these frameworks, which is entirely fair.

Full evaluation report is available here - 
https://wiki.openstack.org/wiki/Marconi/pecan-evaluation

Thought of sharing this with the community in the hope that someone may find 
this useful.

Thanks,
Balaji Iyer
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Solum] Proposed Core Reviewer Changes

2014-03-18 Thread Kurt Griffiths
+1

On 3/18/14, 12:13 AM, "Adrian Otto"  wrote:

>Solum Cores,
>
>I propose the following changes to the Solum core reviewer team:
>
>+gokrokve
>+julienvey
>+devdatta-kulkarni
>-kgriffs (inactivity)
>-russelb (inactivity)
>
>Please reply with your +1 votes to proceed with this change, or any
>remarks to the contrary.
>
>Thanks,
>
>Adrian
>___
>OpenStack-dev mailing list
>OpenStack-dev@lists.openstack.org
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Constructive Conversations

2014-03-07 Thread Kurt Griffiths
Folks,

I’m sure that I’m not the first person to bring this up, but I’d like to get 
everyone’s thoughts on what concrete actions we, as a community, can take to 
improve the status quo.

There have been a variety of instances where community members have expressed 
their ideas and concerns via email or at a summit, or simply submitted a patch 
that perhaps challenges someone’s opinion of The Right Way to Do It, and 
responses to that person have been far less constructive than they could have 
been[1]. In an open community, I don’t expect every person who comments on a ML 
post or a patch to be congenial, but I do expect community leaders to lead by 
example when it comes to creating an environment where every person’s voice is 
valued and respected.

What if every time someone shared an idea, they could do so without fear of 
backlash and bullying? What if people could raise their concerns without being 
summarily dismissed? What if “seeking first to understand”[2] were a core value 
in our culture? It would not only accelerate our pace of innovation, but also 
help us better understand the needs of our cloud users, helping ensure we 
aren’t just building OpenStack in the right way, but also building the right 
OpenStack.

We need open minds to build an open cloud.

Many times, we do have wonderful, constructive discussions, but the times we 
don’t cause wounds in the community that take a long time to heal. 
Psychologists tell us that it takes a lot of good experiences to make up for 
one bad one. I will be the first to admit I’m not perfect. Communication is 
hard. But I’m convinced we can do better. We must do better.

How can we build on what is already working, and make the bad experiences as 
rare as possible?

A few ideas to seed the discussion:

  *   Identify a set of core values that the community already embraces for the 
most part, and put them down “on paper.”[3] Leaders can keep these values fresh 
in everyone’s minds by (1) leading by example, and (2) referring to them 
regularly in conversations and talks.
  *   PTLs can add mentoring skills and a mindset of “seeking first to 
understand” to their list of criteria for evaluating proposals to add a 
community member to a core team.
  *   Get people together in person, early and often. Mid-cycle meetups and 
mini-summits provide much higher-resolution communication channels than email 
and IRC, and are great ways to clear up misunderstandings, build relationships 
of trust, and generally get everyone pulling in the same direction.

What else can we do?

Kurt

[1] There are plenty of examples, going back years. Anyone who has been in the 
community very long will be able to recall some to mind. Recent ones I thought 
of include Barbican’s initial request for incubation on the ML, dismissive and 
disrespectful exchanges in some of the design sessions in Hong Kong (bordering 
on personal attacks), and the occasional “WTF?! This is the dumbest idea ever!” 
patch comment.
[2] https://www.stephencovey.com/7habits/7habits-habit5.php
[3] We already have a code of 
conduct but I think 
a list of core values would be easier to remember and allude to in day-to-day 
discussions. I’m trying to think of ways to make this idea practical. We need 
to stand up for our values, not just say we have them.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Proposal to move from Freenode to OFTC

2014-03-06 Thread Kurt Griffiths
> The fact is though that Freenode has had significant service degradation
>due to DDoS attacks for quite some time

Rather than jumping ship, is there anything we as a community can do to
help Freenode? This would obviously require a commitment of time/money,
but it could be worth it for something we rely on so heavily.

-Kurt

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [marconi] graduation review meeting

2014-03-06 Thread Kurt Griffiths
Team, we will be discussing Marconi graduation from incubation in a couple 
weeks at the TC meeting, March 18th at 20:00 
UTC.

It would be great to have as many people there as possible to help answer 
questions, etc.

Thanks!

Kurt G. | @kgriffs
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [marconi] Proposal to add Fei Long Wang (flwang) as a core reviewer

2014-03-04 Thread Kurt Griffiths
The poll has closed. flwang has been promoted to Marconi core.

@kgriffs
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [marconi] integration with logstash

2014-03-03 Thread Kurt Griffiths
Here’s an interesting hack. People are getting creative in the way they use 
Marconi (this patch uses Rackspace’s deployment of Marconi).

https://github.com/paulczar/logstash-contrib/commit/8bfe93caf1c66d94690e9d9c2ecf9ee6b458b1d9

@kgriffs
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [marconi] Proposal to add Fei Long Wang (flwang) as a core reviewer

2014-03-03 Thread Kurt Griffiths
Hi folks, I’d like to propose adding Fei Long Wang (flwang) as a core reviewer 
on the Marconi team. He has been contributing regularly over the past couple of 
months, and has proven to be a careful reviewer with good judgment.

All Marconi ATC’s, please respond with a +1 or –1.

Cheers,
Kurt G. | @kgriffs | Marconi PTL
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Solum] Proposed changes to solum-core

2014-01-27 Thread Kurt Griffiths
+1

On 1/27/14, 11:54 AM, "Monty Taylor"  wrote:

>On 01/24/2014 05:32 PM, Adrian Otto wrote:
>> Solum Core Reviewers,
>>
>> I propose the following changes to solum-core:
>>
>> +asalkeld
>> +noorul
>> -mordred
>>
>> Thanks very much to mordred for helping me to bootstrap the reviewer
>>team. Please reply with your votes.
>
>+1
>
>My pleasure - you guys seem like you're off to the races -a nd asalkeld
>and noorul are both doing great.
>
>
>___
>OpenStack-dev mailing list
>OpenStack-dev@lists.openstack.org
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [wsme] Undefined attributes in WSME

2014-01-17 Thread Kurt Griffiths
FWIW, I believe Nova is looking at using JSON Schema as well, since they
need to handle API extensions. This came up during a design session at the
HK summit.
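
For anyone who has not used it, below is a minimal sketch of schema-based
validation with the jsonschema library. The schema and payload are invented;
leaving additionalProperties open is what makes room for extensions.

    import jsonschema

    schema = {
        'type': 'object',
        'properties': {
            'name': {'type': 'string', 'minLength': 1},
            'flavorRef': {'type': 'string'},
        },
        'required': ['name', 'flavorRef'],
        # Extension attributes are tolerated rather than rejected.
        'additionalProperties': True,
    }

    try:
        jsonschema.validate({'name': 'web-1', 'flavorRef': '2'}, schema)
    except jsonschema.ValidationError as exc:
        print('400 Bad Request: %s' % exc.message)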

On 1/12/14, 5:33 PM, "Jamie Lennox"  wrote:

>I would prefer not to have keystone using yet another framework from the
>rest of openstack


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [marconi] Icehouse roadmap and Graduation tracking

2014-01-10 Thread Kurt Griffiths
Hi folks, I put together a tracking blueprint for us to refer to in our team 
meetings:

https://blueprints.launchpad.net/marconi/+spec/graduation

Also, here is an outline of what I want to accomplish for Icehouse:

https://wiki.openstack.org/wiki/Marconi/roadmaps/icehouse

Feedback is welcome, as always.

Cheers,
@kgriffs
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Solum][Pecan][Security] Pecan SecureController vs. Nova policy

2014-01-08 Thread Kurt Griffiths
Yeah, that could work. The main thing is to try and keep policy control in one 
place if you can, rather than sprinkling it all over the place.
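
For reference, here is a minimal sketch of that kind of hook. The role check
is just a placeholder; in practice you would delegate the decision to the
policy framework rather than hard-coding it.

    from pecan import abort
    from pecan.hooks import PecanHook


    class PolicyHook(PecanHook):
        """Runs before each controller; rejects disallowed requests."""

        def before(self, state):
            # Headers were populated by the keystone auth middleware
            # earlier in the WSGI pipeline.
            roles = state.request.headers.get('X-Roles', '').split(',')
            if 'admin' not in roles and state.request.method != 'GET':
                abort(403, 'policy does not allow this operation')

    # Installed once, e.g.: pecan.make_app(root, hooks=[PolicyHook()])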

From: Georgy Okrokvertskhov 
mailto:gokrokvertsk...@mirantis.com>>
Reply-To: OpenStack Dev 
mailto:openstack-dev@lists.openstack.org>>
Date: Wednesday, January 8, 2014 at 10:41 AM
To: OpenStack Dev 
mailto:openstack-dev@lists.openstack.org>>
Subject: Re: [openstack-dev] [Solum][Pecan][Security] Pecan SecureController 
vs. Nova policy

Hi Kurt,

As for WSGI middleware I think about Pecan hooks which can be added before 
actual controller call. Here is an example how we added a hook for keystone 
information collection: 
https://review.openstack.org/#/c/64458/4/solum/api/auth.py

What do you think, will this approach with Pecan hooks work?

Thanks
Georgy


On Tue, Jan 7, 2014 at 2:25 PM, Kurt Griffiths 
mailto:kurt.griffi...@rackspace.com>> wrote:
You might also consider doing this in WSGI middleware:

Pros:

  *   Consolidates policy code in one place, making it easier to audit and 
maintain
  *   Simple to turn policy on/off – just don’t insert the middleware when off!
  *   Does not preclude the use of oslo.policy for rule checking
  *   Blocks unauthorized requests before they have a chance to touch the web 
framework or app. This reduces your attack surface and can improve performance  
 (since the web framework has yet to parse the request).

Cons:

  *   Doesn't work for policies that require knowledge that isn’t available 
this early in the pipeline (without having to duplicate a lot of code)
  *   You have to parse the WSGI environ dict yourself (this may not be a big 
deal, depending on how much knowledge you need to glean in order to enforce the 
policy).
  *   You have to keep your HTTP path matching in sync with your route 
definitions in the code. If you have full test coverage, you will know when you 
get out of sync. That being said, API routes tend to be quite stable in 
relation to other parts of the code implementation once you have settled on 
your API spec.

I’m sure there are other pros and cons I missed, but you can make your own best 
judgement whether this option makes sense in Solum’s case.

From: Doug Hellmann 
mailto:doug.hellm...@dreamhost.com>>
Reply-To: OpenStack Dev 
mailto:openstack-dev@lists.openstack.org>>
Date: Tuesday, January 7, 2014 at 6:54 AM
To: OpenStack Dev 
mailto:openstack-dev@lists.openstack.org>>
Subject: Re: [openstack-dev] [Solum][Pecan][Security] Pecan SecureController 
vs. Nova policy




On Mon, Jan 6, 2014 at 6:26 PM, Georgy Okrokvertskhov 
mailto:gokrokvertsk...@mirantis.com>> wrote:
Hi Doug,

Thank you for pointing to this code. As I can see, you use the OpenStack policy 
framework rather than Pecan's security features. How do you implement 
fine-grained access control, e.g., users who may only read versus writers and 
admins? Can you block part of the API methods for a specific user, such as 
access to create methods for a specific user role?

The policy enforcement isn't simple on/off switching in ceilometer, so we're 
using the policy framework calls in a couple of places within our API code 
(look through v2.py for examples). As a result, we didn't need to build much on 
top of the existing policy module to interface with pecan.

For your needs, it shouldn't be difficult to create a couple of decorators to 
combine with pecan's hook framework to enforce the policy, which might be less 
complex than trying to match the operating model of the policy system to 
pecan's security framework.

This is the sort of thing that should probably go through Oslo and be shared, 
so please consider contributing to the incubator when you have something 
working.

Doug



Thanks
Georgy


On Mon, Jan 6, 2014 at 2:45 PM, Doug Hellmann 
mailto:doug.hellm...@dreamhost.com>> wrote:



On Mon, Jan 6, 2014 at 2:56 PM, Georgy Okrokvertskhov 
mailto:gokrokvertsk...@mirantis.com>> wrote:
Hi,

In the Solum project we will need to implement security and ACLs for the Solum 
API. Currently we use the Pecan framework for the API. Pecan has its own 
security model based on the SecureController class. At the same time, OpenStack 
widely uses a policy mechanism based on JSON files to control access to 
specific API methods.

I wonder if anyone has experience implementing security and ACLs with the Pecan 
framework. What is the right way to provide security for the API?

In ceilometer we are using the keystone middleware and the policy framework to 
manage arguments that constrain the queries handled by the storage layer.

http://git.openstack.org/cgit/openstack/ceilometer/tree/ceilometer/api/acl.py

and

http://git.openstack.org/cgit/openstack/ceilometer/tree/ceilometer/api/controllers/v2.py#n337

Doug



Thanks
Georgy

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org<mailto:Op

Re: [openstack-dev] [Solum][Pecan][Security] Pecan SecureController vs. Nova policy

2014-01-07 Thread Kurt Griffiths
You might also consider doing this in WSGI middleware:

Pros:

  *   Consolidates policy code in one place, making it easier to audit and 
maintain
  *   Simple to turn policy on/off – just don’t insert the middleware when off!
  *   Does not preclude the use of oslo.policy for rule checking
  *   Blocks unauthorized requests before they have a chance to touch the web 
framework or app. This reduces your attack surface and can improve performance  
 (since the web framework has yet to parse the request).

Cons:

  *   Doesn't work for policies that require knowledge that isn’t available 
this early in the pipeline (without having to duplicate a lot of code)
  *   You have to parse the WSGI environ dict yourself (this may not be a big 
deal, depending on how much knowledge you need to glean in order to enforce the 
policy).
  *   You have to keep your HTTP path matching in sync with your route 
definitions in the code. If you have full test coverage, you will know when you 
get out of sync. That being said, API routes tend to be quite stable in 
relation to other parts of the code implementation once you have settled on 
your API spec.

I’m sure there are other pros and cons I missed, but you can make your own best 
judgement whether this option makes sense in Solum’s case.
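
To make the middleware option concrete, here is a minimal sketch. The rule
format and the example route are hypothetical; a real implementation would
delegate the check to oslo.policy instead of a dict lookup.

    class PolicyMiddleware(object):
        """Rejects requests before they ever reach the web framework."""

        def __init__(self, app, rules):
            self.app = app
            self.rules = rules  # (method, path prefix) -> required role

        def __call__(self, environ, start_response):
            method = environ['REQUEST_METHOD']
            path = environ.get('PATH_INFO', '')
            roles = environ.get('HTTP_X_ROLES', '').split(',')

            for (m, prefix), required in self.rules.items():
                if method == m and path.startswith(prefix):
                    if required not in roles:
                        start_response('403 Forbidden',
                                       [('Content-Type', 'text/plain')])
                        return [b'Forbidden by policy\n']

            return self.app(environ, start_response)

    # app = PolicyMiddleware(app, {('POST', '/v1/assemblies'): 'admin'})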

From: Doug Hellmann 
mailto:doug.hellm...@dreamhost.com>>
Reply-To: OpenStack Dev 
mailto:openstack-dev@lists.openstack.org>>
Date: Tuesday, January 7, 2014 at 6:54 AM
To: OpenStack Dev 
mailto:openstack-dev@lists.openstack.org>>
Subject: Re: [openstack-dev] [Solum][Pecan][Security] Pecan SecureController 
vs. Nova policy




On Mon, Jan 6, 2014 at 6:26 PM, Georgy Okrokvertskhov 
mailto:gokrokvertsk...@mirantis.com>> wrote:
Hi Doug,

Thank you for pointing to this code. As I can see, you use the OpenStack policy 
framework rather than Pecan's security features. How do you implement 
fine-grained access control, e.g., users who may only read versus writers and 
admins? Can you block part of the API methods for a specific user, such as 
access to create methods for a specific user role?

The policy enforcement isn't simple on/off switching in ceilometer, so we're 
using the policy framework calls in a couple of places within our API code 
(look through v2.py for examples). As a result, we didn't need to build much on 
top of the existing policy module to interface with pecan.

For your needs, it shouldn't be difficult to create a couple of decorators to 
combine with pecan's hook framework to enforce the policy, which might be less 
complex than trying to match the operating model of the policy system to 
pecan's security framework.

This is the sort of thing that should probably go through Oslo and be shared, 
so please consider contributing to the incubator when you have something 
working.

Doug



Thanks
Georgy


On Mon, Jan 6, 2014 at 2:45 PM, Doug Hellmann 
mailto:doug.hellm...@dreamhost.com>> wrote:



On Mon, Jan 6, 2014 at 2:56 PM, Georgy Okrokvertskhov 
mailto:gokrokvertsk...@mirantis.com>> wrote:
Hi,

In the Solum project we will need to implement security and ACLs for the Solum 
API. Currently we use the Pecan framework for the API. Pecan has its own 
security model based on the SecureController class. At the same time, OpenStack 
widely uses a policy mechanism based on JSON files to control access to 
specific API methods.

I wonder if anyone has experience implementing security and ACLs with the Pecan 
framework. What is the right way to provide security for the API?

In ceilometer we are using the keystone middleware and the policy framework to 
manage arguments that constrain the queries handled by the storage layer.

http://git.openstack.org/cgit/openstack/ceilometer/tree/ceilometer/api/acl.py

and

http://git.openstack.org/cgit/openstack/ceilometer/tree/ceilometer/api/controllers/v2.py#n337

Doug



Thanks
Georgy

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




--
Georgy Okrokvertskhov
Technical Program Manager,
Cloud and Infrastructure Services,
Mirantis
http://www.mirantis.com
Tel. +1 650 963 9828
Mob. +1 650 996 3284

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [marconi] Meeting agenda for tomorrow at 1500 UTC

2013-12-16 Thread Kurt Griffiths
The Marconi project team holds a weekly meeting in #openstack-meeting-alt on 
Tuesdays, 1500 
UTC.

The next meeting is Tomorrow, Dec. 10. Everyone is welcome, but please take a 
minute to review the wiki before attending for the first time:

http://wiki.openstack.org/marconi

This week we will discuss progress made toward graduation, and should have some 
time to triage new bugs/bps.


  |""-..._
  '-._"'`|
  \  ``` ``"---... _ |
  |  /  /#\
  }--..__..-{   ###
 } _   _ {
   6   6  
{^}
   {{\  -=-  /}}
   {{{;.___.;}}}
{{{)   (}}}'
 `""'"':   :'"'"'`
 after/jgs  `@`



Proposed Agenda:

  *   Review actions from last time
  *   Review Graduation BPs/Bugs
  *   Updates on bugs
  *   Updates on blueprints
  *   SQLAlchemy storage driver strategy
  *   Open discussion (time permitting)

If you have additions to the agenda, please add them to the wiki and note your 
IRC name so we can call on you during the meeting:

http://wiki.openstack.org/Meetings/Marconi

Cheers,

---
@kgriffs
Kurt Griffiths

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Unified Guest Agent proposal

2013-12-13 Thread Kurt Griffiths
FWIW, Marconi can easily deliver sub-second latency even with lots of
clients fast-polling. We are also considering a long-polling feature that
will reduce latency further for HTTP clients.

On 12/13/13, 2:41 PM, "Fox, Kevin M"  wrote:

>A second or two of latency perhaps shaved off by using something like
>AMQP or STOMP might not be justified for its added complexity.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] How to best make User Experience a priority in every project

2013-12-09 Thread Kurt Griffiths
I love the idea of treating usability as a first-class citizen; to do that, we 
definitely need a core set of people who are passionate about the topic in 
order to keep it alive in the OpenStack gestalt. Contributors tend to 
prioritize work on new, concrete features over “non-functional” requirements 
that are perceived as tedious and/or abstract. Common (conscious and 
unconscious) rationalizations include:

  *   I don’t have time
  *   It’s too hard
  *   I don’t know how

Over time, I think we as OpenStack should strive toward a rough consensus on 
basic UX tenets, similar to what we have wrt architecture (i.e., Basic Design 
Tenets). PTLs should 
champion these tenets within their respective teams, mentoring individual 
members on the why and how, and be willing to occasionally postpone sexy new 
features, in order to free the requisite bandwidth for making OpenStack more 
pleasant to use.

IMO, our initiatives around security, usability, documentation, testing etc. 
will only succeed inasmuch as we make them a part of our culture and identity.

@kgriffs
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Unified Guest Agent proposal

2013-12-09 Thread Kurt Griffiths
This list of features makes me very nervous from a security standpoint. Are we 
talking about giving an agent an arbitrary shell command or file to install, 
and it goes and does that, or are we simply triggering a preconfigured action 
(at the time the agent itself was installed)?

From: Steven Dake mailto:sd...@redhat.com>>
Reply-To: OpenStack Dev 
mailto:openstack-dev@lists.openstack.org>>
Date: Monday, December 9, 2013 at 11:41 AM
To: OpenStack Dev 
mailto:openstack-dev@lists.openstack.org>>
Subject: Re: [openstack-dev] Unified Guest Agent proposal

In terms of features:
* run shell commands
* install files (with selinux properties as well)
* create users and groups (with selinux properties as well)
* install packages via yum, apt-get, rpm, pypi
* start and enable system services for systemd or sysvinit
* Install and unpack source tarballs
* run scripts
* Allow grouping, selection, and ordering of all of the above operations
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [marconi] Team meeting agenda for tomorrow @ 1500 UTC

2013-12-09 Thread Kurt Griffiths
The Marconi project team holds a weekly meeting in #openstack-meeting-alt
on Tuesdays, 1500 
UTC.

The next meeting is Tomorrow, Dec. 10. Everyone is welcome, but please
take a minute to review the wiki before attending for the first time:

http://wiki.openstack.org/marconi

This week we will be cleaning up our bps and bugs, making sure we have
everything scheduled appropriately.

 .-.
o   \ .-.
   ..'   \
 .'o)  / `.   o
/ |
\_)   /-.
  '_.`\  \
   `.  |  \
|   \ |
.--/`-. / /
  .'.-/`-. `.  .\|
 /.' /`._ `-'-.
(|__/`-..`-   '-._ \
   |`--.'-._ `  ||\ \
   || #   /-.   `   /   || \|
   ||   #/   `--'  /  /_::_|)__
   `||-._.-`  /  ||``
 \-.___.` | / || #  |
  \   | | ||   #  # |
  /`.___.'\ |.`||
  | /`.__.'|'.`
__/ \__/ \
   /__.-.)  /__.-.) LGB

Proposed Agenda:

  *   Review actions from last time
  *   Review Graduation BPs/Bugs
  *   Updates on bugs
  *   Updates on blueprints
  *   Open discussion (time permitting)

If you have additions to the agenda, please add them to the wiki and
note your IRC name so we can call on you during the meeting:

http://wiki.openstack.org/Meetings/Marconi

Cheers,

---
@kgriffs
Kurt Griffiths

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ceilometer] [marconi] Notifications brainstorming session tomorrow @ 1500 UTC

2013-12-06 Thread Kurt Griffiths
That’s a good question. IMO, this is an important use case, and should be 
considered within the scope of the project.

Rackspace uses a precursor to Marconi for its Cloud Backup product, and it has 
worked out well for showing semi-realtime updates, e.g., progress on active 
backup jobs. We have a large number of backup agents posting events at any 
given time. The web-based control panel polls every few seconds for updates, 
but the message service was optimized for frequent, low-traffic requests like 
that, so it hasn’t been a real problem.

I’ve tried to promote a performance-oriented mindset from the beginning of the 
Marconi project, and I would like to give a shout-out to the team for the fine 
work they’ve done in this area to date; queues scale quite well, and benchmarks 
have shown promising throughput and latency numbers that will only improve as 
we continue to tune the existing code (and add transport and storage drivers 
designed for ultra-high-throughput use cases).

That being said, we definitely need to consider the load on the various 
OpenStack components, themselves, for generating events (i.e., pushing events 
to a queue). I would love to learn more about the requirements of individual 
project teams in this respect (those who are interested in surfacing events to 
end users).

From: Ian Wells mailto:ijw.ubu...@cack.org.uk>>
Reply-To: OpenStack Dev 
mailto:openstack-dev@lists.openstack.org>>
Date: Wednesday, December 4, 2013 at 8:30 AM
To: OpenStack Dev 
mailto:openstack-dev@lists.openstack.org>>
Subject: Re: [openstack-dev] [ceilometer] [marconi] Notifications brainstorming 
session tomorrow @ 1500 UTC

How frequent do you imagine these notifications being?  There's a wide 
variation here between the 'blue moon' case where disk space is low and 
frequent notifications of things like OS performance, which you might want to 
display in Horizon or another monitoring tool on an every-few-seconds basis, or 
instance state change, which is usually driven by polling at present.

I'm not saying that we should necessarily design notifications for the latter 
cases, because it introduces potentially quite a lot of user-demanded load on 
the Openstack components, I'm just asking for a statement of intent.
--
Ian.


On 4 December 2013 16:09, Kurt Griffiths 
mailto:kurt.griffi...@rackspace.com>> wrote:
Thanks! We touched on this briefly during the chat yesterday, and I will
make sure it gets further attention.

On 12/3/13, 3:54 AM, "Julien Danjou" 
mailto:jul...@danjou.info>> wrote:

>On Mon, Dec 02 2013, Kurt Griffiths wrote:
>
>> Following up on some conversations we had at the summit, I’d like to get
>> folks together on IRC tomorrow to crystalize the design for a
>>notifications
>> project under the Marconi program. The project’s goal is to create a
>>service
>> for surfacing events to end users (where a user can be a cloud app
>> developer, or a customer using one of those apps). For example, a
>>developer
>> may want to be notified when one of their servers is low on disk space.
>> Alternatively, a user of MyHipsterApp may want to get a text when one of
>> their friends invites them to listen to That Band You’ve Never Heard Of.
>>
>> Interested? Please join me and other members of the Marconi team
>>tomorrow,
>> Dec. 3rd, for a brainstorming session in #openstack-marconi at 1500
>>
>>UTC<http://www.timeanddate.com/worldclock/fixedtime.html?hour=15&min=0&se
>>c=0>.
>> Your contributions are crucial to making this project awesome.
>>
>> I’ve seeded an etherpad for the discussion:
>>
>> https://etherpad.openstack.org/p/marconi-notifications-brainstorm
>
>This might (partially) overlap with what Ceilometer is doing with its
>alarming feature, and one of the blueprint our roadmap for Icehouse:
>
>  https://blueprints.launchpad.net/ceilometer/+spec/alarm-on-notification
>
>While it doesn't solve the use case at the same level, the technical
>mechanism is likely to be similar.
>
>--
>Julien Danjou
># Free Software hacker # independent consultant
># http://julien.danjou.info


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org<mailto:OpenStack-dev@lists.openstack.org>
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [marconi] New meeting time

2013-12-04 Thread Kurt Griffiths
Sorry to change things up again, but it’s been requested that we move our
meeting to Tuesday at 1500 UTC instead of Monday, since a lot of people’s
Mondays are crazy busy as it is. Unless anyone objects, let’s plan on
doing that starting with our next meeting (Dec 10).

On 11/25/13, 6:22 PM, "Kurt Griffiths" 
wrote:

>OK, I’ve changed the time. Starting next Monday (2 Dec.) we will be
>meeting at 1500 UTC in #openstack-meeting-alt.
>
>See also: https://wiki.openstack.org/wiki/Meetings/Marconi
>
>On 11/25/13, 11:33 AM, "Flavio Percoco"  wrote:
>
>>On 25/11/13 17:05 +, Amit Gandhi wrote:
>>>Works for me.
>>
>>Works for me!
>>
>>-- 
>>@flaper87
>>Flavio Percoco
>
>
>___
>OpenStack-dev mailing list
>OpenStack-dev@lists.openstack.org
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ceilometer] [marconi] Notifications brainstorming session tomorrow @ 1500 UTC

2013-12-04 Thread Kurt Griffiths
Thanks! We touched on this briefly during the chat yesterday, and I will
make sure it gets further attention.

On 12/3/13, 3:54 AM, "Julien Danjou"  wrote:

>On Mon, Dec 02 2013, Kurt Griffiths wrote:
>
>> Following up on some conversations we had at the summit, I’d like to get
>> folks together on IRC tomorrow to crystalize the design for a
>>notifications
>> project under the Marconi program. The project’s goal is to create a
>>service
>> for surfacing events to end users (where a user can be a cloud app
>> developer, or a customer using one of those apps). For example, a
>>developer
>> may want to be notified when one of their servers is low on disk space.
>> Alternatively, a user of MyHipsterApp may want to get a text when one of
>> their friends invites them to listen to That Band You’ve Never Heard Of.
>>
>> Interested? Please join me and other members of the Marconi team
>>tomorrow,
>> Dec. 3rd, for a brainstorming session in #openstack-marconi at 1500
>> 
>>UTC<http://www.timeanddate.com/worldclock/fixedtime.html?hour=15&min=0&se
>>c=0>.
>> Your contributions are crucial to making this project awesome.
>>
>> I’ve seeded an etherpad for the discussion:
>>
>> https://etherpad.openstack.org/p/marconi-notifications-brainstorm
>
>This might (partially) overlap with what Ceilometer is doing with its
>alarming feature, and one of the blueprint our roadmap for Icehouse:
>
>  https://blueprints.launchpad.net/ceilometer/+spec/alarm-on-notification
>
>While it doesn't solve the use case at the same level, the technical
>mechanism is likely to be similar.
>
>-- 
>Julien Danjou
># Free Software hacker # independent consultant
># http://julien.danjou.info


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ceilometer] [marconi] Notifications brainstorming session tomorrow @ 1500 UTC

2013-12-02 Thread Kurt Griffiths
Folks,

Want to surface events to end users?

Following up on some conversations we had at the summit, I’d like to get folks 
together on IRC tomorrow to crystalize the design for a notifications project 
under the Marconi program. The project’s goal is to create a service for 
surfacing events to end users (where a user can be a cloud app developer, or a 
customer using one of those apps). For example, a developer may want to be 
notified when one of their servers is low on disk space. Alternatively, a user 
of MyHipsterApp may want to get a text when one of their friends invites them 
to listen to That Band You’ve Never Heard Of.

Interested? Please join me and other members of the Marconi team tomorrow, Dec. 
3rd, for a brainstorming session in #openstack-marconi at 1500 
UTC. 
Your contributions are crucial to making this project awesome.

I’ve seeded an etherpad for the discussion:

https://etherpad.openstack.org/p/marconi-notifications-brainstorm

@kgriffs




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [marconi] New meeting time

2013-11-25 Thread Kurt Griffiths
OK, I’ve changed the time. Starting next Monday (2 Dec.) we will be
meeting at 1500 UTC in #openstack-meeting-alt.

See also: https://wiki.openstack.org/wiki/Meetings/Marconi

On 11/25/13, 11:33 AM, "Flavio Percoco"  wrote:

>On 25/11/13 17:05 +, Amit Gandhi wrote:
>>Works for me.
>
>Works for me!
>
>-- 
>@flaper87
>Flavio Percoco


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [marconi] New meeting time

2013-11-25 Thread Kurt Griffiths
Hi folks,

To make it easier on our friends joining the project from across the world, I’d 
like to propose moving our weekly meeting time back one hour to 1500 UTC:

http://www.timeanddate.com/worldclock/fixedtime.html?hour=15&min=0&sec=0

Any objections or alternate suggestions?

---
@kgriffs
Kurt Griffiths
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [marconi] Websocket transport

2013-11-14 Thread Kurt Griffiths
Hi folks,

We have been discussing doing at least a basic/PoC TCP transport as a way to 
keep ourselves honest wrt the API abstraction work that was recently started. I 
would like to propose using websockets for 
this work, for a few reasons:

  1.  It saves us from having to invent our own, non-standard framing protocol.
  2.  It already defines push semantics that we could leverage later for 
streaming notifications
  3.  It would lay the groundwork for investigating the viability and security 
considerations around allowing browser-based apps to interact directly with 
queues
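
To make the idea concrete, here is a sketch of what a client interaction might
look like with JSON framing. Everything below is hypothetical: the endpoint,
the action names, and the framing itself are all open for discussion.

    import asyncio
    import json

    import websockets  # third-party client library, illustration only


    async def post_message():
        # Hypothetical framing: action and body travel as one JSON object.
        async with websockets.connect('ws://localhost:9000/v1') as ws:
            await ws.send(json.dumps({
                'action': 'message_post',
                'queue': 'demo',
                'body': {'event': 'hello'},
            }))
            print(await ws.recv())  # server ack, also JSON

    asyncio.get_event_loop().run_until_complete(post_message())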

Thoughts?

@kgriffs
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [RFC] Straw man to start the incubation / graduation requirements discussion

2013-11-13 Thread Kurt Griffiths
> Also does it place a requirement that all projects wanting to request
>incubation to be placed in stackforge ? That sounds like a harsh
>requirement if we were to reject them.

I think that anything that encourages projects to get used to the
OpenStack development process sooner, rather than later, is a good thing.

If becoming incubated does not require the team to have a track record of
proper formatting of commit messages, using Gerrit for reviews, use of
Launchpad for bugs and blueprints, etc., we are setting ourselves up for a
lot of pain. Being lax on incubation will only encourage teams to code in
a slapdash, “under the radar” fashion. IMO, it also makes it harder for
the TC to evaluate the team and project, since there are fewer artifacts
they can use as data points.

Perhaps it was overkill, but the incubation requirements (somewhat
self-imposed) for the Marconi team included:

1. Code against stackforge, follow the standard review process for patches
via Gerrit and at least 2 x +2s before approving patches, and require
OpenStack-standard commit messages.
2. Use Launchpad and the OpenStack wiki for specs and project management
(blueprints, bug tracking, milestones).
3. Hold regular team meetings in #openstack-meeting(-alt), following the
standard process there (e.g., using meetbot, publishing agenda on wiki and
mailing list before each meeting, archiving meeting notes on the wiki)
4. Create a comprehensive unit test suite.
5. Define and enforce a HACKING guide based on standards culled from
upstream OpenStack projects.
6. Demonstrate a pattern of consistent contribution (both code and
design) from multiple organizations/vendors
7. Solicit code reviews from TC members, address feedback to their
satisfaction.
8. Solicit community feedback on the project’s features, code, and overall
design--both early and often.

I guess I view the pre-incubation time period first of all as a “practice
period” for the team to get used to developing things *together*,
engendering esprit de corps across company/vendor boundaries, and getting
used to the standard OpenStack tools and processes for those team members
not used to them already.

Second, pre-incubation is a time for getting the implementation up to
snuff so that during incubation you can focus on polishing off rough
edges and integrating with upstream projects. Incubation is probably not
the time to be rewriting massive amounts of code and/or redesigning your
API; otherwise you create a moving target for everyone involved.


> This is looking at raising the bar quite a bit along the way.

+1. I like the idea of an intermediary “emerging” stage to help crystalize
what teams need to do in order to prepare for incubation, and to help
smooth the transition from bootstrapped ---> incubated ---> integrated.

@kgriffs


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [marconi] No team meeting on Monday

2013-11-01 Thread Kurt Griffiths
No meeting on Monday due to the summit.

Cheers,

---
@kgriffs
Kurt Griffiths

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [marconi] Agenda for Monday's meeting

2013-10-25 Thread Kurt Griffiths
The Marconi project team holds a weekly meeting in #openstack-meeting-
alt on Mondays, 1600 UTC.

  http://goo.gl/Li9V4o

The next meeting is Monday, Oct 28. Everyone is welcome, but please
take a minute to review the wiki before attending for the first time:

  http://wiki.openstack.org/marconi

Proposed Agenda:

  * Review actions from last time
  * Updates on sharding
  * Updates on bugs
  * Triage, update blueprints
  * Review API v1 feedback
  * Suggestions for things to talk about in HKG
  * Open discussion (time permitting)

If you have additions to the agenda, please add them to the wiki and
note your IRC name so we can call on you during the meeting:

  http://wiki.openstack.org/Meetings/Marconi

Cheers,

---
@kgriffs
Kurt Griffiths



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [marconi] Sharding feature design

2013-10-24 Thread Kurt Griffiths
Folks,

We’ve been discussing Marconi’s storage sharding architecture over the past 
couple of weeks and I took some time today to draw up our current thinking on 
the design. I’ve also added this link to the blueprint.

http://grab.by/rsoI

I’d like to get the team’s thoughts on this and get the design finalized.

See also: https://blueprints.launchpad.net/marconi/+spec/storage-sharding

Thanks!

---
@kgriffs
Kurt Griffiths
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [marconi] Minutes from today's meeting

2013-10-21 Thread Kurt Griffiths
Folks,

Today the Marconi team held their regularly scheduled meeting in 
#openstack-meeting-alt @ 1600 UTC. We discussed progress on the new storage 
sharding feature which will let Marconi scale to very large deployments, and 
provide a solid foundation for implementing queue "flavors" depending on 
community demand for such a feature.

The team also discussed versioning, and it was determined to target a v1.1 
release of the API for the Icehouse integrated release. We also discussed the 
possibility of using extensions to prototype v2 features in the longer term, 
which would have the nice side-effect of opening up Marconi to vendors for 
customization.

Summary: http://goo.gl/2jxevN
Log: http://goo.gl/QQYrPx

Please join the conversation in #openstack-marconi, and help define the future 
of the OpenStack Queue Service.

---
@kgriffs
Kurt Griffiths

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [marconi] Agenda for next team meeting on Monday at 1600 UTC

2013-10-18 Thread Kurt Griffiths
The Marconi project team holds a weekly meeting in #openstack-meeting-
alt on Mondays, 1600 UTC.

The next meeting is Monday, Oct 21. Everyone is welcome, but please
take a minute to review the wiki before attending for the first time:

  http://wiki.openstack.org/marconi

Proposed Agenda:

  * Review actions from last time
  * Updates on Sharding
  * The future of the proxy
  * API Spec server side
  * API versioning strategy?
  * Updates on bugs
  * Open discussion (time permitting)

If you have additions to the agenda, please add them to the wiki and
note your IRC name so we can call on you during the meeting:

  http://wiki.openstack.org/Meetings/Marconi

Cheers,

---
@kgriffs
Kurt Griffiths



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [marconi] API finalized, graduation progress, and other notes from today's team meeting

2013-10-07 Thread Kurt Griffiths
Hi folks,

Today we finalized the v1 API, and voted to freeze it. This day has been a LONG 
time coming, and I want to personally thank the many people who have provided 
feedback and suggestions over the past 10 months on the many drafts the API has 
gone through.

I understand that several client libraries are already under development, and 
want to thank the developers working on those for their patience as we have 
worked toward stabilizing the API.

The spec can be found here:

  *   https://wiki.openstack.org/wiki/Marconi/specs/api/v1

Notes from the meeting:

  *   
Summary
  *   
Log

During the meeting we also discussed progress on our graduation To Do list. You 
can track our project here:

  *   https://wiki.openstack.org/wiki/Marconi/Incubation/Graduation

Cheers,
Kurt G. (kgriffs)
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [marconi] API Finalization starting in 10 minutes

2013-10-07 Thread Kurt Griffiths
Hi folks,

Sorry for the late notice. During today's regular team meeting we will be 
reviewing and freezing the v1 API. I hope to see you there!

1600 UTC @ #openstack-meeting-alt

Cheers,
Kurt G. (kgriffs)
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


  1   2   >