Re: [openstack-dev] [Magnum] Should Bay/Baymodel name be a required option when creating a Bay/Baymodel

2015-06-04 Thread Eric Windisch
>
>
>>I think this is perfectly fine, as long as it's reasonably large and
>> the algorithm is sufficiently intelligent. The UUID algorithm is good at
>> this, for instance, although it fails at readability. Docker's is not
>> terribly great and could be limiting if you were looking to run several
>> thousand containers on a single machine. Something better than Docker's
>> algorithm but more readable than UUID could be explored.
>>
>>  Also, something to consider is if this should also mean a change to the
>> UUIDs themselves. You could use UUID-5 to create a UUID from your tenant's
>> UUID and your unique name. The tenant's UUID would be the namespace, with
>> the bay's name being the "name" field. The benefit of this is that clients,
>> by knowing their tenant ID could automatically determine their bay ID,
>> while also guaranteeing uniqueness (or as unique as UUID gets, anyway).
>>
>>
>>  Cool idea!
>>
> I'm clear with the solution, but still have some questions: So we need to
> set the bay/baymodel name in the format of UUID-name format? Then if we get
> the tenant ID, we can use "magnum bay-list | grep " or some
> other filter logic to get all the bays belong to the tenant?  By default,
> the "magnum bay-list/baymodel-list" will only show the bay/baymodels for
> one specified tenant.
>

The name would be an arbitrary string, but you would also have a
unique identifier, which is a UUID. I'm proposing the UUID could be
generated using the UUID5 algorithm, which is basically sha1(tenant_id +
unique_name) converted into a GUID. The Python uuid library can do this
easily, out of the box.
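
For illustration, a minimal Python sketch of that derivation (the tenant ID
below is a made-up placeholder):

import uuid

# Placeholder tenant ID, for illustration only.
tenant_id = uuid.UUID('urn:uuid:12345678-1234-5678-1234-567812345678')

# uuid5 is deterministic: the same (namespace, name) pair always yields
# the same UUID, so a client can derive its bay UUID locally.
bay_uuid = uuid.uuid5(tenant_id, 'swarmbay')
print(bay_uuid)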

Taking from the dev-quickstart, I've changed the instructions for creating
a container according to how this could work using uuid5:

$ magnum create-bay --name swarmbay --baymodel testbaymodel
$ BAY_UUID=$(python -c "import uuid; print uuid.uuid5(uuid.UUID('urn:uuid:${TENANT_ID}'), 'swarmbay')")
$ cat > ~/container.json << END
{
"bay_uuid": "$BAY_UUID",
"name": "test-container",
    "image_id": "cirros",
"command": "ping -c 4 8.8.8.8"
}
END
$ magnum container-create < ~/container.json


The key difference in this example, of course, is that users would not need
to contact the server using bay-show in order to obtain the UUID of their
bay.

Regards,
Eric Windisch
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Magnum] Should Bay/Baymodel name be a required option when creating a Bay/Baymodel

2015-06-02 Thread Eric Windisch
On Tue, Jun 2, 2015 at 10:29 PM, Adrian Otto 
wrote:

>  I have reflected on this further and offer this suggestion:
>
>  1) Add a feature to Magnum to auto-generate human readable names, like
> Docker does for un-named containers, and ElasticSearch does for naming
> cluster nodes. Use this feature if no name is specified upon the creation
> of a Bay or Baymodel.
>

For what it's worth, I also believe that requiring manual specification of
names, especially if they must be unique is an anti-pattern.

If auto-generation of human readable names is performed and these must be
unique, mind that you will be accepting a limit on the number of bays that
may be created. I think this is perfectly fine, as long as it's reasonably
large and the algorithm is sufficiently intelligent. The UUID algorithm is
good at this, for instance, although it fails at readability. Docker's is
not terribly great and could be limiting if you were looking to run several
thousand containers on a single machine. Something better than Docker's
algorithm but more readable than UUID could be explored.

Also, something to consider is if this should also mean a change to the
UUIDs themselves. You could use UUID-5 to create a UUID from your tenant's
UUID and your unique name. The tenant's UUID would be the namespace, with
the bay's name being the "name" field. The benefit of this is that clients,
by knowing their tenant ID could automatically determine their bay ID,
while also guaranteeing uniqueness (or as unique as UUID gets, anyway).

Regards,
Eric Windisch
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova-docker] Status update

2015-05-11 Thread Eric Windisch
On Mon, May 11, 2015 at 10:06 AM, Dan Smith  wrote:

> > +1 Agreed nested containers are a thing. Its a great reason to keep
> > our LXC driver.
>
> I don't think that's a reason we should keep our LXC driver, because you
> can still run containers in containers with other things. If anything,
> using a nova vm-like container to run application-like containers inside
> them is going to beg the need to tweak more detailed things on the
> vm-like container to avoid restricting the application one, I think.
>
> IMHO, the reason to keep the seldom-used, not-that-useful LXC driver in
> nova is because it's nearly free. It is the libvirt driver with a few
> conditionals to handle different things when necessary for LXC. The
> docker driver is a whole other nova driver to maintain, with even less
> applicability to being a system container (IMHO).



Magnum is clearly geared toward greenfield development.

The Docker driver's sweet-spot is for the user wishing to replace existing
VMs and Nova orchestration with high-performance containers without having
to rewrite for Magnum, or having to deal with the complexities or
hardware-specific bits of Ironic (plus having a tad bit more security). As
an Ironic alternative, it may have a promising long-term life. As for a
mechanism for providing legacy migrations... the future is less clear.
Greenfield applications will go straight to Magnum or non-OpenStack
solutions while the number of legacy applications to be migrated from
nova-libvirt/xen/vmware to nova-docker is unknowable. However, I do expect
it to be a number likely to swell, then diminish as time goes on. Arguably,
the same could be presumed about VMs, however.

It's also worth noting that LXD is pushing to be a container-like-a-VM
solution, so support for building tools to provide legacy VM to container
migrations must be of interest to somebody.



>
>
> I think this is likely the case and I'd like to avoid getting into this
> situation again. IMHO, this is not our target audience, it's very much
> not free to just put it into the tree because "meh, some people might
> like it instead of the libvirt-lxc driver".
>  - Do we have a team of people willing and able to commit to
>maintaining it in Nova - ie we don't just want it to bitrot
>and have nova cores left to pick up the pieces whenever it
>breaks.
>
>
The two reasons I have preferred this code stay out of tree (until now) have
been the breaking changes we wished to land, and the level of community involvement.
This driver was not the first driver I've been involved with that has had
these problems, and ultimately I had wished development were out of tree.
Having the code out of tree has been very good for nova-docker.

However, I believe that the period of high-frequency changes is over, with
many of the critical goals reached... but that calls into question your
second point, which is the level of continued maintenance. To this, I
cannot give a firm answer, but I will say that the team right now is
vanishingly small, and I do wish it were larger. I give a lot of credit to Dims in particular for
keeping this afloat, but this effort needs more contributors if it is to
stay alive. As for myself, for the record, I am seldom involved at this
point, but do contribute some occasional time into reviews or the odd patch
in my free time.

I'll finish by saying that I do think it's finally time to consider pulling
it back in. While doing so may not attract contributors, I know being in
stackforge has certainly been a deterrent to both potential contributors and
users.

Regards,
Eric Windisch
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo.messaging][zeromq] 'Subgroup' for broker-less ZeroMQ driver

2015-03-24 Thread Eric Windisch
From my experience, making fast-moving changes is far easier when code is
split out. Changes occur too slowly when integrated.

I'd be +1 on splitting the code out. I expect you will get more done this
way.

Regards,
Eric Windisch
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][messaging][zmq] Discussion on zmq driver design issues

2015-03-05 Thread Eric Windisch
On Wed, Mar 4, 2015 at 12:10 PM, ozamiatin  wrote:

> Hi,
>
> By this e-mail I'd like to start a discussion about current zmq driver
> internal design problems I've found out.
> I wish to collect here all proposals and known issues. I hope this
> discussion will be continued on Liberty design summit.
> And hope it will drive our further zmq driver development efforts.
>
> ZMQ Driver issues list (I address all issues with # and references are in
> []):
>
> 1. ZMQContext per socket (blocker is neutron improper usage of messaging
> via fork) [3]
> 2. Too many different contexts.
> We have InternalContext used for ZmqProxy, RPCContext used in
> ZmqReactor, and ZmqListener.
> There is also zmq.Context which is zmq API entity. We need to consider
> a possibility to unify their usage over inheritance (maybe stick to
> RPCContext)
> or to hide them as internal entities in their modules (see refactoring
> #6)
>

The code, when I abandoned it, was moving toward fixing these issues, but
for backwards compatibility was doing so in a staged fashion across the
stable releases.

I agree it's pretty bad. Fixing this now, with the driver in a less stable
state should be easier, as maintaining compatibility is of less importance.



> 3. Topic related code everywhere. We have no topic entity. It is all
> string operations.
> We need some topics management entity and topic itself as an entity
> (not a string).
> It causes issues like [4], [5]. (I'm already working on it).
> There was a spec related [7].
>

Good! It's ugly. I had proposed a patch at one point, but I believe the
decision was that it was better and cleaner to move toward the
oslo.messaging abstraction as we solve the topic issue. Now that
oslo.messaging exists, I agree it's well past time to fix this particular
ugliness.


> 4. Manual implementation of messaging patterns.
>Now we can observe poor usage of zmq features in zmq driver. Almost
> everything is implemented over PUSH/PULL.
>
> 4.1 Manual polling - use zmq.Poller (listening and replying for
> multiple sockets)
> 4.2 Manual request/reply implementation for call [1].
> Using of REQ/REP (ROUTER/DEALER) socket solves many issues. A lot
> of code may be reduced.
> 4.3 Timeouts waiting
>

There are very specific reasons for the use of PUSH/PULL. I'm firmly of the
belief that it's the only viable solution for an OpenStack RPC driver. This
has to do with how asynchronous programming in Python is performed, with
how edge-triggered versus level-triggered events are processed, and general
state management for REQ/REP sockets.

I could be proven wrong, but I burned quite a bit of time in the beginning
of the ZMQ effort looking at REQ/REP before realizing that PUSH/PULL was
the only reasonable solution. Granted, this was over 3 years ago, so I
would not be too surprised if my assumptions are no longer valid.
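
For anyone unfamiliar with the pattern, here is a toy pyzmq sketch of a
PUSH/PULL pair (illustrative only, not the driver's code); the point is
that neither socket carries request/reply state that can get wedged:

import zmq

ctx = zmq.Context()

# Receiver side: bind a PULL socket and drain whatever arrives.
pull = ctx.socket(zmq.PULL)
pull.bind("tcp://127.0.0.1:5558")

# Sender side: connect a PUSH socket and fire-and-forget.
push = ctx.socket(zmq.PUSH)
push.connect("tcp://127.0.0.1:5558")
push.send(b"cast: do_something")

# Unlike REQ/REP, there is no strict send/recv lockstep to violate, so a
# peer going away does not leave the other socket stuck in a bad state.
print(pull.recv())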



> 5. Add possibility to work without eventlet [2]. #4.1 is also related
> here, we can reuse many of the implemented solutions
>like zmq.Poller over asynchronous sockets in one separate thread
> (instead of spawning on each new socket).
>I will update the spec [2] on that.
>

Great. This was one of the motivations behind oslo.messaging and it would
be great to see this come to fruition.


> 6. Put all zmq driver related stuff (matchmakers, most classes from
> zmq_impl) into a separate package.
>Don't keep all classes (ZmqClient, ZmqProxy, Topics management,
> ZmqListener, ZmqSocket, ZmqReactor)
>in one impl_zmq.py module.
>

Seems fine. In fact, I think a lot of code could be shared with an AMQP v1
driver...


> 7. Need more technical documentation on the driver like [6].
>I'm willing to prepare a current driver architecture overview with some
> graphics UML charts, and to continue discuss the driver architecture.
>

Documentation has always been a sore point. +2

-- 
Regards,
Eric Windisch
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][security][rootwrap] Proposal to replace rootwrap/sudo with privsep helper process (for neutron, but others too)

2015-02-12 Thread Eric Windisch

>
> from neutron.agent.privileged.commands import ip_lib as priv_ip
>
> def foo():
>     # Need to create a new veth interface pair - that usually requires
>     # root/NET_ADMIN
>     priv_ip.CreateLink('veth', 'veth0', peer='veth1')
>
> Because we now have elevated privileges directly (on the privileged daemon
> side) without having to shell out through sudo, we can do all sorts of
> nicer things like just using netlink directly to configure networking.
> This avoids the overhead of executing subcommands, the ugliness (and
> danger) of generating command lines and regex parsing output, and make us
> less reliant on specific versions of command line tools (since the kernel
> API should be very stable).
>

One of the advantages of spawning a new process is being able to use flags
to clone(2) and to set capabilities. This basically means to create
containers, by some definition. Anything you have in a "privileged daemon"
or privileged process ideally should reduce its privilege set for any
operation it performs. That might mean it clones itself and executes
Python, or it may execvp an executable, but either way, the new process
would have less-than-full-privilege.

For instance, writing a file might require root access, but does not need
the ability to load kernel modules. Changing network interfaces does not
need access to the filesystem, any more than changes to the filesystem need
access to the network. The capabilities and namespaces mechanisms resolve
these security conundrums and simplify applying the principle of least privilege.
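
As a rough sketch of the pattern (simplified: a real helper would
manipulate capability sets or clone(2) flags rather than just switching
uid/gid, and the uid/gid values below are assumptions for illustration):

import os

def run_with_reduced_privilege(func, uid=65534, gid=65534):
    """Fork and run func() in a child that sheds privilege first."""
    pid = os.fork()
    if pid == 0:
        # Child: give up privilege before touching anything, then run
        # the single operation and exit without returning to the caller.
        try:
            os.setgid(gid)
            os.setuid(uid)
            func()
            os._exit(0)
        except Exception:
            os._exit(1)
    # Parent: wait for the child and report whether it succeeded.
    _, status = os.waitpid(pid, 0)
    return os.WEXITSTATUS(status) == 0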

Regards,
Eric Windisch
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Magnum] Scheduling for Magnum

2015-02-07 Thread Eric Windisch
>
>
> 1) Cherry pick scheduler code from Nova, which already has a working a
> filter scheduler design.
> 2) Integrate swarmd to leverage its scheduler[2].


I see #2 as not an alternative but possibly an "also". Swarm uses the
Docker API, although they're only about 75% compatible at the moment.
Ideally, the Docker backend would work with both single docker hosts and
clusters of Docker machines powered by Swarm. It would be nice, however, if
scheduler hints could be passed from Magnum to Swarm.

Regards,
Eric Windisch
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Announcing Magnum's First Release

2015-01-20 Thread Eric Windisch
On Tue, Jan 20, 2015 at 11:48 PM, Adrian Otto 
wrote:

> Hello,
>
> The Magnum community is pleased to announce the first release of Magnum
> available now for download from:
> https://github.com/stackforge/magnum/releases/tag/m1


Congratulations to you and everyone else that made this possible!

Regards,
Eric Windisch
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][docker][containers][qa] nova-docker CI failing a lot on unrelated nova patches

2014-12-09 Thread Eric Windisch
>
>
> While gating on nova-docker will prevent patches that cause nova-docker to
> break 100% to land, it won't do a lot to prevent transient failures. To fix
> those we need people dedicated to making sure nova-docker is working.
>
>

What would be helpful for me is a way to know that our tests are breaking
without manually checking Kibana, such as an email.

Regards,
Eric Windisch
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Zero MQ remove central broker. Architecture change.

2014-11-18 Thread Eric Windisch
>
> I think for this cycle we really do need to focus on consolidating and
> testing the existing driver design and fixing up the biggest
> deficiency (1) before we consider moving forward with lots of new


+1


> 1) Outbound messaging connection re-use - right now every outbound
> messaging creates and consumes a tcp connection - this approach scales
> badly when neutron does large fanout casts.
>


I'm glad you are looking at this and by doing so, will understand the
system better. I hope the following will give some insight into, at least,
why I made the decisions I made:

This was an intentional design trade-off. I saw three choices here: build a
fully decentralized solution, build a fully-connected network, or use
centralized brokerage. I wrote off centralized brokerage immediately. The
problem with a fully connected system is that active TCP connections are
required between all of the nodes. I didn't think that would scale and
would be brittle against floods (intentional or otherwise).

IMHO, I always felt the right solution for large fanout casts was to use
multicast. When the driver was written, Neutron didn't exist and there was
no use-case for large fanout casts, so I didn't implement multicast, but
knew it as an option if it became necessary. It isn't the right solution
for everyone, of course.

For connection reuse, you could manage a pool of connections and keep those
connections around for a configurable amount of time, after which they'd
expire and be re-opened. This would keep the most actively used connections
alive. One problem is that it would make the service more brittle by making
it far more susceptible to running out of file descriptors by keeping
connections around significantly longer. However, this wouldn't be as
brittle as fully-connecting the nodes nor as poorly scalable.

If OpenStack and oslo.messaging were designed specifically around this
message pattern, I might suggest that the library and its applications be
aware of high-traffic topics and persist the connections for those topics,
while keeping others ephemeral. A good example for Nova would be that
api->scheduler traffic would be persistent, whereas scheduler->compute_node
traffic would be ephemeral. Perhaps this is something that could still be
added to the library.
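
To make that concrete, a hedged sketch of such a pool (names and structure
are mine, not oslo.messaging's):

import time

class ExpiringConnectionPool(object):
    """Cache outbound connections per target, expiring idle ones.

    Hot paths (e.g. api -> scheduler) keep their connection warm, while
    rarely-used targets age out so file descriptors do not accumulate.
    """

    def __init__(self, connect, ttl=60.0):
        self._connect = connect   # callable: target -> connection
        self._ttl = ttl
        self._pool = {}           # target -> (connection, last_used)

    def get(self, target):
        now = time.time()
        conn, last_used = self._pool.get(target, (None, 0))
        if conn is None or now - last_used > self._ttl:
            if conn is not None:
                conn.close()
            conn = self._connect(target)
        self._pool[target] = (conn, now)
        return conn

    def reap(self):
        # Close and forget connections idle longer than the TTL.
        now = time.time()
        for target, (conn, last_used) in list(self._pool.items()):
            if now - last_used > self._ttl:
                conn.close()
                del self._pool[target]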

> 2) PUSH/PULL tcp sockets - Pieter suggested we look at ROUTER/DEALER
> as an option once 1) is resolved - this socket type pairing has some
> interesting features which would help with resilience and availability
> including heartbeating.


Using PUSH/PULL does not eliminate the possibility of being fully
connected, nor is it incompatible with persistent connections. If you're
not going to be fully-connected, there isn't much advantage to long-lived
persistent connections and without those persistent connections, you're not
benefitting from features such as heartbeating.

I'm not saying ROUTER/DEALER cannot be used, but use them with care.
They're designed for long-lived channels between hosts and not for the
ephemeral-type connections used in a peer-to-peer system. Dealing with how
to manage timeouts on the client and the server and the swelling number of
active file descriptors that you'll get by using ROUTER/DEALER is not
trivial, assuming you can get past the management of all of those
synchronous sockets (hidden away by tons of eventlet greenthreads)...

Extra anecdote: During a conversation at the OpenStack summit, someone told
me about their experiences using ZeroMQ and the pain of using REQ/REP
sockets and how they felt it was a mistake they used them. We discussed a
bit about some other problems such as the fact it's impossible to avoid TCP
fragmentation unless you force all frames to 552 bytes or have a
well-managed network where you know the MTUs of all the devices you'll pass
through. Suggestions were made to make ZeroMQ better, until we realized we
had just described TCP-over-ZeroMQ-over-TCP, finished our beers, and
quickly changed topics.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo.messaging] ZeroMQ driver maintenance next steps

2014-11-17 Thread Eric Windisch
On Mon, Nov 17, 2014 at 3:33 PM, Joshua Harlow  wrote:

> It should already be running.
>
> Tooz has been testing with it[1]. Whats running in ubuntu is an older
> redis though so don't expect some of the new > 2.2.0 features to work until
> the ubuntu version is pushed out to all projects.


The redis (soft) requirement for the ZeroMQ driver shouldn't require a
newer version at all.

Also, since I have a platform, I'll note that the redis "matchmaker" driver
is just a reference implementation I tossed together in a day.  It's
convenient because it eliminates the need for a static configuration,
making tempest tests much easier to run and generally easier for anyone to
deploy, but it's intended to be an example of hooking into an inventory
service, not necessarily the de facto solution.
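
For the curious, the essential operation is just a topic-to-hosts lookup; a
toy version against Redis might look like this (not the actual driver code):

import redis

class TinyMatchMaker(object):
    """Toy matchmaker: map a topic to the set of hosts consuming it."""

    def __init__(self, host='127.0.0.1', port=6379):
        self._redis = redis.StrictRedis(host=host, port=port)

    def register(self, topic, host):
        # Hosts advertise themselves under the topic's set.
        self._redis.sadd(topic, host)

    def unregister(self, topic, host):
        self._redis.srem(topic, host)

    def lookup(self, topic):
        # A sender asks which hosts it should address for this topic.
        return [h.decode() for h in self._redis.smembers(topic)]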


-- 
Regards,
Eric Windisch
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo.messaging] ZeroMQ driver maintenance next steps

2014-11-17 Thread Eric Windisch
On Mon, Nov 17, 2014 at 8:43 AM, Denis Makogon 
wrote:

> Good day, Stackers.
>
>
> During Paris Design summit oslo.messaging session was raised good question
> about maintaining ZeroMQ driver in upstream (see section “dropping ZeroMQ
> support in oslo.messaging” at [1]) . As we all know, good thoughts are
> coming always after. I’d like to propose several improvements in process
> of maintaining and developing of ZeroMQ driver in upstream.
>
>
>
I'm glad to see the community looking to revive this driver. What I think
could be valuable if there are enough developers is a sub-team as is done
with Nova. That doesn't mean to splinter the community, but to provide a
focal point for interested developers to interact.

I agree with the idea that this should be tested via Tempest. It's easy
enough to mask off the failing tests and enable more tests as either the
driver itself improves, or support in consuming projects and/or
oslo.messaging itself improves. I'd suggest that effort is better spent
there than building new bespoke tests.

Thanks and good luck! :)

Regards,
Eric Windisch
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Zero MQ remove central broker. Architecture change.

2014-11-17 Thread Eric Windisch
On Mon, Nov 17, 2014 at 5:44 AM, Ilya Pekelny  wrote:

> We want to discuss opportunity of implementation of the p-2-p messaging
> model in oslo.messaging for ZeroMQ driver. Actual architecture
> uses uncharacteristic single broker architecture model. In this way we are
> ignoring the key 0MQ ideas. Lets describe our message in quotes from ZeroMQ
> documentation:
>
>
The oslo.messaging driver is not using a single broker. It is designed for
a distributed broker model where each host runs a broker. I'm not sure
where the confusion comes from that implies this is a single-broker model?

All of the points you make around negotiation and security are new concepts
introduced after the initial design and implementation of the ZeroMQ
driver. It certainly makes sense to investigate what new features are
available in ZeroMQ (such as CurveCP) and to see how they might be
leveraged.

That said, quite a bit of trial-and-error and research went into deciding
to use an opposing PUSH-PULL mechanism instead of REQ/REP. Most notably,
it's much easier to make PUSH/PULL reliable than REQ/REP.


> From the current code docstring:
> ZmqBaseReactor(ConsumerBase):
> """A consumer class implementing a centralized casting broker
> (PULL-PUSH).
>
> This approach is pretty unusual for ZeroMQ. Fortunately we have a bit of
> raw developments around the problem. These changes can introduce
> performance improvement. But to proof it we need to implement all new
> features, at least at WIP status. So, I need to be sure that the community
> doesn't avoid such of improvements.
>

Again, the design implemented expects a broker running per machine (the
zmq-receiver process). Each machine might have multiple workers all pulling
messages from queues. Initially, the driver was designed such that each
topic was mapped to its own ip:port, but this was not friendly to having
arbitrary consumers of the library and required a port mapping file be
distributed with the application. Plus, it's valid to have multiple
consumers of a topic on a given host, something that is only possible with
a distributed broker.
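
To make the per-host broker shape concrete, here is a stripped-down sketch
of the receiver's job (not the real zmq-receiver code; the framing and the
ipc:// path are assumptions): accept everything arriving on one well-known
TCP port and fan it out to local consumers over per-topic IPC sockets.

import zmq

def tiny_receiver(bind_addr="tcp://*:9501"):
    # Assumes the first frame of each message is the topic name and that
    # local workers connect a PULL socket to ipc:///tmp/zmq-<topic>.
    ctx = zmq.Context()
    frontend = ctx.socket(zmq.PULL)
    frontend.bind(bind_addr)

    outputs = {}   # topic -> PUSH socket toward local workers

    while True:
        frames = frontend.recv_multipart()
        topic = frames[0].decode()
        if topic not in outputs:
            sock = ctx.socket(zmq.PUSH)
            sock.bind("ipc:///tmp/zmq-%s" % topic)
            outputs[topic] = sock
        # Forward the payload; whichever local worker PULLs on this
        # topic's IPC socket gets it, which is what allows multiple
        # consumers of a topic on one host.
        outputs[topic].send_multipart(frames[1:] or [b""])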

As I left the driver, long review queues prevented me from merging a pile
of changes to improve performance and increase reliability. I believe the
architecture is still sound, even if much of the code itself is bad. What
this driver needs is major cleanup, refactoring, and better testing.

Regards,
Eric Windisch
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] zeromq work for kilo

2014-09-18 Thread Eric Windisch
>
>
> That’s great feedback, Eric, thank you. I know some of the other projects
> are moving drivers out of the main core tree, and we could certainly
> consider doing that here as well, if we have teams willing to sign up for
> the work it means doing.
>
> In addition to the zmq driver, we have a fairly stable rabbit driver, a
> qpid driver whose quality I don’t know , and a new experimental AMQP 1.0
> driver. Are we talking about moving those out, too, or just zmq because we
> were already considering removing it entirely?
>

I believe it makes good sense for all drivers, in the long term. However,
the most immediate benefits would be in offloading any drivers that need
substantial work or improvements, aka velocity. That would mean the AMQP
and ZeroMQ drivers.

With the Nova drivers, what's useful is that we have tempest and we can use
that as an integration gate. I suppose that's technically possible with
oslo.messaging and its drivers as well, although I prefer to see a
separation of concerns were I presume there are messaging patterns you want
to validate that aren't exercised by Tempest.

Another thing I'll note is that before pulling Ironic in, Nova had an API
contract test. This can be useful for making sure that changes in the
upstream project don't break drivers, or that breakages could at least
invoke action by the driver team:
https://github.com/openstack/nova/blob/4ce3f55d169290015063131134f93fca236807ed/nova/tests/virt/test_ironic_api_contracts.py

-- 
Regards,
Eric Windisch
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] zeromq work for kilo

2014-09-18 Thread Eric Windisch
On Thu, Sep 18, 2014 at 3:55 PM, Doug Hellmann 
wrote:

>
> On Sep 18, 2014, at 10:16 AM, Kapil Thangavelu <
> kapil.thangav...@canonical.com> wrote:
>
> >
> >
> > On Thu, Sep 18, 2014 at 4:18 AM, Flavio Percoco 
> wrote:
> > On 09/17/2014 04:34 PM, Doug Hellmann wrote:
> > > This thread [1] has turned more “future focused", so I’m moving the
> conversation to the -dev list where we usually have those sorts of
> discussions.
> > >
> > > [1]
> http://lists.openstack.org/pipermail/openstack/2014-September/009253.html
> > >
> >
> >
> > I saw this mentioned in the `openstack` thread but I'd like us to
> > reconsider it.
> >
> > Since we've gone through the "hey please don't deprecate it, I'll help"
> > process a couple of times with the zmq driver, I'm thinking we should
> > pull it out of the code base anyway.
> >
> > I think the primary issue has been two fold, one is the lack of
> reviews/merges on extant patches that fix critical items that have been
> outstanding for months. I think that is compounded by the other issue which
> was the lack of tests. We've sprinted last week on adding in both unit
> tests, the extant patches and functionally verifying them by automating
> cloud deployments with zmq for the messaging driver. We're also committing
> as a company to supporting it on an ongoing basis. If we get the gates
> going for kilo, i don't see any reason for the churn below, if the gates
> don't get going we can yank to external in kilo anyway.
>
> I imagine that, as with drivers in other projects, part of the issue here
> is that there are not enough oslo.messaging core reviewers comfortable with
> zmq to feel confident reviewing the driver. Flavio’s proposal also has the
> benefit of addressing this, since some of the currently interested parties
> could be given core reviewer privileges on an oslo.messaging.zmq repository.
>
> For now, I’m in favor of keeping an eye on the current interest and seeing
> how things progress before making a final decision on whether to delete the
> driver, move it to its own repository, or keep it where it is. I’d like to
> hear from some of the rest of oslo-core and oslo-messaging-core about their
> opinions, too, though — it isn’t my call alone.
>

While I'm no longer involved and quite unlikely to again become involved, I
will say that I'm +1 on maintaining it separately.

At the time we merged the ZeroMQ driver, it was suggested that we keep it
out and maintain it outside the tree. I lobbied hard to get it in, and it
was... but it was at great cost. The review process is highly detrimental
to fast-moving code bases and imposes a severe handicap to gaining maturity
and full API compliance, etc. Once the ZeroMQ code went into Nova and
subsequently Oslo, it was incredibly difficult to balance the need to
increment and improve the code and improve testing while also managing
reviews.

The reason I lobbied hard to get the code in was because it couldn't really
be tested much otherwise. We couldn't keep track of changes in Nova and the
needs of OpenStack. Ultimately, as the project evolved and introduced
changes to messaging for the needs of projects such as Ceilometer, what was
once an advantage became a disadvantage, when coupled with the long review
times.

However, the barrier to maintaining code out of the tree is lower than
ever. I currently maintain the Docker driver out of Nova's tree and it's
doing fine. Well enough that I no longer want it to go back into Nova's
tree. While I'd like it to become part of Nova, I'd like to maintain a
separate or superset of core reviewers so as not to impede development and
maintenance; without changes to Gerrit, this means using a separate git
repository -- and that's okay.

The only reason I see moving the Docker code back into Nova would be
political, not based on the merit of a technical approach or ease and cost
of maintenance. In the last year, I've become very aware of the financial
requirements that the OpenStack community has unwittingly imposed on its
contributing members and I really wish, as much as possible, to roll this
back and reduce the cost of contributing. Breaking code out, while
accepting it may still be "valid" and "included" (if not core), is a big
step for OpenStack in reducing that cost. Obviously, all I've just said
could be applied to the ZeroMQ driver as well as it applies to Docker.

The OpenStack CI system is now advanced and mature enough that breaking
ZeroMQ out into a stackforge repo and creating dependencies between the
projects and setting up testing will be, in my opinion, better for any new
maintainers and users of this driver.

Regards,
Eric Windisch
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Averting the Nova crisis by splitting out virt drivers

2014-09-05 Thread Eric Windisch
>
>
>  - Each virt driver project gets its own core team and is responsible
>for dealing with review, merge & release of their codebase.
>
> Note, I really do mean *all* virt drivers should be separate. I do
> not want to see some virt drivers split out and others remain in tree
> because I feel that signifies that the out of tree ones are second
> class citizens.


+1. I made this same proposal to Michael during the mid-cycle. However, I
haven't wanted to conflate this issue with bringing Docker back into Nova.
For the Docker driver in particular, I feel that being able to stay out of
tree and having our own core team would be beneficial, but  I wouldn't want
to do this unless it applied equally to all drivers.

-- 
Regards,
Eric Windisch
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat][Docker] How to Dockerize your applications with OpenStack Heat in simple steps

2014-08-27 Thread Eric Windisch
On Tue, Aug 26, 2014 at 3:35 PM, Martinx - ジェームズ 
wrote:

> Hey Stackers! Wait!   =)
>
> Let me ask something...
>
> Why are you guys using Docker within a VM?!?! What is the point of doing
> such thing?!
>
> I thought Docker was here to entirely replace the virtualization layer,
> bringing a "bare metal-cloud", am I right?!
>
>
>
Although this is getting somewhat off-topic, it's something the
containers service seeks to support... so perhaps it's a worthy discussion.
It's also a surprisingly common sentiment, so I'd like to address it:

The advantages of Docker are not simply as a lightweight alternative to
virtualization, but to provide portability, transport, and process-level
isolation across hosts (physical, or not). All of those advantages are seen
with virtualization just as well as without. Ostensibly, those that seek to
use Docker to replace virtualization are those that never needed
virtualization in the first place, but needed better systems management
tools. That's fine, because Docker seeks to be that better management tool.

Still, there are plenty of valid reasons for virtualization, including, but not
least, the need for multi-tenant isolation, where using virtual or physical
machine boundaries to provide isolation between tenants is highly
advisable. The combined use  of Docker with VMs is an important part of the
Docker security story.

Years ago, I had founded an IaaS service. We had been running a PaaS-like
service and had been trying to move customers to IaaS. We were offloading
the problem of maintaining various application stacks. We had just gone
through the MVC framework hype-cycle and were tired of trying to "pick
winners" to provide support for. Instead, we wanted to simply provide the
hardware, the architecture, and let customers run their own software. It
was great in practice, but users didn't want to do Ops. They wanted to do
Dev. A very small minority ran Puppet or CFEngine. Ultimately, we found
there was a large gap between users that knew what to do with a server and
those that knew how to build applications. What we needed then was Docker.

Providing hardware, physical or virtual, isn't enough. A barrier to entry
exists. DevOps works for some, but it's a culture and one that requires
tooling; tooling which often wedges the divide that DevOps seeks to bridge.
That might be fine for a San Francisco infrastructure startup or the
mega-corp, but it's not fine for the sort of users that go to Heroku or
Engineyard. As an industry, we cannot tell new Django developers they must
also learn Chef if they wish to deploy their applications to the cloud. We
also shouldn't teach them to build to some specific, proprietary PaaS.
Lowering the barrier to entry and leveling the field helps everyone, even
those that have always paid the previous price of admission.

What I'm really saying is that much of the value that Docker adds to the
ecosystem has little to do with performance, and performance is the primary
reason for moving away from virtualization. Deciding to trade security for
performance is a decision users might wish to make, but that's only
indicative of the flexibility that Docker offers, not a requirement.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack][Docker] Run OpenStack Service in Docker Container

2014-08-18 Thread Eric Windisch
>
>
>> On Mon, Aug 18, 2014 at 8:49 AM, Jyoti Ranjan  wrote:
>>
>>> I believe that everything can not go as a dock container. For e.g.
>>>
>>> 1. compute nodes
>>> 2. baremetal provisioning
>>> 3. L3 router etc
>>>
>>
>> Containers are a good solution for all of the above, for some value of
>> container. There is some terminology overloading here, however.
>>
>
> Hi Eric, one more question, not quite understand what you mean for
> "Containers are a good solution for all of the above", you mean docker
> container can manage all of three above? How? Can you please show more
> details? Thanks!
>

I'm not sure this is the right forum for a nuanced explanation of every
use-case and every available option, but I can give some examples. Keep in
mind, again, that even in absence of security constraints offered by
Docker, that Docker provides imaging facilities and server management
solutions that are highly useful. For instance, there are use-cases of
Docker that might leverage it simply for attestation or runtime artifact
management.

First, in the case of an L3 router or baremetal provisioning where host
networking is required, one might specify 'docker run -net host' to allow
the process(es) running inside of the container to operate
as if running on the host, but only as it pertains to networking.
Essentially, it would "uncontain" the networking aspect of the process(es).

As of Docker 1.2, to be released this week, one may also specify "docker
run --cap-add" to provide granular control of the addition of Linux
capabilities that might be needed by processes (see
http://linux.die.net/man/7/capabilities). This allows granular loosening of
restrictions which might allow container breakout, without fully opening
the gates.  From a security perspective, I'd rather provide some
restrictions than none at all.

On compute nodes, it should be possible to run qemu/kvm inside of a
container. The nova-compute program does many things on a host and it may
be difficult to provide a simplified set of restrictions for it without
running a privileged container (or one with many --cap-add statements,
--net host, etc). Again, while containment might be minimized, the
deployment facilities of Docker are still very useful.  That said, all of
the really "interesting" things done by Nova that require privileges are
done by rootwrap... a rootwrap which leveraged Docker would make
containerization of Nova more meaningful and would be a boon for Nova
security overall.

-- 
Regards,
Eric Windisch
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack][Docker] Run OpenStack Service in Docker Container

2014-08-18 Thread Eric Windisch
On Mon, Aug 18, 2014 at 8:49 AM, Jyoti Ranjan  wrote:

> I believe that everything can not go as a dock container. For e.g.
>
> 1. compute nodes
> 2. baremetal provisioning
> 3. L3 router etc
>

Containers are a good solution for all of the above, for some value of
container. There is some terminology overloading here, however.

There are Linux namespaces, capability sets, and cgroups which may not be
appropriate for using around some workloads. These, however, are granular.
For instance, one may run a container without networking namespaces,
allowing the container to directly manipulate host networking. Such a
container would still see nothing outside its own chrooted filesystem, PID
namespace, etc.

Docker in particular offers a number of useful features around filesystem
management, images, etc. These features make it easier to deploy and manage
systems, even if many of the "Linux containers" features are disabled for
one reason or another.

-- 
Regards,
Eric Windisch
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack][Docker] Run OpenStack Service in Docker Container

2014-08-18 Thread Eric Windisch
On Mon, Aug 18, 2014 at 1:42 PM, Adrian Otto 
wrote:

>  If you want to run OpenStack services in Docker, I suggest having a look
> at Dockenstack:
>
>  https://github.com/ewindisch/dockenstack
>
>
Note, this is for simplifying and speeding-up the use of devstack. It
provides an environment similar to openstack-infra that can consistently
and reliably run on one's laptop, while bringing a devstack-managed
OpenStack installation online in 5-8 minutes.

Like other devstack-based installs, this is not for running production
OpenStack deployments.

-- 
Regards,
Eric Windisch
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Enabling silent Docker tests for Nova?

2014-08-15 Thread Eric Windisch
>
> Given resource concerns, maybe just adding it to the experimental
> pipeline would be sufficient?

For clarity, the discussed patch is to promote an existing experimental job
to silent.

Regards -Eric
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Enabling silent Docker tests for Nova?

2014-08-15 Thread Eric Windisch
I have proposed a _silent_ check for Nova for integration of the Docker
driver:

https://review.openstack.org/#/c/114547/

It has been established that this code cannot move back into Nova until the
tests are running and have a solid history of success. That cannot happen
unless we're allowed to run the tests. Running a silent check on changes to
Nova is the first step in establishing that history.

Joe Gordon suggests we need a spec to bring the driver back into Nova.
Besides the fact that specs are closed and there is no intention of
reintegrating the driver for Juno, I'm uncertain about proposing a spec
without first having a solid history of successful testing, especially given
the historical context of this driver's relationship with Nova.

If we could enable silent checks, we could help minimize API skew and
branch breakages, improving driver quality and reducing maintenance while
we prepare for the Kilo spec + merge windows. Furthermore, by having a
history of testing, we can seek faster inclusion into Kilo.

Finally, I acknowledge that we may be entering a window of significant load
on the CI servers and I'm sensitive to the needs of the infrastructure team
to remain both focused and to conserve precious compute resources. If this
is an issue, then I'd like to plot a timeline, however rough, with the
infrastructure team.

-- 
Regards,
Eric Windisch
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] OS or os are not acronyms for OpenStack

2014-08-15 Thread Eric Windisch
>
>
> "No alterations: When using OpenStack Marks, you shall never vary the
> spelling, hyphenation or spacing of the any portion of the marks.
>
> Examples of Improper Display of an OpenStack Mark: Open-Stack; Open
> Stack; OS Stack Examples of Proper Display of an OpenStack Mark:
> OpenStack; OPENSTACK"
>

When used in marketing, sales, and other business contexts, I obviously
agree. This is proper use and license of a trademark.


> While this comes from the OpenStack Trademark Policy, I think it is
> important to remember this information and to implement it in our daily
> use. I have had to change at least one wikipage so far, it is far easier
> if folks simply employ the correct usage from the beginning.
>

On wiki pages and other published medium -- absolutely.

For our daily use in IRC and other casual discussion? Lets not get pedantic.

-- 
Regards,
Eric Windisch
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] heat docker multi host scheduling support

2014-08-15 Thread Eric Windisch
On Thu, Aug 14, 2014 at 3:21 AM, Malawade, Abhijeet <
abhijeet.malaw...@nttdata.com> wrote:

>  Hi all,
>
>
>
> I am trying to use heat to create docker containers. I have configured
> heat-docker plugin.
>
> I am also able to create stack using heat successfully.
>
>
>
> To start container on different host we need to provide 'docker_endpoint'
> in template. For this we have to provide host address where container will
> run in template.
>
>
>
> Is there any way to schedule docker container on available hosts using
> heat-docker plugin without giving 'docker_endpoint' in template file.
>
> Is heat-docker plugin supports managing docker hosts cluster with
> scheduling logic.
>
>
>
> Please let me know your suggestions on the same.
>
>
>
Zane's responses were correct. Effort is underway, planning the development
of a containers service. The architecture of this service actually looks
quite a bit like the Docker+Heat+Nova story, but simplified through a
Nova-like API.  The spec for this is in progress:
https://review.openstack.org/#/c/114044/

As for scheduling with Docker+Heat, one option is to use swarmd/libswarm.
Unfortunately, this is an early-stage project, so it isn't quite an
out-of-the-box solution; it's not "production-ready", but you're welcome to
investigate it... The usage model with Heat would be to spawn your Nova
instances not only with Docker, but with a connector to a libswarm server
(swarmd). That swarmd process would need to be running and listening
somewhere.  You'd need to load (and possibly write) plugins for
libswarm/swarmd to provide your scheduling. Your docker_endpoint would be
the IP/port of swarmd, and it would proxy those requests to the correct
backend Docker hosts.

-- 
Regards,
Eric Windisch
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack][Docker][HEAT] Cloud-init and docker container

2014-08-12 Thread Eric Windisch
On Tue, Aug 12, 2014 at 5:53 AM, Jay Lau  wrote:

> I did not have the environment set up now, but by reviewing code, I think
> that the logic should be as following:
> 1) When using nova docker driver, we can use cloud-init or/and CMD in
> docker images to run post install scripts.
> myapp:
>   Type: OS::Nova::Server
>   Properties:
>     flavor: m1.small
>     image: my-app:latest  <<<<< docker image
>     user-data:  <<<<<<<<<<<
>
> 2) When using heat docker driver, we can only use CMD in docker image or
> heat template to run post install scripts.
> wordpress:
>   type: DockerInc::Docker::Container
>   depends_on: [database]
>   properties:
>     image: wordpress
>     links:
>       db: mysql
>     port_bindings:
>       80/tcp: [{"HostPort": "80"}]
>     docker_endpoint:
>       str_replace:
>         template: http://host:2345/
>         params:
>           host: {get_attr: [docker_host, networks, private, 0]}
>     cmd: "/bin/bash" <<<<<<<
>


I can confirm this is correct for both use-cases. Currently, using Nova,
one may only specify the CMD in the image itself, or as glance metadata.
The cloud metadata service should be accessible and usable from Docker.

The Heat plugin allows setting the CMD as a resource property. The
user-data is only passed to the instance that runs Docker, not the
containers. Configuring the CMD and/or environment variables for the
container is the correct approach.

-- 
Regards,
Eric Windisch
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] fair standards for all hypervisor drivers

2014-07-16 Thread Eric Windisch
On Wed, Jul 16, 2014 at 12:55 PM, Roman Bogorodskiy <
rbogorods...@mirantis.com> wrote:

>   Eric Windisch wrote:
>
> > This thread highlights more deeply the problems for the FreeBSD folks.
> > First, I still disagree with the recommendation that they contribute to
> > libvirt. It's a classic example of creating two or more problems from
> one.
> > Once they have support in libvirt, how long before their code is in a
> > version of libvirt acceptable to Nova? When they hit edge-cases or bugs,
> > requiring changes in libvirt, how long before those fixes are accepted by
> > Nova?
>
> Could you please elaborate why you disagree on the contributing patches
> to libvirt approach and what the alternative approach do you propose?
>

I don't necessarily disagree with contributing patches to libvirt. I
believe that the current system makes it difficult to perform quick,
iterative development. I wish to see this thread attempt to solve that
problem and reduce the barrier to getting stuff done.


> Also, could you please elaborate on what is 'version of libvirt
> acceptable to Nova'? Cannot we just say that e.g. Nova requires libvirt
> X.Y to be deployed on FreeBSD?
>

This is precisely my point, that we need to support different versions of
libvirt and to test those versions. If we're going to support  different
versions of libvirt on FreeBSD, Ubuntu, and RedHat - those should be
tested, possibly as third-party options.

The primary testing path for libvirt upstream should be with the latest
stable release with a non-voting test against trunk. There might be value
in testing against a development snapshot as well, where we know there are
features we want in an unreleased version of libvirt but where we cannot
trust trunk to be stable enough for gate.


> Anyway, speaking about FreeBSD support I assume we actually talking
> about Bhyve support. I think it'd be good to break the task and
> implement FreeBSD support for libvirt/Qemu first


I believe Sean was referring to Bhyve support; that is how I interpreted
it.


-- 
Regards,
Eric Windisch
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] fair standards for all hypervisor drivers

2014-07-16 Thread Eric Windisch
On Wed, Jul 16, 2014 at 10:15 AM, Sean Dague  wrote:

> Recently the main gate updated from Ubuntu 12.04 to 14.04, and in doing
> so we started executing the livesnapshot code in the nova libvirt
> driver. Which fails about 20% of the time in the gate, as we're bringing
> computes up and down while doing a snapshot. Dan Berange did a bunch of
> debug on that and thinks it might be a qemu bug. We disabled these code
> paths, so live snapshot has now been ripped out.
>
> In January we also triggered a libvirt bug, and had to carry a private
> build of libvirt for 6 weeks in order to let people merge code in
> OpenStack.
>
> We never were able to switch to libvirt 1.1.1 in the gate using the
> Ubuntu Cloud Archive during Icehouse development, because it has a
> different set of failures that would have prevented people from merging
> code.
>
> Based on these experiences, libvirt version differences seem to be as
> substantial as major hypervisor differences. There is a proposal here -
> https://review.openstack.org/#/c/103923/ to hold newer versions of
> libvirt to the same standard we hold xen, vmware, hyperv, docker,
> ironic, etc.
>
> I'm somewhat concerned that the -2 pile on in this review is a double
> standard of libvirt features, and features exploiting really new
> upstream features. I feel like a lot of the language being used here
> about the burden of doing this testing is exactly the same as was
> presented by the docker team before their driver was removed, which was
> ignored by the Nova team at the time. It was the concern by the freebsd
> team, which was also ignored and they were told to go land libvirt
> patches instead.
>

For running our own CI, the burden was largely a matter of resource and
time constraints for individual contributors and/or startups to set up and
maintain 3rd-party CI, especially in light of a parallel requirement to
pass the CI itself. I received community responses that equated to, "if you
were serious, you'd dedicate several full-time developers and/or
infrastructure engineers available for OpenStack development, plus several
thousand a month in infrastructure itself".  For Docker, these were simply
not options. Back in January, putting 2-3 engineers fulltime toward
OpenStack would have been a contribution of 10-20% of our engineering
force. OpenStack is not more important to us than Docker itself.

This thread highlights more deeply the problems for the FreeBSD folks.
First, I still disagree with the recommendation that they contribute to
libvirt. It's a classic example of creating two or more problems from one.
Once they have support in libvirt, how long before their code is in a
version of libvirt acceptable to Nova? When they hit edge-cases or bugs,
requiring changes in libvirt, how long before those fixes are accepted by
Nova?

I concur with thoughts in the Gerrit review which suggest there should be a
non-voting gate for testing against the latest libvirt.

I think the ideal situation would be to functionally test against multiple
versions of libvirt. We'd have at least two versions: "trunk,
latest-stable". We might want "trunk, trunk-snapshot-XYZ, latest-stable,
version-in-ubuntu, version-in-rhel", or any number of back-versions
included in the gate. The version-in-rhel and version-in-ubuntu might be
good candidates for 3rd-party CI.


Regards,
Eric Windisch
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Registration for the mid cycle meetup is now closed

2014-07-16 Thread Eric Windisch
On Tue, Jul 15, 2014 at 11:55 PM, Michael Still  wrote:

> The containers meetup is in a different room with different space
> constraints, so containers focussed people should do whatever Adrian
> is doing for registration.
>

Interesting. In that case, for those that are primarily attending for
containers-specific matters, but have already registered for the Nova
mid-cycle, should we recommend they release their registrations to help
clear the wait-list?

Regards,
Eric Windisch
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Registration for the mid cycle meetup is now closed

2014-07-15 Thread Eric Windisch
On Tue, Jul 15, 2014 at 6:42 PM, Rick Harris 
wrote:

> Hey Michael,
>
> Would love to attend and give an update on where we are with Libvirt+LXC
> containers.
>
> We have bunch of patches proposed and more coming down the pike, so would
> love to get some feedback on where we are and where we should go with this.
>
> I just found out I was cleared to attend yesterday, so that's why I'm late
> in getting registered. Anyway I could squeeze in?
>

While I am registered (and registered early), I'm not sure all of the
containers-oriented folks were originally planning to come prior to last
week's addition of the containers breakout room.

I suspect other containers-oriented folks might yet want to register. If
so, I think now would be the time to speak up.

-- 
Regards,
Eric Windisch
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Containers] Nova virt driver requirements

2014-07-11 Thread Eric Windisch
>
>
> > We consider mounting untrusted filesystems on the host kernel to be
> > an unacceptable security risk. A user can craft a malicious filesystem
> > that expliots bugs in the kernel filesystem drivers. This is particularly
> > bad if you allow the kernel to probe for filesystem type since Linux
> > has many many many filesystem drivers most of which are likely not
> > audited enough to be considered safe against malicious data. Even the
> > mainstream ext4 driver had a crasher bug present for many years
> >
> >   https://lwn.net/Articles/538898/
> >   http://libguestfs.org/guestfs.3.html#security-of-mounting-filesystems
>
> Actually, there's a hidden assumption here that makes this statement not
> necessarily correct for containers.  You're assuming the container has
> to have raw access to the device it's mounting.


I believe it does in the context of the Cinder API, but it does not in the
general context of mounting devices.

I advocate having a filesystem-as-a-service or host-mount API, which nicely
aligns with the desire to mount devices on behalf of containers "on the host".
However, that doesn't change the fact that there are APIs and services whose
contract is, explicitly, to provide block devices to guests. I'll reiterate
that this is where the contract should end (it should not extend to the
ability of guest operating systems to mount; that would be silly).

None of this excludes having an opinion that mounting inside of a guest is
a *useful feature*, even if I don't believe it to be a contractually
obligated one. There is probably no harm in contemplating what mounting
inside of a guest would look like.


> For hypervisors, this
> is true, but it doesn't have to be for containers because the mount
> operation is separate from raw read and write so we can allow or deny
> them granularly.
>

I have been considering allowing containers a read-only view of a block
device. We could use seccomp to let the mount syscall appear to succeed inside
a container, even though it would otherwise be forbidden by the missing
CAP_SYS_ADMIN capability. The syscall would instead be trapped and performed
by a privileged process elsewhere on the host.

The read-only view of the block device should not itself be a security
concern. In fact, it could prove to be a useful feature in its own right.
It is the ability to write to the block device which is a risk should it be
mounted.

Having that read-only view also provides the container a certain awareness of
the existence of that volume. It allows the container to ATTEMPT a mount
operation, even if it's denied by policy. That, of course, is where seccomp
would come into play...
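
To make the read-only view concrete, here is a minimal sketch, assuming the
cgroup v1 devices controller and a purely hypothetical container cgroup path
(this is not code from any driver):

    import os

    # Hypothetical cgroup path for a running container; the devices
    # controller whitelist syntax is "TYPE MAJOR:MINOR ACCESS", where an
    # access string of "r" grants read without write (w) or mknod (m).
    CONTAINER_CGROUP = "/sys/fs/cgroup/devices/lxc/mycontainer"


    def expose_read_only(major, minor):
        # The container can now see and read the block device, but writes
        # are denied by the kernel; mount attempts remain blocked separately
        # by the missing CAP_SYS_ADMIN (or are trapped via seccomp).
        with open(os.path.join(CONTAINER_CGROUP, "devices.allow"), "w") as f:
            f.write("b %d:%d r" % (major, minor))


    expose_read_only(8, 16)  # e.g. /dev/sdb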

-- 
Regards,
Eric Windisch
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Containers] Nova virt driver requirements

2014-07-11 Thread Eric Windisch
>
>
> > Actually, there's a hidden assumption here that makes this statement not
> > necessarily correct for containers.  You're assuming the container has
> > to have raw access to the device it's mounting.  For hypervisors, this
> > is true, but it doesn't have to be for containers because the mount
> > operation is separate from raw read and write so we can allow or deny
> > them granularly.
>

I agree that a container does not have to have raw access to a device that
it is mounting, but I also believe that the right contract for Cinder
support is to expose raw access to those devices into containers. I don't
believe that Cinder support should imply an ability to support mounting the
arbitrary filesystems that might live on those volumes, just as we do not
today require the guest OS on KVM, VMware, or Xen to support mounting any
arbitrary filesystem that might live on a Cinder volume.

It might be stretching the contract slightly to say that containers cannot
currently support ANY of the potential filesystems we might expect on
Cinder volumes, but I really don't think this should be an issue or point
of contention.  I'll remind everyone, too, that raw access to volumes
(block devices) inside of containers is not a useless exercise. There are
valid things one can do with a block device that have nothing to do with
the kernel's ability to mount it as a filesystem.

I believe that for the use-case Cinder typically solves for VMs, however,
containers folks should be backing a new filesystem-as-a-service API. I'm
not yet certain that Manila is the right solution here, but it may be.

Finally, for those that do want the ability to mount from inside containers,
I think it's ultimately possible. There are ways to allow safer mounting
inside containers with varying trade-offs. I just don't think it's a necessary
part of the Nova+Cinder contract, as it pertains to the capability of the
guest, not the capability of the hypervisor (in a sense).


> Where you could avoid the risk is if the image you're getting from
> glance is not in fact a filesystem, but rather a tar.gz of the container
> filesystem. Then Nova would simply be extracting the contents of the
> tar archive and not accessing an untrusted filessytem image from
> glance. IIUC, this is more or less what Docker does.
>

Yes, this is what Docker does.
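
To illustrate the distinction with a rough sketch (not Docker's actual code):
unpacking a rootfs tarball happens entirely in userspace, so no kernel
filesystem driver ever parses attacker-controlled on-disk structures.

    import tarfile


    def unpack_rootfs(image_tarball, target_dir):
        with tarfile.open(image_tarball) as tar:
            # Real code must also sanitize member names to prevent path
            # traversal ("../../etc/passwd" style entries) before extracting.
            tar.extractall(path=target_dir)


    unpack_rootfs("cirros-rootfs.tar.gz", "/var/lib/containers/test")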

-- 
Regards,
Eric Windisch
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [containers][nova][cinder] Cinder support in containers and unprivileged container-in-container

2014-06-25 Thread Eric Windisch
>
>
> I’m reasonably sure that nobody wants to intentionally relax compute host
> security in order to add this new functionality. Let’s find the right short
> term and long term approaches
>

From our discussions, one approach that seemed popular for long-term
support was to find a way to gracefully allow mounting inside of the
containers by somehow trapping the syscall. It was presumed we would have
to make some change(s) to the kernel for this.

It turns out we can already do this using the kernel's seccomp feature.
Using seccomp, we should be able to trap the mount calls and handle them in
userspace.

References:
*
http://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/Documentation/prctl/seccomp_filter.txt?id=HEAD
* http://chdir.org/~nico/seccomp-nurse/
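
As a rough illustration of the trapping half of the idea, using the libseccomp
Python bindings (the userspace handler that would actually perform the mount
on the container's behalf is omitted):

    from seccomp import ALLOW, TRAP, SyscallFilter

    # Allow everything by default, but trap mount(2): instead of the syscall
    # simply failing, the calling process receives SIGSYS, which a userspace
    # handler can catch and forward to a privileged helper on the host.
    f = SyscallFilter(defaction=ALLOW)
    f.add_rule(TRAP, "mount")
    f.load()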

-- 
Regards,
Eric Windisch
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [containers][nova][cinder] Cinder support in containers and unprivileged container-in-container

2014-06-13 Thread Eric Windisch
>
>
> Why would you mount it from within the container?  CAP_SYS_ADMIN is a
> per process property, so you use nsenter to execute the mount in the
> required mount namespace with CAP_SYS_ADMIN from outside of the
> container (i.e. the host).  I assume this requires changes to cinder so
> it executes a mount rather than presenting a mountable device node, but
> it's the same type of change we have to do for mounts which have no
> node, like bind mounts.
>

It's a matter of API adherence. You're right, however, that another option
for this etherpad is to "extend the API". We could add an extension to
OpenStack that allows the host to initiate a mount inside an instance. That
isn't much different from the existing suggestion of a container-level API for
speaking back to the host to initiate a mount, other than this suggestion
being at the orchestration layer rather than at the host level.
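
Purely as a sketch of what that might look like (the endpoint, action name,
and payload below are all hypothetical; no such extension exists today):

    import requests

    NOVA = "http://nova-api:8774/v2/%(tenant)s/servers/%(server)s/action"


    def mount_volume_in_instance(token, tenant_id, server_id,
                                 volume_id, mountpoint):
        # A hypothetical server action asking the host to mount an already
        # attached volume inside the instance on the guest's behalf.
        body = {"os-mountVolume": {"volume_id": volume_id,
                                   "mountpoint": mountpoint}}
        resp = requests.post(NOVA % {"tenant": tenant_id, "server": server_id},
                             json=body, headers={"X-Auth-Token": token})
        resp.raise_for_status()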

In part, this discussion and the exercise of writing this etherpad is to
explore alternatives to "this isn't a valid use-case". At a high level, the
alternatives seem to be having an API the containers can use to speak back to
the host to initiate mounts, or finding some configuration of the kernel
(possibly with new features) that would provide a long-term solution.

I'm not fond of an API based solution because it means baking in
expectations of a specific containers-service API such as the Docker API,
or of a specific orchestration API such as the OpenStack Compute API. It
might, however, be a good short-term option.

Daniel also brings up an interesting point about user namespaces, although
I'm somewhat worried about that approach given that crafty filesystems can be
used to exploit the host. It had been considered that we could provide
configurations limiting containers to mounting specific types of filesystems,
such as only allowing FUSE mounts.

-- 
Regards,
Eric Windisch
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [containers][nova][cinder] Cinder support in containers and unprivileged container-in-container

2014-06-13 Thread Eric Windisch
On Fri, Jun 13, 2014 at 4:09 AM, Daniel P. Berrange 
wrote:

> On Thu, Jun 12, 2014 at 09:57:41PM +, Adrian Otto wrote:
> > Containers Team,
> >
> > The nova-docker developers are currently discussing options for
> > implementation for supporting mounting of Cinder volumes in
> > containers, and creation of unprivileged containers-in-containters.
> > Both of these currently require CAP_SYS_ADMIN[1] which is problematic
> > because if granted within a container, can lead to an escape from the
> > container back into the host.
>
> NB it is fine for a container to have CAP_SYS_ADMIN if user namespaces
> are enabled and the root user remapped.
>

Part of the discussion was in the context of filesystem modules in the
kernel being an exploit vector. Allowing FUSE is an option for safer mounts
(granted it too needs CAP_SYS_ADMIN).


> Also, we should remember that mounting filesystems is not the only use
> case for exposing block devices to containers. Some applications will
> happily use raw block devices directly without needing to format and
> mount any filesystem on them (eg databases).
>

Correct. This is reflected in the etherpad. My approach to this question
already presumed there is value in having access to block devices without
filesystems, but that there would be additional utility should we have a
viable story for mounting filesystems.

-- 
Regards,
Eric Windisch
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] [Docker] Resource

2014-05-20 Thread Eric Windisch
>
>
>  The solution I propose to this problem is to integrate docker with
> software config, which would allow the Docker api running on a compute
> instance to listen on an unix socket
>

First, thank you for looking at this.

Docker already listens on a unix socket. I'm not as familiar with Heat's
'software config' as I should be, although I attended a couple sessions on
it last week. I'm not sure how this solves the problem? Is the plan to have
the software-config-agent communicate over the network to/from Heat, and to
the instance's local unix socket?
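
For reference, talking to the Docker socket directly is straightforward; here
is a minimal sketch assuming the default /var/run/docker.sock location:

    import http.client
    import socket


    class UnixHTTPConnection(http.client.HTTPConnection):
        # Route an ordinary HTTP request over the Docker daemon's unix socket.
        def __init__(self, path="/var/run/docker.sock"):
            super().__init__("localhost")
            self.unix_path = path

        def connect(self):
            sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            sock.connect(self.unix_path)
            self.sock = sock


    conn = UnixHTTPConnection()
    conn.request("GET", "/version")
    print(conn.getresponse().read())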

-- 
Regards,
Eric Windisch
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [infra][nova][docker] dockenstack updated for nova-docker stackforge repo

2014-04-17 Thread Eric Windisch
For those following the dockenstack work, just giving an update that it has
finally been updated to work with the nova-docker stackforge repository.
The new image is now available on the docker index and may be fetched with
'docker pull ewindisch/dockenstack'. Usage instructions are available in
the dockenstack README:
https://github.com/ewindisch/dockenstack/blob/openstack-ci/README.md

Furthermore, I've started testing KVM/Qemu support. It's looking promising.
It's too early to claim it's supported, but I've only run into minor issues
so far. I'll update again when I've made further progress. Also pending,
but not far away, is the effort to have dockenstack run devstack-gate,
which will bridge much of the gap between the current dockenstack
environment and that used by openstack-infra for those that wish to quickly
run functional tests locally on their laptops/workstations (or in 3rd-party
CI).

-- 
Regards,
Eric Windisch
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra][nova][docker]

2014-04-16 Thread Eric Windisch
>
>
> > As I really would like to keep a 1:1 matching with my current Devstack
> > installation, have you tried to trick Dockenstack by modifying the
> > localrc file to say Qemu as the driver ?
>

Not yet, but I'd like to. It has been a secondary goal at this point. I
have successfully tested with the Libvirt-LXC driver, however.


> > Of course, it will need to run the container in a privileged mode (and
> > btw. building a Dockerfile with privileged mode is not yet possible) but
> > it sounds possible.
>

It isn't necessary to build with a privileged mode; it only needs to run
that way. Dockenstack already requires this for docker-in-docker anyway, so
it's not an issue to require this for Qemu.


>
> I was doing some experimentation this weekend on that, however with
> straight up devstack in docker the fact that docker actively manages
> /etc/hosts (it's not a real file, it's part of AUFS), complicates some
> things. I also couldn't seemingly get rabbitmq to work in this env.
> Honestly I expect that largely to be about hostname sensitivities, which
> is why we muck with /etc/hosts so much.


I had no problems, but I haven't tested Dockenstack with the Docker 0.9 or
0.10 releases; I last used it on 0.8.1. I'll be updating the Dockerfile and
testing it thoroughly with the latest Docker release once we merge the
devstack patches.

Regards,
Eric Windisch
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra][nova][docker]

2014-04-11 Thread Eric Windisch
>
>
> Any disagreements with that goal?
>

No disagreement at all.

While we're not yet talking about moving the driver back into Nova, I'd like
to take this opportunity to remind anyone interested in contributing a Cinder
driver that it would be a lot easier to do so while the driver is still in
Stackforge.

> Correct. If this is intended for infra, it has to use
> devstack-gate. That has lots of levers that we need to set based on
> branches, how to do the zuul ref calculations (needed for the
> speculative gating), how to do branch overrides for stable an
> upgrade jobs, etc.

I suppose I wasn't very clear and what I said may have been misinterpreted.
 I'm certainly not opposed to the integration being introduced into
devstack-gate or testing that way. I'm also happy that someone wants to
contribute on the '-infra' side of things (thank you Derek!). In part, my
response earlier was to point to work that has already been done, since
Derek pointedly asked me about those efforts.

Derek: for more clarification on the Tempest work, most of the patches
necessary for using Docker with Tempest have been merged into Tempest itself.
Some patches were rejected or expired; I can share these with you. These
patches primarily make Tempest work with Cinder, Neutron, suspend/unsuspend,
pause/resume, and snapshots disabled. Snapshot support exists in the driver,
but has an open bug that prevents Tempest from passing. Neutron support is now
integrated into the driver. Primarily, the driver lacks support for
suspend/unsuspend and pause/resume.

As for dockenstack, this might deserve a separate thread. What I've done here
is build something that may be useful to openstack-infra and might necessitate
further discussion. It's the fastest way to get the Docker driver up and
running, but that's separate from its more generic usefulness as a potential
tool for openstack-infra. Basically, I do not see dockenstack as being in
conflict with devstack-gate. If anything, it overlaps more with
'install_jenkins_slave.sh'.

What's nice about dockenstack is that improvements to that infrastructure can
easily be tested locally, and because the jobs are significantly less
dependent on that infrastructure, they may easily be run on a developer's
workstation. Have you noticed that jobs in the gate run significantly
faster than devstack on a laptop? Does that have to be the case? Can we not
consolidate these into a single solution that is always fast for everyone,
all the time? Something used in dev and gating? Something that might reduce
the costs for running openstack-infra?  That's what dockenstack is.

-- 
Regards,
Eric Windisch
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra][nova][docker]

2014-04-11 Thread Eric Windisch
>
>
> What I hope to do is setup a check doing CI on devstack-f20 nodes[3],
> this will setup a devstack based nova with the nova-docker driver and
> can then run what ever tests make sense (currently only a minimal test,
> Eric I believe you were looking at tempest support maybe it could be
> hooked in here?).
>

I'm not sure how far you've gotten, but my approach had been not to use
devstack-gate, but to build upon dockenstack (
https://github.com/ewindisch/dockenstack) to hasten the tests.

Advantages to this over devstack-gate are that:
1) It is usable for developers as an alternative to devstack-vagrant so it
may be the same environment for developing as for CI.
2) All network-dependent resources are downloaded into the image -
completely eliminating the need for mirrors/caching infrastructure.
3) Most of the packages are installed and pre-configured inside the image
prior to running the tests such that there is little time spent
initializing the testing environment.

Disadvantages are:
1) It's currently tied to Ubuntu. It could be ported to Fedora, but hasn't
been.
2) Removal of apt/rpm or even pypi dependencies may allow for
false-positive testing results (if a dependency is removed from a
requirements.txt or devstack's packages lists, it will still be installed
within the testing image); this is something that could easily be fixed
should it prove essential.

If you're interested, I'd be willing to entertain adding Fedora support to
Dockenstack.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Status of Docker CI

2014-02-28 Thread Eric Windisch
>
>
> The number of things that don't work with this driver is a big issue, I
> think.  However, we haven't really set rules on a baseline for what we
> expect every driver to support.  This is something I'd like to tackle in
> the Juno cycle, including another deadline.


Increased feature parity is something I'd like to see as well, but also
something that has been difficult to accomplish in tandem with the CI
requirement. Thankfully, the CI requirement will make it easier to test and
verify changes as we seek to add features in Juno.


> I would
> sprint toward getting everything passing, even if it means applying
> fixes to your env that haven't merged yet to demonstrate it working sooner.
>

This is precisely what I'm doing. I have been submitting patches into
code-review but have been testing and deploying off my own branch which
includes these patches (using the NOVA_REPO / NOVA_BRANCH variables in
devstack).

-- 
Regards,
Eric Windisch
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Status of Docker CI

2014-02-27 Thread Eric Windisch
We have a Jenkins server and slave configuration that has been tested and
integrated into upstream OpenStack CI.  We do not yet trigger on rechecks
due to limitations of the Gerrit Jenkins trigger plugin.  However,  Arista
has published a patch for this that we may be able to test.  Reporting into
OpenStack Gerrit has been tested, but is currently disabled as we know that
tests are failing. Re-enabling the reporting is as simple as clicking a
checkbox in Jenkins, however.

The test itself where we bring Nova up with the Docker plugin and run
tempest against it is working fairly well. The process of building the VM
image and running it is fully automated and running smoothly. Nova is
installed, started, and tempest runs against it.

Tempest is working without failures on the majority of tests. To speed
development I've been concentrating on the "tempest.api.compute" tests. To
date, I've only disabled neutron, cinder, and the v3 api. I expect that
I'll need to disable the config_drive and migration extensions as we do not
support these features in our driver. I haven't yet identified any other
extensions that do not work.

Tuesday's pass/fail for Tempest was 32 failures to 937 tests. The total
number of tests is as low as 937 because this only includes the compute api
tests, knowing that we're passing or skipping all other test suites.

Since Tuesday, I've made a number of changes including bugfixes for the
Docker driver and disabling of the config_drive and migration extensions.
I'm still running tempest against these changes, but expect to see fewer
than 20 failing tests today.

Here is a list of the tests that failed as of Tuesday:
 http://paste.openstack.org/show/69566/

Related changes in review:
* Nova
  - https://review.openstack.org/#/c/76382/
  - https://review.openstack.org/#/c/76373/
* Tempest
  - https://review.openstack.org/#/c/75267/
  # following have -1's for me to review, may be rolled into a single patch
  - https://review.openstack.org/#/c/75249/
  - https://review.openstack.org/#/c/75254/
  - https://review.openstack.org/#/c/75274/

A fair number of the remaining failures are timeout errors creating tenants
in Keystone and uploading images into Glance. It isn't clear why I'm seeing
these errors, but I'm going to attempt increasing the timeout. There may be
some more subtle problem with my environment, or it may simply be a matter
of performance, but I doubt these issues are specific to the Docker
hypervisor.

Because we don't support Neutron and the v3 API doesn't work with
nova-network, I haven't yet concentrated effort on v3. Having done some
limited testing of the v3 API, however, I've seen relatively few failures, and
most or all overlapped with the existing v2 failures. I'm not sure how Russell
or the community feels about skipping Tempest tests for v3, and I would like
to try making these pass, but I presently see it as a lower priority than
making v2 work and pass.

-- 
Regards,
Eric Windisch
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack][Nova][Docker] Devstack with docker driver

2014-01-15 Thread Eric Windisch
>
>
> @Eric, At the same time, I had to update
> /home/dstack/devstack/lib/nova_plugins/hypervisor-docker to include sudo
> for all docker commands. I think that could be one change we would need in
> docker and a documentation update for not using 127.0.0.1. I will create a
> bug and will submit a patch with the changes. Do you agree with this?
>

You can alternatively add the 'stack' user to the 'docker' group; this is
preferable to using sudo. I agree a bug report should be filed, if it isn't
there already.

I'm thinking that the docker driver can simply warn or exit if HOST_IP is set
to 127.0.0.1, as the error that is received currently is certainly not obvious
enough.
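
Something along these lines in the driver's startup path would be enough
(only a sketch; names and placement are illustrative):

    import sys


    def check_host_ip(host_ip):
        # A loopback HOST_IP resolves to the container itself rather than
        # the host, so refuse to start with a clear message instead of
        # letting instances fail later with an obscure connection error.
        if host_ip in ("localhost", "::1") or host_ip.startswith("127."):
            sys.exit("HOST_IP is set to %s, which is unreachable from inside "
                     "containers; set it to the host's routable address."
                     % host_ip)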

-- 
Regards,
Eric Windisch
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack][Nova][Docker] Devstack with docker driver

2014-01-15 Thread Eric Windisch
>
>
> I think this is because your OS_GLANCE_URL and OS_AUTH_URL are set to
> 127.0.0.1, and that address inside of the docker container is different
> than the address outside (e.g. on the host itself).  If you have glance
> and keystone running on a non-localhost address, then the traffic inside
> of the container would be able to route correctly to the host and reach
> the services.
>
>
Swapnil, try adding a value for HOST_IP into your localrc, matching your
machine's IP address.

Regards,
Eric Windisch
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] Common SSH

2014-01-08 Thread Eric Windisch
>
>
>
>> About spur: spur is looks ok, but it a bit complicated inside (it uses
>> separate threads for non-blocking stdin/stderr reading [1]) and I don't
>> know how it would work with eventlet.
>>
>
> That does sound like it might cause issues. What would we need to do to
> test it?
>

Looking at the code, I don't expect it to be an issue. The monkey-patching
will cause eventlet.spawn to be called for threading.Thread. The code looks
eventlet-friendly enough on the surface. Error handling around file read/write
could be affected, but it also looks fine.
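
A trivial way to convince ourselves of the threading behaviour (a sketch of
the mechanism, not a test of spur itself):

    import eventlet
    eventlet.monkey_patch()  # threading.Thread is now backed by green threads

    import threading


    def worker():
        # After monkey patching, this runs in an eventlet green thread, which
        # is the environment spur's background reader threads would see.
        print("worker ran")


    t = threading.Thread(target=worker)
    t.start()
    t.join()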

-- 
Regards,
Eric Windisch
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack][Nova][Docker] Devstack with docker driver

2014-01-08 Thread Eric Windisch
On Tue, Jan 7, 2014 at 11:13 PM, Swapnil Kulkarni <
swapnilkulkarni2...@gmail.com> wrote:

> Let me know in case I can be of any help getting this resolved.
>

Please try running the failing 'docker run' command manually and without
the '-d' argument. I've been able to reproduce  an error myself, but wish
to confirm that this matches the error you're seeing.

Regards,
Eric Windisch
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack][Nova][Docker] Devstack with docker driver

2014-01-07 Thread Eric Windisch
On Tue, Jan 7, 2014 at 1:16 AM, Swapnil Kulkarni <
swapnilkulkarni2...@gmail.com> wrote:

> Thanks Eric.
>
> I had already tried the solution presented on ask.openstack.org.
>

It was worth a shot.

> I also found a bug [1] and applied code changes in [2], but to no success.


Ah. I hadn't seen that change before. I agree with Sean's comment, but we
can fix up your change.

> I was just curious to know if anyone else is working on this or can provide
> some pointers from development front.
>

I'm in the process of taking over active development and maintenance of
this driver from Sam Alba.

I'll try and reproduce this myself.

Regards,
Eric Windisch
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack][Nova][Docker] Devstack with docker driver

2014-01-06 Thread Eric Windisch
>
>
> I am trying to setup devstack with docker driver and facing some issues
> related to docker-registry. Opened an issue with docker team [1] but to no
> response. Has anybody done devstack setup with docker driver?
>
>
Also, I'm sorry you didn't get a speedy response through other means, but we
should take this off the developer list, which is not for usage questions
(those usually go to the general list, the operator list, or Ask OpenStack).

Regards,
Eric Windisch
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack][Nova][Docker] Devstack with docker driver

2014-01-06 Thread Eric Windisch
On Tue, Jan 7, 2014 at 12:27 AM, Swapnil Kulkarni <
swapnilkulkarni2...@gmail.com> wrote:

> Hello,
>
> I am trying to setup devstack with docker driver and facing some issues
> related to docker-registry. Opened an issue with docker team [1] but to no
> response. Has anybody done devstack setup with docker driver?
>
> [1] https://github.com/dotcloud/docker/issues/3316
>
>
Did you run the tools/docker/install_docker.sh script?

Please try the recommended solution at Ask OpenStack:
https://ask.openstack.org/en/question/6633/errors-using-docker-with-openstack/

You might also try manually launching docker-registry, or stopping it if it
is already running.

Regards,
Eric Windisch
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Introducing the new OpenStack service for Containers

2013-12-13 Thread Eric Windisch
On Fri, Dec 13, 2013 at 1:19 PM, Chuck Short wrote:

> Hi,
>
> I have definitely seen a drop off in the proposed Container-Service API
> discussion
>

There was only one action item from the meeting, which was a compilation of
use-cases from Krishna.

Krishna, have you made progress on the use-cases? Is there a wiki page?

Regards,
Eric Windisch
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] New API requirements, review of GCE

2013-12-03 Thread Eric Windisch
> One other issue with the proposed GCE changes is that it uses the custom
> wsgi which we are trying to phase out eventually.  Should we be suggesting
> that new APIs use Pecan/WSME?

Nova isn't using Pecan/WSME for any of its API services. Is Nova truly
going in this direction? I'd think that consistency with Nova would
take precedence over perceived overall OpenStack momentum. The GCE
code could be updated along with the rest of Nova.  However, if the
community is committed to migrating Nova over to Pecan/WSME across the
board, it shouldn't be too problematic to update the GCE code prior to
submission.

Does anyone else care to comment or discuss using Pecan/WSME? Alex
Levine? Doug Hellman? Russell?

-- 
Regards,
Eric Windisch
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] New API requirements, review of GCE

2013-12-02 Thread Eric Windisch
I'd like to move this conversation along. It seems to have both
stalled and digressed.

> Just like with compute drivers, the raised bar applies to existing
> drivers, not just new ones.  We just gave a significant amount of
> lead-time to reach it.

I agree with this. The bar needs to be raised. The Tempest tests we
have should be passing. If they can't pass, they shouldn't be skipped;
the underlying support in Nova should be fixed. Is anyone arguing
against this?

The GCE code will need Tempest tests too (thankfully, the proposed
patches include them). While it might be an even greater
uphill battle for GCE, versus EC2, to gain community momentum, it
cannot ever gain that momentum without an opportunity to do so. I
agree that Stackforge might normally be a reasonable path here, but I
share Mark's reservations around tracking the internal Nova APIs.

> I'm actually quite optimistic about the future of EC2 in Nova.  There is
> certainly interest.  I've followed up with Rohit who led the session at
> the design summit and we should see a sub-team ramping up soon.  The
> things we talked about the sub-team focusing on are in-line with moving

It sounds like the current model and process, while not perfect, aren't
too dysfunctional. Attempting to move the EC2 or GCE code into a
Stackforge repository might kill them before they can reach that bar
you're looking to set.

What more is needed from the blueprint or the patch authors to proceed?

-- 
Regards,
Eric Windisch

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] maintenance policy for code graduating from the incubator

2013-11-29 Thread Eric Windisch
> Based on that, I would like to say that we do not add new features to
> incubated code after it starts moving into a library, and only provide
> "stable-like" bug fix support until integrated projects are moved over to
> the graduated library (although even that is up for discussion). After all
> integrated projects that use the code are using the library instead of the
> incubator, we can delete the module(s) from the incubator.

+1

Although never formalized, this is how I had expected we would handle
the graduation process. It is also how we have been responding to
patches and blueprints offering improvements and feature requests for
oslo.messaging.

-- 
Regards,
Eric Windisch

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Introducing the new OpenStack service for Containers

2013-11-22 Thread Eric Windisch
On Fri, Nov 22, 2013 at 11:49 AM, Krishna Raman  wrote:
> Reminder: We are meting in about 15 minutes on #openstack-meeting channel.

I wasn't able to make it. Was meeting-bot triggered? Is there a log of
today's discussion?

Thank you,
Eric Windisch

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Introducing the new OpenStack service for Containers

2013-11-19 Thread Eric Windisch
On Tue, Nov 19, 2013 at 1:02 PM, James Bottomley
 wrote:
> On Mon, 2013-11-18 at 14:28 -0800, Stuart Fox wrote:
>> Hey all
>>
>> Not having been at the summit (maybe the next one), could somebody
>> give a really short explanation as to why it needs to be a separate
>> service?
>> It sounds like it should fit within the Nova area. It is, after all,
>> just another hypervisor type, or so it seems.
>
> I can take a stab at this:  Firstly, a container is *not* a hypervisor.
> Hypervisor based virtualisation is done at the hardware level (so with
> hypervisors you boot a second kernel on top of the virtual hardware),
> container based virtualisation is done at the OS (kernel) level (so all
> containers share the same kernel ... and sometimes even huge chunks of
> the OS). With recent advances in the Linux Kernel, we can make a
> container behave like a hypervisor (full OS/IaaS virtualisation), but
> quite a bit of the utility of containers is that they can do much more
> than hypervisors, so they shouldn't be constrained by hypervisor APIs
> (which are effectively virtual hardware APIs).
>
> It is possible to extend the Nova APIs to control containers more fully,
> but there was resistance do doing this on the grounds that it's
> expanding the scope of Nova, hence the new project.

It might be worth noting that it was also brought up that
hypervisor-based virtualization can offer a number of features that
bridge some of these gaps, but which are not supported in Nova and may
never be.

For example, Daniel brings up an interesting point with the
libvirt-sandbox feature. This is one of those features that bridges
some of the gaps. There are also solutions, however brittle, for
introspection that work on hypervisor-driven VMs. It is not clear what
the scope or desire for these features might be, how they might be
sufficiently abstracted between hypervisors and guest OSes, nor how
these would fit into any of the existing or planned compute API
buckets.

Having a separate service for managing containers draws a thick line
in the sand that will somewhat stifle innovation around
hypervisor-based virtualization. That isn't necessarily a bad thing;
it will help maintain stability in the project. However, the choice
and the implications shouldn't be ignored.

-- 
Regards,
Eric Windisch

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Oslo] Layering olso.messaging usage of config

2013-11-18 Thread Eric Windisch
> It seems to me that having this separation of concerns in oslo.messaging
> would be good idea. My plan is to move out the configuration object out
> of the basic object, like I did in the first patch.
>
> I don't plan to break the configuration handling or so, I just think it
> should be handled in a separate, individually testable part of the code.
>
> Ultimately, this would allow oslo.messaging to not be 'oslo' only, and
> just being friendly with oslo.config and therefore OpenStack. A goal I
> wish we'd had in more oslo library. :)

I'd like to see more decoupling from oslo.config here, in the same way
that eventlet may be used, but is no longer required, when using
oslo.messaging. That is to say, I agree with your premise, even if the
patch and blueprint need work.

I agree with Mark that a clear blueprint and plan should be outlined.
I'm not happy with the blueprint as written; it is sparse, outlining
your intention rather than a plan of attack. I'd really like to know
what the plan of attack here is, how this will affect the
oslo.messaging API, and what it will look like once complete. It
shouldn't break the public API, and if for some reason you don't think
it is possible to keep that contract, it should be discussed.
Personally, I'd be happy with a wiki page outlining these changes,
linked to from the blueprint.

Also, I've noticed that your blueprint is registered underneath
Ceilometer.  Once you move or recreate the blueprint, please email the
list with the updated URL.

-- 
Regards,
Eric Windisch

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ALL] Removing generate_uuid() from uuidutils

2013-11-13 Thread Eric Windisch
>> Each project should directly use the standard uuid module or implement its
>> own helper function to generate uuids if this patch gets in.
>>
>> Any thoughts on this change? Thanks.
>
>
> Unfortunately it looks like that change went through before I caught up on
> email. Shouldn't we have removed its use in the downstream projects (at
> least integrated projects) before removing it from Oslo?

I don't think it is a problem to remove the code in oslo first, as
long as no other oslo-incubator code uses it. Projects don't have to
sync the code and could always revert the sync should they need to.
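
For reference, the removed helper amounts to essentially the following, so
carrying it per-project (or calling the uuid module directly) is cheap:

    import uuid


    def generate_uuid():
        # Essentially what the removed oslo helper did: a random
        # (version 4) UUID rendered as a string.
        return str(uuid.uuid4())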

However, like Mark, I'm inclined to consider the value of
is_uuid_like. While undoubtedly useful, is one method sufficient to
warrant creating a new top-level module? Waiting for it to hit the
standard library will take quite a long time...

There are other components of oslo that are terse and questionable as
standalone libraries. For these, it might make sense to aggressively
consider rolling some modules together.

One clear example would be log.py and log_handler.py; another would be
periodic_task.py and loopingcall.py.

-- 
Regards,
Eric Windisch

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Token Revocation using hash of token id

2013-11-12 Thread Eric Windisch
On Tue, Nov 12, 2013 at 2:47 PM, Dolph Mathews  wrote:
> Relevant etherpad -
> https://etherpad.openstack.org/p/icehouse-token-revocation

Thank you. I had presumed the etherpad was out of date as the
discussion at the summit centered around what appeared to be a text
editor (vim or emacs), rather than a webpage.

-- 
Regards,
Eric Windisch

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Token Revocation using hash of token id

2013-11-12 Thread Eric Windisch
During the token revocation discussion at the summit, I suggested it
would be possible to revoke tokens using a hash of the token id (which
is already an MD5 hash). That way, the revocation file would be able
to specify individual hashes for revocation without dangerously
presenting secrets.

I should amend that suggestion to say that, should this be done, the
hash will need to be salted. Otherwise, rainbow tables could be used
to attack the original secrets. In fact, this would be exacerbated by
the limited domain of the hash function, knowing that the input would
always be the 128-bit output of MD5.
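
A minimal sketch of what I mean; the choice of HMAC-SHA256 here is
illustrative rather than a proposal for a specific algorithm:

    import hashlib
    import hmac


    def revocation_entry(token_id, salt):
        # token_id is already a 32-character MD5 hex digest; keying the hash
        # with a per-deployment salt defeats precomputed rainbow tables over
        # that small input domain.
        return hmac.new(salt, token_id.encode("utf-8"),
                        hashlib.sha256).hexdigest()


    # revocation_entry("0123456789abcdef0123456789abcdef", b"deployment-salt")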

This much might be obvious, but I felt it was worth clarifying and
etching into the blueprint or other design documentation.

-- 
Regards,
Eric Windisch

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance] Replacing Glance DB code to Oslo DB code.

2013-08-16 Thread Eric Windisch
On Fri, Aug 16, 2013 at 9:31 AM, Victor Sergeyev  wrote:
> Hello All.
>
> Glance cores (Mark Washenberger, Flavio Percoco, Iccha Sethi) have some
> questions about Oslo DB code, and why is it so important to use it instead
> of custom implementation and so on. As there were a lot of questions it was
> really hard to answer on all this questions in IRC. So we decided that
> mailing list is better place for such things.
>
> List of main questions:
>
> 1. What includes oslo DB code?
> 2. Why is it safe to replace custom implementation by Oslo DB code?

Just to head off these two really quickly: the database code in Oslo as
initially submitted was actually based largely on that in Glance,
merging in some of the improvements made in Nova. There might have
been some divergence since then, but migrating over shouldn't be
terribly difficult. While it isn't necessary for Glance to switch
over, it would be somewhat ironic if it didn't.

The database code in Oslo primarily keeps base models and various
things we can easily share, reuse, and improve across projects. I
suppose a big part of this is the session management, which has been
moved out of api.py and into its own session.py module. This
session management code is probably where you'll most have to decide
whether it is worthwhile to bring it in, and whether Glance really has
such unique requirements that it needs to bother with maintaining this
code on its own.

-- 
Regards,
Eric Windisch

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] ack(), reject() and requeue() support in rpc ...

2013-08-15 Thread Eric Windisch
On Wed, Aug 14, 2013 at 4:08 PM, Sandy Walsh  wrote:
> At Eric's request in https://review.openstack.org/#/c/41979/ I'm
> bringing this to the ML for feedback.

Thank you Sandy.

> Currently, oslo-common rpc behaviour is to always ack() a message no
> matter what.

Actually, the Qpid and Kombu drivers default to this. The behavior and
the expectation of the abstraction itself is different, in my opinion.
The ZeroMQ driver doesn't presently support acknowledgements, and
they're not supported or exposed by the abstraction itself.

The reason I've asked for a mailing list post is because
acknowledgements aren't presently baked into the RPC abstraction/API.
You're suggesting that the idea of acknowledgements leaks into the
abstraction. It isn't necessarily bad, but it is significant enough I
felt it warranted visibility here on the list.

> Since each notification has a unique message_id, it's easy to detect
> events we've seen before and .reject() them.

Only if you have a very small number of consumers, or store and look
up the seen messages in a global state store such as memcache. That
might work in the limited use-cases where you intend to deploy this,
but might not be appropriate at the level of a general abstraction.
I've seen features we support, such as fanout(), get horribly abused
simply because they're available, used outside their respective
edge-cases for patterns they don't work well for.
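
If someone did want cross-consumer deduplication, it would look roughly like
this; a sketch assuming python-memcached and a shared memcached instance, not
something I am proposing for the abstraction:

    import memcache

    client = memcache.Client(["127.0.0.1:11211"])


    def seen_before(message_id, window=600):
        # add() only succeeds when the key is absent, so a failed add means
        # some consumer already recorded this message_id within the window
        # and the message can be reject()ed as a duplicate.
        return not client.add("seen:%s" % message_id, 1, time=window)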

I suppose there is much to be said about giving people the leverage to
shoot themselves in their own feet, but I'm interested in knowing more
about how you intend to implement the rejection mechanism. I assume
you intend to implement this at the consumer level within a project
(i.e. Ceilometer), or is this something you intend to put into
service.py?

Also, fyi, I'm not actually terribly opposed to this patch. It makes
some sense. I just want to make sure we don't foul up the abstraction
in some way or unintentionally give developers rope they'll inevitably
strangle themselves on.

--
Regards,
Eric Windisch

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Python 3

2013-07-24 Thread Eric Windisch
> Speaking of preferred ways to port, has there been any discussion about
> which version takes precedence when we have to do different things? For
> example, with imports, should we be trying the 2.x name first and falling
> back to 3.x on ImportError, or vice versa?
>

Are we having it now? My belief here is we should be following the
principle of "ask forgiveness, not permission": try Python 3 and then
fall back to Python 2 whenever possible.
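
For the renamed-module case, that looks like:

    try:
        # Ask forgiveness: assume the Python 3 name first...
        import configparser
    except ImportError:
        # ...and fall back to the Python 2 name only if that fails.
        import ConfigParser as configparser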

-- 
Regards,
Eric Windisch
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Python 3

2013-07-23 Thread Eric Windisch
On Tue, Jul 23, 2013 at 4:41 PM, Logan McNaughton wrote:

> I'm sure this has been asked before, but what exactly is the plan for
> Python 3 support?
>
> Is the plan to support 2 and 3 at the same time? I was looking around for
> a blue print or something but I can't seem to find anything.
>
I suppose a wiki page is due. This was discussed at the last summit:
https://etherpad.openstack.org/havana-python3

The plan is to support Python 2.6+ from the 2.x series, along with Python
3.3+. This effort has begun for libraries (oslo) and clients. Work on the
primary projects is appreciated, but will ultimately stall if the library
work is not completed first.

-- 
Regards,
Eric Windisch
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Opinions needed: Changing method signature in RPC callback ...

2013-07-18 Thread Eric Windisch
> > These callback methods are part of the Kombu driver (and maybe part of
> > Qpid), but are NOT part of the RPC abstraction. These are private
> > methods. They can be broken for external consumers of these methods,
> > because there shouldn't be any. It will be a good lesson to anyone that
> > tries to abuse private methods.
>
> I was wondering about that, but I assumed some parts of amqp.py were
> used by other transports as well (and not just impl_kombu.py)
>
> There are several callbacks in amqp.py that would be affected.


The code in amqp.py is used by the Kombu and Qpid drivers and might implement
the public methods expected by the abstraction, but it does not define that
abstraction. The RPC abstraction is defined in __init__.py, and does not
define callbacks. Other drivers, which at present means only the ZeroMQ
driver, are not expected to define a callback method; as a private method,
there is no template to follow nor any expectation that they implement it.

I'm not saying your proposed changes are bad or invalid, but there is no
need to make concessions to the possibility that code outside of oslo would
be using callback(). This opens up the option, besides creating a new
method, of simply updating all the existing method calls in amqp.py,
impl_kombu.py, and impl_qpid.py.

-- 
Regards,
Eric Windisch
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Opinions needed: Changing method signature in RPC callback ...

2013-07-18 Thread Eric Windisch
On Thu, Jul 18, 2013 at 10:09 AM, Sandy Walsh wrote:

>
> My worry is busting all the other callbacks out there that use
> olso-common.rpc
>

These callback methods are part of the Kombu driver (and maybe part of
Qpid), but are NOT part of the RPC abstraction. These are private methods.
They can be broken for external consumers of these methods, because there
shouldn't be any. It will be a good lesson to anyone that tries to abuse
private methods.

-- 
Regards,
Eric Windisch
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OSLO] Comments/Questions on Messaging Wiki

2013-07-17 Thread Eric Windisch
>
> And this doesn't make sense:
>
>   Target(exchange='nova', topic='compute', server='compute1', fanout=True)

That isn't strictly true. We aren't using it and I'm not sure why we would
want to use it, but it isn't senseless as you imply.

There should be no reason, at present with the old API, that someone couldn't
consume a topic "compute.hostname" as a fanout queue. If this doesn't work
with RabbitMQ or Qpid, that is a bug.

Practically, you can have multiple processes consuming the same
topic/exchange on a single server. The Nova scheduler is a good example of
where this works (whether or not it is a good idea).  Two processes on
"host1" can consume exchange "nova" with topic "scheduler". A fanout from
nova-api should reach both processes. If for some reason, "host1" was
explicitly requested, a fanout should still reach both schedulers on
"host1" (but not schedulers on other hosts).

Since we're talking about AMQP 1.0 here, I should note that the necessity
to support multiple processes consuming the same (topic, exchange, server)
grouping is why we have an *-rpc-zmq-receiver process for ZeroMQ's
peer-to-peer messaging construct. Not just for fanout, but also for
queuing. With non-fanout messaging to a specific (topic, exchange, server)
group, messages should be queued and consumed in turn by the various
processes on the host consuming that grouping. The *-rpc-zmq-receiver
process provides the local queuing and fanout for multiple consumers of a
single topic on a server for these reasons. An AMQP 1.0 driver would need
to have a similar process and could leverage some code reuse.

Again, I'm hard-pressed to see where explicit server + fanout would be useful.
 One example might be an in-memory state update between local processes
using something akin to the following target:

 Target(exchange='nova', topic='scheduler', server=CONF.host,
fanout=True)

Regards,
Eric Windisch
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OSLO] Comments/Questions on Messaging Wiki

2013-07-16 Thread Eric Windisch
>
>
> >   Target(exchange='nova', topic='compute')
> >   Target(exchange='nova', topic='compute', server='compute1')
> >   Target(exchange='nova', topic='compute', fanout=True)
> >
> > In the first case, any compute service will do. In the second, you want
> > to invoke the method on a particular compute service. The the latter
> > case, you want to invoke it on all compute services.
>
> This really helps understand some of what I've read. Thanks.
>
> It seems that exchange is really just a high level qualifier of a
> namespace for the most part.
>
> Q: if in the above last Target, fanout was false (fanout=False) would that
> mean that you are expecting queue type behavior in that instance? i.e. I
> want only one consumer, I don't care which one, but only one consumer to
> service this request? So that syntax would change the semantics from
> pub/sub topic (i.e. all subscribers to the topic get it) to a queue
> semantic (first consumer to acquire the message causes it to dequeue and be
> not available to others?
>

Correct. Provided fanout=False, the behavior would be the same as with the
first example.

Also, I should add that there is a module in Oslo called the Matchmaker to
allow consumers to subscribe their addresses to virtual queues in a
peer-to-peer setting. This is presently used by the ZeroMQ driver. Because
a pure peer-to-peer system has no centralized broker, there needs to be
some peer tracker to provide an analogue to a queue. It would be possible
for an AMQP 1.0 based driver to leverage this module.

-- 
Regards,
Eric Windisch
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] AMQP Version upgrade plans?

2013-07-08 Thread Eric Windisch
> > I actually have a big interest in adding support for AMQP 1.0.  One of
> > the *really* interesting things with AMQP 1.0 that is relevant for
> > OpenStack is that it is not tied to using a message broker.  You can use
> > it in both a peer-to-peer and a brokered way.  This is interesting,
> > because depending on the messaging pattern we're using, we may want one
> > vs the other in different parts of OpenStack.
> > As for the specifics to doing this, the only AMQP 1.0 client I know of
> > is Proton.  It does have Python bindings.
> >
>
> AMQP 0-{8,9,10} are asymmetric client/server protocols with a well-defined
> server/broker behavior. AMQP 1.0, by contrast, is a symmetric wire-line
> protocol, without a requirement for a broker at all.  As Russell points
> out, this opens the possibility of brokerless peer-to-peer messaging.
>
> The problem with peer-to-peer at the transport is that it doesn't scale
> well, doesn't provide fanout, requires complicated URL management, and
> causes problems with firewalls.  It might be appropriate for a small number
> of use cases, but a broker or some other form of intermediary is still
> needed.
>

The ZeroMQ driver in OpenStack RPC already does peer-to-peer messaging and
overcomes the fanout and other issues through code that already resides in
oslo.rpc. Most of that functionality lies in the "matchmaker" which can, as
you suggest, integrate an intermediary for the purpose of peer tracking.

I intentionally approached the ZeroMQ / MatchMaker driver design to
facilitate the use of other transports. The primary purpose of the ZeroMQ
driver is not to leverage a buzzword; it exists for the specific purpose of
offering a peer-to-peer solution.

If someone so strongly desires and prefers AMQP 1.0 over ZeroMQ for
peer-to-peer messaging that they'll write and maintain an implementation
for oslo.rpc / oslo.messaging, I'd be happy to see it introduced. I suspect
there is much code that could be shared and reused, as well.

-- 
Regards,
Eric Windisch
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev