Would appreciate feedback / opinions on this blueprint:
https://blueprints.launchpad.net/nova/+spec/first-discover-your-peers
The idea is: clustered services typically run some sort of gossip protocol,
but need to find (just) one peer to connect to. In the physical
environment, this was done
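To make the blueprint's goal concrete: a node only needs one live seed peer to join the gossip mesh, so discovery reduces to "find any reachable peer". This is a minimal sketch of that step; the names and the probe callback are illustrative, not from the spec.

```python
# Sketch: joining a gossip cluster needs only one live seed peer.
# find_seed and the probe callback are illustrative, not from the blueprint.

def find_seed(candidates, probe):
    """Return the first candidate peer that answers a probe, else None.

    candidates: iterable of (host, port) tuples discovered out-of-band
    probe: callable returning True if the peer is reachable
    """
    for peer in candidates:
        if probe(peer):
            return peer
    return None


if __name__ == "__main__":
    # Pretend only the second peer is alive.
    alive = {("10.0.0.6", 7946)}
    seed = find_seed(
        [("10.0.0.5", 7946), ("10.0.0.6", 7946)],
        probe=lambda p: p in alive,
    )
    print(seed)  # ('10.0.0.6', 7946)
```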
before me doesn’t really help if they’ve been deleted (but are still in the
cached values), does it?
Since there is some external agent creating these instances, why can’t that
just provide the details directly as user-defined metadata?
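Phil's suggestion could work roughly like this: the orchestrator boots the instances with the seed list as user-defined metadata, and each instance reads it back from the metadata service's meta_data.json (user keys appear under the top-level "meta" mapping). The "peers" key is invented for this sketch; the parsing is shown against a sample document rather than a live call to 169.254.169.254.

```python
import json

# Sketch: an external agent passes the seed list as user-defined metadata;
# the instance reads it back from the metadata service. The "peers" key is
# made up for this example.

def peers_from_metadata(meta_data_json):
    """Extract a peer list from an OpenStack meta_data.json document.

    On a real instance this document comes from
    http://169.254.169.254/openstack/latest/meta_data.json;
    user-defined keys appear under the top-level "meta" mapping.
    """
    doc = json.loads(meta_data_json)
    raw = doc.get("meta", {}).get("peers", "")
    return [p for p in raw.split(",") if p]


sample = '{"uuid": "abc", "meta": {"peers": "10.0.0.5,10.0.0.6"}}'
print(peers_from_metadata(sample))  # ['10.0.0.5', '10.0.0.6']
```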
Phil
From: Justin Santa Barbara [mailto:jus
On Fri, Jan 24, 2014 at 12:55 PM, Day, Phil philip@hp.com wrote:
I haven't actually found where metadata caching is implemented,
although the constructor of InstanceMetadata documents restrictions that
really only make sense if it is. Anyone know where it is cached?
Here’s the code
Well if you're on a Neutron private network then you'd only be DDoS-ing
yourself. In fact I think Neutron allows broadcast and multicast on private
networks, and as nova-net is going to be deprecated at some point I wonder
if this is reducing to a corner case?
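If broadcast is indeed available on the private network, discovery could be as simple as a UDP shout. A minimal sketch, with one caveat: for the demo both ends run locally and the datagram goes to 127.0.0.1; on a real Neutron private network it would be sent to the subnet broadcast address, with every peer running the listener.

```python
import socket
import threading

# Sketch of broadcast-based discovery on a private network. For the demo the
# datagram is sent to 127.0.0.1; on a real tenant network it would go to the
# subnet broadcast address, and every peer would run the listener.

def run_listener(sock, found):
    data, addr = sock.recvfrom(1024)
    found.append((data, addr))

listener = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
listener.bind(("127.0.0.1", 0))          # ephemeral port, demo only
port = listener.getsockname()[1]
found = []
t = threading.Thread(target=run_listener, args=(listener, found))
t.start()

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
sender.sendto(b"WHO-IS-THERE", ("127.0.0.1", port))

t.join(timeout=5)
print(found[0][0])  # b'WHO-IS-THERE'
listener.close()
sender.close()
```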
Neutron may well re-enable
Clint Byrum cl...@fewbar.com wrote:
Heat has been working hard to be able to do per-instance limited access
in Keystone for a while. A trust might work just fine for what you want.
I wasn't actually aware of the progress on trusts. It would be helpful
except (1) it is more work to have to
Murray, Paul (HP Cloud Services) wrote:
Multicast is not generally used over the internet, so the comment about
removing multicast is not really justified, and any of the approaches that
work there could be used.
I think multicast/broadcast is commonly used 'behind the firewall', but I'm
Fox, Kevin M wrote:
Would it make sense to simply have the neutron metadata service
re-export every endpoint listed in keystone at
/openstack/api/endpoint-name?
Do you mean with an implicit token for read-only access, so the instance
doesn't need a token? That is a superset of my proposal,
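As a rough illustration of Kevin's idea (the catalog entries and the path layout here are invented, not an existing Neutron feature): the metadata service would map /openstack/api/<endpoint-name> onto the corresponding Keystone catalog URL.

```python
# Sketch of re-exporting the Keystone service catalog through the metadata
# service under /openstack/api/<endpoint-name>. The catalog contents and the
# path layout are invented for illustration.

CATALOG = {
    "compute": "http://nova.example.com:8774/v2",
    "image": "http://glance.example.com:9292",
}

def metadata_route(path):
    """Resolve a metadata-service path like /openstack/api/compute to the
    catalog URL it would proxy, or None for unknown paths."""
    prefix = "/openstack/api/"
    if not path.startswith(prefix):
        return None
    return CATALOG.get(path[len(prefix):])


print(metadata_route("/openstack/api/compute"))  # http://nova.example.com:8774/v2
```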
I suppose we disagree on this fundamental point then.
Heat's value-add really does come from solving this exact problem. It
provides a layer above all of the other services to facilitate expression
of higher-level concepts. Nova exposes a primitive API, whereas Heat is
meant to have a more
Day, Phil wrote:
We already have a mechanism now where an instance can push metadata as
a way of Windows instances sharing their passwords - so maybe this could
build on that somehow - for example each instance pushes the data it's
willing to share with other instances owned by the same
sharing policies (e.g. when we have hierarchical projects)
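A toy model of Phil's push-and-share idea (the store and its policy are illustrative; the real mechanism would sit behind the metadata service, like the Windows password push): each instance publishes what it is willing to share, and reads back only values published by other instances in the same project.

```python
# Sketch: each instance pushes the values it is willing to share; reads are
# scoped to the same project. Store and policy are illustrative only.

class SharedMetadataStore:
    def __init__(self):
        self._data = {}   # project_id -> {instance_id: value}

    def push(self, project_id, instance_id, value):
        self._data.setdefault(project_id, {})[instance_id] = value

    def peers(self, project_id, instance_id):
        """Values shared by *other* instances in the same project."""
        shared = self._data.get(project_id, {})
        return {i: v for i, v in shared.items() if i != instance_id}


store = SharedMetadataStore()
store.push("proj-a", "vm-1", "10.0.0.5:7946")
store.push("proj-a", "vm-2", "10.0.0.6:7946")
store.push("proj-b", "vm-9", "10.1.0.9:7946")
print(store.peers("proj-a", "vm-1"))  # {'vm-2': '10.0.0.6:7946'}
```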
Thank you to everyone for helping me get to what IMHO is a much better
solution than the one I started with!
Justin
On Tue, Jan 28, 2014 at 4:38 AM, John Garbutt j...@johngarbutt.com wrote:
On 27 January 2014 14:52, Justin Santa
Vishvananda Ishaya vishvana...@gmail.com wrote:
On Jan 28, 2014, at 12:17 PM, Justin Santa Barbara jus...@fathomdb.com
wrote:
Thanks John - combining with the existing effort seems like the right
thing to do (I've reached out to Claxton to coordinate). Great to see
that the larger issues around quotas
Given the issues we continue to face with achieving stable APIs, I
hope there will be some form of formal API review before we approve
any new OpenStack APIs. When we release an API, it should mean that
we're committing to support that API _forever_.
Glancing at the specification, I noticed some
Jarret Raim wrote:
I'm presuming that this is our last opportunity for API review - if
this isn't the right occasion to bring this up, ignore me!
I wouldn't agree here. The barbican API will be evolving over time as we
add new functionality. We will, of course, have to deal with backwards
Russell Bryant wrote:
So, it seems that at the root of this, you're looking for a
cloud-compatible way for instances to message each other.
No: discovery of peers, not messaging. After discovery, communication
between nodes will then be done directly e.g. over TCP. Examples of
services that
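To illustrate that split between discovery and communication: discovery only hands out an address, after which the nodes talk directly over TCP. This demo runs both ends locally; in practice the server would be the discovered peer.

```python
import socket
import threading

# Sketch: discovery yields a peer address; communication then happens
# directly over TCP. Both ends run locally for this demo.

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)
peer_addr = server.getsockname()       # what discovery would have returned

def serve():
    conn, _ = server.accept()
    conn.sendall(b"hello from peer")
    conn.close()

t = threading.Thread(target=serve)
t.start()

client = socket.create_connection(peer_addr)
greeting = client.recv(1024)
print(greeting)  # b'hello from peer'
client.close()
t.join()
server.close()
```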
Russell Bryant wrote:
I'm saying use messaging as the means to implement discovery.
OK. Sorry that I didn't get this before.
1) Marconi isn't widely deployed
Yet.
I think we need to look to the future and decide on the right solution
to the problem.
Agreed 100%. I actually believe
In the nova API, how is server listing with all_tenants supposed to work
with Keystone domains? If I'm an admin of Domain1, I'm probably not an
admin of Domain2. So, presumably all_tenants in a Domain1 project should
list all projects under Domain1, but not those under Domain2.
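A sketch of the behaviour the question suggests (the project-to-domain mapping here is invented; in a real deployment it comes from Keystone): all_tenants would list only servers whose project belongs to the caller's domain.

```python
# Sketch: all_tenants scoped to the caller's domain. The project -> domain
# mapping is invented; in reality it comes from Keystone.

PROJECT_DOMAIN = {"p1": "Domain1", "p2": "Domain1", "p3": "Domain2"}

def list_all_tenants(servers, admin_domain):
    """Return only servers whose project belongs to admin_domain."""
    return [s for s in servers
            if PROJECT_DOMAIN.get(s["project_id"]) == admin_domain]


servers = [
    {"id": "a", "project_id": "p1"},
    {"id": "b", "project_id": "p3"},
]
print(list_all_tenants(servers, "Domain1"))  # [{'id': 'a', 'project_id': 'p1'}]
```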
But I thought
We do have a Python client library available for interaction with
Barbican as well: https://github.com/cloudkeep/python-barbicanclient
Thanks,
John
--
*From:* Justin Santa Barbara [jus...@fathomdb.com]
*Sent:* Sunday, September 22, 2013 2:25 PM