Re: [Openstack] Future of Launchpad OpenStack mailing list (this list)

2012-09-04 Thread Troy Toman
My apologies. I missed that detail in the rush to get through my inbox this 
morning. I do think we should move off of the launchpad-based list. But, I 
prefer the simpler format of just 'openstack' vs. 'openstack-general'

So, I still would vote for A2 over new option Z.

Troy

On Sep 4, 2012, at 8:50 AM, Thierry Carrez thie...@openstack.org wrote:

 Troy Toman wrote:
 I would vote for A2. I don't see a need to replace this list.
 
 You mean you'd just keep it the way it currently is (with the general
 list hosted on Launchpad)? That would be a new option:
 
 Option Z:
 openstack@lists.launchpad.net
 openstack-operat...@lists.openstack.org
 openstack-annou...@lists.openstack.org
 openstack-...@lists.openstack.org
 
 -- 
 Thierry Carrez (ttx)
 Release Manager, OpenStack




Re: [Openstack] A plea from an OpenStack user

2012-08-28 Thread Troy Toman
I hope everyone takes time to read Ryan's note. We all need to keep this in 
mind even more going forward. Almost all of the required changes can be 
implemented without causing disruption, but that won't happen by accident. We try 
to cope with this by absorbing changes in smaller bites (by staying close to 
trunk), but that's still challenging and really just a coping strategy, not a 
solution.

I think we can do better.

Troy

On Aug 28, 2012, at 4:26 PM, Ryan Lane rl...@wikimedia.org wrote:

 Yesterday I spent the day finally upgrading my nova infrastructure
 from diablo to essex. I've upgraded from bexar to cactus, and cactus
 to diablo, and now diablo to essex. Every single upgrade is becoming
 more and more difficult. It's not getting easier, at all. Here's some
 of the issues I ran into:
 
 1. Glance changed from using image numbers to uuids for images. Nova's
 references to these weren't updated, and there was no automated way to do
 so. I had to map the old values to the new values from glance's
 database, then update them in nova (roughly the first sketch after this list).
 
 2. Instance hostnames are changed every single release. In bexar and
 cactus it was the ec2-style id. In diablo it was changed and hardcoded
 to instance-ec2-style-id. In essex it is hardcoded to the instance
 name; the instance's ID is configurable (with a default of
 instance-ec2-style-id), but that only affects the name used in
 virsh/the filesystem. I put a hack into diablo (thanks to Vish for
 that hack) to fix the naming convention so as not to break our production
 deployment, but it only affected the hostnames in the database;
 instances in virsh and on the filesystem were still named
 instance-ec2-style-id, so I had to fix all libvirt definitions and
 rename a ton of files during this upgrade, since our
 naming convention is the ec2-style format. The hostname change still
 affected our deployment, though. It's hardcoded. I decided to simply
 switch hostnames to the instance name in production, since our
 hostnames are required to be unique globally; however, that changes
 how our puppet infrastructure works too, since the certname is by
 default based on fqdn (I changed this to use the ec2-style id). Small
 changes like this have giant rippling effects in infrastructures.
 
 3. There used to be global groups in nova. In keystone there are no
 global groups. This makes performing actions on sets of instances
 across tenants incredibly difficult; for instance, I did an in-place
 ubuntu upgrade from lucid to precise on a compute node, and needed to
 reboot all instances on that host. There's no way to do that without
 database queries fed into a custom script (see the second sketch after
 this list). Also, I have to have a management user added to every single
 tenant and every single tenant-role.
 
 4. Keystone's LDAP implementation in stable was broken. It returned no
 roles, many values were hardcoded, etc. The LDAP implementation in
 nova worked, and it looks like its code was simply ignored when auth
 was moved into keystone.
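
 Roughly, the kind of one-off script item 1 required (a sketch only, not
 the exact script: it assumes MySQL backends and the default glance/nova
 schemas as I remember them, so verify your own tables and column names
 before running anything like it):

     import MySQLdb

     # Old integer glance ids by image name, e.g. from a pre-upgrade dump
     # of the glance database. Assumes image names are unique.
     old_ids_by_name = {'lucid-server': '7', 'precise-server': '12'}

     # Look up the new uuids glance assigned to the same images.
     glance = MySQLdb.connect(host='localhost', user='glance',
                              passwd='secret', db='glance')
     gcur = glance.cursor()
     gcur.execute("SELECT name, id FROM images WHERE deleted = 0")
     new_uuid_by_name = dict(gcur.fetchall())
     glance.close()

     old_to_new = dict((old_ids_by_name[n], new_uuid_by_name[n])
                       for n in old_ids_by_name if n in new_uuid_by_name)

     # Rewrite nova's references. instances.image_ref held the glance id
     # in essex; check the column name for your release.
     nova = MySQLdb.connect(host='localhost', user='nova',
                            passwd='secret', db='nova')
     ncur = nova.cursor()
     for old_id, new_uuid in old_to_new.items():
         ncur.execute("UPDATE instances SET image_ref = %s "
                      "WHERE image_ref = %s", (new_uuid, old_id))
     nova.commit()
     nova.close()

 And the custom script for item 3 is essentially of this shape (again a
 sketch; it assumes a MySQL nova database and the nova CLI configured via
 the usual environment variables):

     import subprocess
     import MySQLdb

     HOST = 'compute-node-01'   # the node that was just upgraded

     nova = MySQLdb.connect(host='localhost', user='nova',
                            passwd='secret', db='nova')
     cur = nova.cursor()
     cur.execute("SELECT uuid FROM instances "
                 "WHERE host = %s AND deleted = 0", (HOST,))
     for (uuid,) in cur.fetchall():
         # go through the normal API so the usual state handling applies
         subprocess.check_call(['nova', 'reboot', uuid])
     nova.close()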
 
 My plea is for the developers to think about how their changes are
 going to affect production deployments when upgrade time comes.
 
 It's fine that glance changed its id structure, but the upgrade should
 have handled that. If a user needs to go into the database in their
 deployment to fix your change, it's broken.
 
 The constant hardcoded hostname changes are totally unacceptable; if
 you change something like this it *must* be configurable, and there
 should be a warning that the default is changing.
 
 The removal of global groups was a major usability killer for users.
 The removal of the global groups wasn't necessarily the problem,
 though. The problem is that there were no alternative management
 methods added. There's currently no reasonable way to manage the
 infrastructure.
 
 I understand that bugs will crop up when a stable branch is released,
 but the LDAP implementation in keystone was missing basic
 functionality. Keystone simply doesn't work without roles. I believe
 this was likely due to the fact that the LDAP backend has basically no
 tests and that Keystone light was rushed in for this release. It's
 imperative that new required services at least handle the
 functionality they are replacing, when released.
 
 That said, excluding the above issues, my upgrade went fairly smoothly
 and this release is *way* more stable and performs *way* better, so
 kudos to the community for that. Keep up the good work!
 
 - Ryan
 


[Openstack] Melange RC2 now available

2012-04-02 Thread Troy Toman
The RC2 release for Melange is now available. You can find it at:

https://launchpad.net/melange/essex/essex-rc2

We fixed two issues between RC1 and RC2: one that prevented IP blocks from being 
deleted properly, and one that eases the transition for existing Melange DBs that 
were using Nova IDs instead of UUIDs.

Troy



[Openstack] Melange RC1 now available

2012-03-23 Thread Troy Toman
Melange 2012.1 RC1 is now available at:

https://launchpad.net/melange/essex/essex-rc1

The only updates are bug fixes at this point as new work is being directed 
towards a planned merge within the Quantum project in the Folsom timeframe. 
Please report any issues or concerns with this release soon. If none are found, 
we will release this as a final version by April 5th.

Troy Toman

Melange PTL


Re: [Openstack] [Netstack] Interaction between nova and melange : ip fixed not found

2012-02-29 Thread Troy Toman

On Feb 29, 2012, at 11:08 AM, Dan Wendlandt wrote:



2012/2/29 Jérôme Gallard jeronimo...@gmail.com
Hi Jason,

Thank you very much for your answer.
The problem with the wrong IP address is solved now! Perhaps this
octet should be excluded automatically by nova at network
creation time?

I agree that it seems reasonable to have the default exclude the .0 address.

We did have this discussion, and there are use cases where the first address in 
a block would be a usable address. So we ultimately opted not to make this 
the default. That can always be changed, but I wanted to let everyone know 
that it was considered.



Regarding the other problem about nova/melange: in fact, I create all
my networks with the nova-manage command:
nova-manage network create --label=public
--project_id=def761d251814aa8a10a1e268206f02d
--fixed_range_v4=172.16.0.0/24 --priority=0 
--gateway=172.16.0.1
But it seems that the nova.fixed_ips table is not being filled correctly.

When using melange, the nova DB is not used to store IP address allocations.  
They are stored in Melange.  We allow network create using nova-manage purely 
for backward compatibility.  The underlying implementation is totally 
different, with Nova effectively acting as a client to proxy calls to Quantum + 
Melange.  Hope that helps.

Dan



Thanks again,
Jérôme

On Tue, Feb 28, 2012 at 16:31, Jason Kölker 
jkoel...@rackspace.com wrote:
 On Tue, 2012-02-28 at 11:52 +0100, Jérôme Gallard wrote:
 Hi all,

 I use the trunk version of Nova, Quantum (with the OVS plugin) and Melange.
 I created networks, everything seems to be right.

 I have two questions :
 - the first VM I boot always takes a wrong IP address (for instance
 172.16.0.0). However, when I boot a second VM, that one takes a good
 IP (for instance 172.16.0.2). Do you know why this can happen?

 The default melange policy allows assignment of the network address and
 synthesises a gateway address (if one is not specified). It will not hand
 out the gateway address. The fix is to create an ip policy that
 restricts octet 0. I think the syntax is something like

 `melange policy create -t {tenant} name={block_name}
 desc={policy_name}` (this should return the policy_id for the next
 command)

 `melange unusable_ip_octet create -t {tenant} policy_id={policy_id}
 octet=0`

 `melange ip_block update -t {tenant} id={block_id}
 policy_id={policy_id}`


 - I have an error regarding a fixed IP not found. Effectively, when I
 check the nova database, the fixed_ips table is empty, but as I am using
 quantum and melange, their tables seem to be nicely filled. Do you
 have an idea about this issue?
 This is a copy/paste of the error:
 2012-02-28 10:45:53 DEBUG nova.rpc.common [-] received
 {u'_context_roles': [u'admin'], u'_context_request_id':
 u'req-461788a6-3570-4fa9-8620-6705eb69243c',
 u'_context_read_deleted': u'no', u'args': {u'address': u'172.16.0.2'},
 u'_context_auth_token': None, u'_context_strategy': u'noauth',
 u'_context_is_admin': True, u'_context_project_id': None,
 u'_context_timestamp': u'2012-02-28T09:45:53.484445',
 u'_context_user_id': None, u'method': u'lease_fixed_ip',
 u'_context_remote_address': None} from (pid=8844) _safe_log
 /usr/local/src/nova/nova/rpc/common.py:144
 2012-02-28 10:45:53 DEBUG nova.rpc.common
 [req-461788a6-3570-4fa9-8620-6705eb69243c None None] unpacked context:
 {'request_id': u'req-461788a6-3570-4fa9-8620-6705eb69243c',
 'user_id': None, 'roles': [u'admin'],
 'timestamp': '2012-02-28T09:45:53.484445', 'is_admin': True,
 'auth_token': None, 'project_id': None, 'remote_address': None,
 'read_deleted': u'no', 'strategy': u'noauth'} from (pid=8844)
 unpack_context /usr/local/src/nova/nova/rpc/amqp.py:187
 2012-02-28 10:45:53 DEBUG nova.network.manager
 [req-461788a6-3570-4fa9-8620-6705eb69243c None None] Leased IP
 |172.16.0.2| from (pid=8844) lease_fixed_ip
 /usr/local/src/nova/nova/network/manager.py:1186
 2012-02-28 10:45:53 ERROR nova.rpc.common [-] Exception during message
 handling
 (nova.rpc.common): TRACE: Traceback (most recent call last):
 (nova.rpc.common): TRACE: File /usr/local/src/nova/nova/rpc/amqp.py,
 line 250, in _process_data
 (nova.rpc.common): TRACE: rval = node_func(context=ctxt, **node_args)
 (nova.rpc.common): TRACE: File
 /usr/local/src/nova/nova/network/manager.py, line 1187, in lease_fixed_ip
 (nova.rpc.common): TRACE: fixed_ip =
 self.db.fixed_ip_get_by_address(context, address)
 (nova.rpc.common): TRACE: File /usr/local/src/nova/nova/db/api.py,
 line 473, in fixed_ip_get_by_address
 (nova.rpc.common): TRACE: return IMPL.fixed_ip_get_by_address(context, address)
 (nova.rpc.common): TRACE: File
 /usr/local/src/nova/nova/db/sqlalchemy/api.py, line 119, in wrapper
 (nova.rpc.common): TRACE: return f(*args, **kwargs)
 (nova.rpc.common): TRACE: File
 

[Openstack] Melange Essex-3 Milestone release

2012-01-26 Thread Troy Toman
The Melange Essex-3 milestone was released this morning. You can find details 
at:

https://launchpad.net/melange/essex/essex-3

This release focused on bug fixing, splitting out the python-melangeclient and 
tidying up integration with the Nova Quantum Manager. This was also the first 
release where Melange was part of the OpenStack build process. Thierry, Monty 
and James Blair were a huge help in getting this set up.

For E-4, we will be focused on improving the integration with Nova and Quantum, 
looking at scaling up to support service provider scale allocations and 
improving usability and documentation. I don't expect much in the way of new 
features. But, there may be some API adjustments as we optimize integration 
across systems.

Troy


[Openstack] First Melange Milestone Release

2011-12-30 Thread Troy Toman
I am pleased to announce that the first Melange milestone release is now 
available. This release has been vetted with the E2 releases of Quantum and 
Nova. You can find the release tarball along with more information about the 
Melange project at:

http://launchpad.net/melange

Melange is an incubated OpenStack project that is primarily focused on 
providing robust and flexible IP address management for OpenStack services. We 
have also expanded to handle additional network information such as MAC address 
allocation and static route tracking. Our initial target has been Nova/Quantum 
integration. Melange can currently be used with Nova through the QuantumManager 
network option.

Our near term focus is to improve the packaging by separating the Melange 
client code into a separate repo, enhancing our documentation and improving the 
scalability of the system. We are interested in getting more feedback and 
requirements for future development work. So, fire away!

Troy


[Openstack] Melange/IPAM update and path forward

2011-07-27 Thread Troy Toman
At the Diablo summit, one of the outcomes from the networking discussions was 
the decision to build a separate IP address management system (IPAM). The work 
to deliver on this requirement has been done under the project name Melange. 
Melange is envisioned to provide network information services that can track IP 
blocks, subnets, address allocation and also manage gateway and NAT'ing 
information. Our target has been to encapsulate the required IPAM features into 
a separate service with a RESTful API. This would provide a way to isolate IP 
management and also provide a future, centralized service for other OpenStack 
offerings.
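
To give a flavor of what "a separate service with a RESTful API" means for a 
caller like Nova, the interaction is plain HTTP along these lines (the host, 
port, paths and fields below are illustrative only; the draft API spec linked 
later in this note is the real reference):

    import httplib
    import json

    # Illustrative sketch of a caller allocating addresses from the IPAM
    # service over REST; paths and payloads are placeholders, not a spec.
    conn = httplib.HTTPConnection('melange.example.com', 9898)
    headers = {'Content-Type': 'application/json'}

    # create an IP block for a tenant
    body = json.dumps({'ip_block': {'cidr': '10.0.0.0/24', 'type': 'private'}})
    conn.request('POST', '/ipam/tenants/tenant-1/ip_blocks', body, headers)
    block = json.loads(conn.getresponse().read())['ip_block']

    # allocate an address from that block for an interface
    conn.request('POST', '/ipam/tenants/tenant-1/ip_blocks/%s/ip_addresses'
                 % block['id'], json.dumps({}), headers)
    print json.loads(conn.getresponse().read())
    conn.close()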

In discussions over the past few weeks with Vish and other Nova core devs, it 
was recommended that we bring the Melange effort into the Nova project. There 
were several reasons this felt like the right direction. First, there is a great 
deal of refactoring that needs to be done with Nova's current IP management code, 
and this approach would prevent doing it twice. The project is small enough that 
it doesn't seem to warrant a full-scale incubated project in FutureStack. Nova 
core devs could oversee the evolution and commits for Melange to ensure 
consistency and adequate coverage for Nova's needs.

Based on this feedback, we are proposing a plan with the following steps:

1. Bring Melange into a subfolder within Nova. It would not share any code 
with nova, but it does use ‘dev infrastructure’ like run_tests.sh and 
pip-installs of nova. There is no direct coupling between the two; Nova 
interacts with Melange through REST. The Melange development stack is similar 
to glance's.

2. Integrate Melange into a new NetStack network manager that would exploit 
both Melange and Quantum

3. Use Melange to refactor Nova features such as DHCP, Floating IPs, HA IP 
allocation, Shared IPs

With these ideas in mind, we will be making a merge prop within the next week 
or two to address step 1. There will be ample time to review code and comment 
on the merge prop. But I wanted to provide an opportunity to discuss this 
approach ahead of time and point you to some resources to find out more about 
Melange so that no one is unprepared.

There is a current Launchpad branch for Melange that can be found here:

https://code.launchpad.net/~raxnetworking/nova/melange

A draft API specification (still needs more formatting work but it's a start!):
http://wiki.openstack.org/MelangeAPIBase

Initial blueprints:

https://blueprints.launchpad.net/nova/+spec/melange-network-info-svcs
https://blueprints.launchpad.net/nova/+spec/melange-ipam
https://blueprints.launchpad.net/nova/+spec/melange-api
https://blueprints.launchpad.net/nova/+spec/melange-address-discovery

Please let me know if there are any other questions that we can field while we 
finalize our current work to prepare the code for a move into Nova.

Thank you,
Troy Toman


Re: [Openstack] [Netstack] Questions about Quantum_Framework's extensions

2011-06-09 Thread Troy Toman
Ying,

I can provide some insight since I am in the current timezone. Comments below. 
Perhaps Santhosh or our team can provide some additional thoughts later.

Troy

On Jun 9, 2011, at 4:10 PM, Ying Liu (yinliu2) wrote:

Hi all,

Thanks Santhosh for the great work on Quantum_Framework.

I looked at the code and have some questions.

1.  Extension standard.

In the netstack meeting, we agreed to adopt OpenStack's extension standard. 
Jorge is still working on the standard draft. I'm not sure whether this work 
followed the old extension mechanism or Jorge's presentation at the summit. One 
extension that could be added is an attribute extension to existing resources. 
In our use case, we need to create a Portprofile and later refer to it as a 
port's attribute. With the current framework, we can create a Portprofile as a 
resource through ResourceExtension. But how does the plug-in know about this 
extension and refer to the portprofile when the port is created?

This initial Quantum extensions merge prop essentially ports the current 
extension code that is in Nova and puts it into Quantum. I'm not completely 
sure how the current Nova extension code maps to Jorge's summit presentation, 
but I think it should be pretty close. Perhaps Jorge or someone from Team Titan 
can answer that.



   2. Relationship between extension and plug-in.

Extensions seem to be separate from the plug-in. If we want to extend an existing 
plug-in, can we do it through the extensions API? How is an extension made aware 
of the plug-in's existence? And how does the plug-in know about the new resources 
created by extensions?

Since Nova doesn't have plug-ins in the same way as Quantum, I don't know that 
we've considered that relationship fully. It is a good topic for discussion.


Best,
Ying






Re: [Openstack] [NetStack] Quantum Service API extension proposal

2011-05-23 Thread Troy Toman
I think the idea was slightly different. We were equating a vif to a NIC in a 
physical server. A port was equated to a switch port on a physical switch. That 
doesn't necessarily mean they have to be different, but there was a reason we 
used different terminology.

In particular, we felt the vif was something that would continue to be in the 
server's domain and managed within Nova. A port was a construct that is owned 
and managed by the network service (Quantum).

Troy

On May 23, 2011, at 3:56 PM, Debo Dutta (dedutta) wrote:

Quick question: it seems we are calling one end of the virtual wire a port and 
the other a vif. Is there a reason to do that? Can we just say that a wire 
connects 2 ports?

Also, another interesting network scenario is when there is a wire connecting 2 
ports and you have a tap (for all sorts of scenarios). I think the semantics of 
the tap might be quite basic.

Regards
debo

From: openstack-bounces+dedutta=cisco@lists.launchpad.net
[mailto:openstack-bounces+dedutta=cisco@lists.launchpad.net] On Behalf Of 
Alex Neefus
Sent: Monday, May 23, 2011 1:05 PM
To: Ying Liu (yinliu2); openstack@lists.launchpad.net
Subject: Re: [Openstack] [NetStack] Quantum Service API extension proposal

Hi All –

I wanted to lend support to this proposal; however, I don’t think we should be 
so quick to say this whole thing is an extension.

We benefit a lot from having a standard capabilities mechanism as part of our 
core Quantum API. I like Ying’s key/value method as well. I think it’s logical, 
clean and scalable. I propose that basic read access to “cap” on our major 
objects (network, port, interface) be included in our first release.

So in summary I would like to encourage us to add:
GET  /networks/{net_id}/conf
GET  /networks/{net_id}/ports/{port_id}/conf/
GET  {entity}/VIF/conf/

Each of these would return a list of keys.

Additionally Quantum base should support
GET  /networks/{net_id}/conf/{key}
GET  /networks/{net_id}/ports/{port_id}/conf/{key}
GET  {entity}/VIF/conf/{key}

Where {key} is the name of either a standard capability or an extension 
capability. We can define an error code now to designate a capability not 
supported by the plugin (e.g. 472 – CapNotSupported).

Finally we don’t need to standardize on every capability that might be 
supported if we provide this simple mechanism. Specific capabilities Key,Value 
sets can be added later but or included as vendor specific extensions.
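
For concreteness, a client reading capabilities under this proposal would look 
something like the sketch below (nothing here is implemented yet; the host, 
port and key names are made up, and the list-of-keys / single-value response 
shape is just my suggestion):

    import httplib
    import json

    # Hypothetical client for the proposed /conf resources; the paths
    # follow the GETs listed above, everything else is illustrative.
    conn = httplib.HTTPConnection('quantum.example.com', 9696)

    # list the capability keys the plugin exposes for a network
    conn.request('GET', '/networks/net-1234/conf')
    keys = json.loads(conn.getresponse().read())   # e.g. ["vlan", "qos"]

    # read a single key; an unsupported key would come back with the
    # proposed 472 CapNotSupported error instead of a value
    conn.request('GET', '/networks/net-1234/conf/qos')
    resp = conn.getresponse()
    if resp.status == 472:
        print 'plugin does not support qos'
    else:
        print json.loads(resp.read())
    conn.close()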

I’m happy to add this to the wiki if there is consensus. Rick/Dan – Maybe this 
should be a topic for Tuesdays meeting.

Alex

---
Alex Neefus
Senior System Engineer | Mellanox Technologies
(o) 617.337.3116 | (m) 201.208.5771 | (f) 617.337.3019





From: openstack-bounces+alex=mellanox@lists.launchpad.net
[mailto:openstack-bounces+alex=mellanox@lists.launchpad.net] On Behalf Of 
Ying Liu (yinliu2)
Sent: Saturday, May 21, 2011 1:10 PM
To: openstack@lists.launchpad.net
Subject: [Openstack] [NetStack] Quantum Service API extension proposal

Hi all,

We just posted a proposal for an OpenStack Quantum Service API extension on the 
community wiki page at
http://wiki.openstack.org/QuantumAPIExtensions?action=AttachFile&do=view&target=quantum_api_extension.pdf
or
http://wiki.openstack.org/QuantumAPIExtensions?action=AttachFile&do=view&target=quantum_api_extension.docx

Please review and let us know your comments/suggestions. An etherpad page has 
been created for API extension discussion: http://etherpad.openstack.org/uWXwqQNU4s

Best,
Ying





[Openstack] Discussion of network service flows

2011-05-17 Thread Troy Toman
As was mentioned in the networks meeting this afternoon, we need to open up a 
discussion around flows between Nova, Melange (IPAM), Quantum and Donabe. As we 
are refactoring Nova for networking and designing IP and Network services in 
parallel, it will be important to reach some agreement as to how the REST calls 
flow and who maintains which relationships. 

I've set up an etherpad: 

http://etherpad.openstack.org/network-flows

to host the discussion.

Troy



[Openstack] Organizing dev work around Melange (IP Address Mgmt Service)

2011-04-28 Thread Troy Toman
If anyone is interested in working on the IP Address Management (IPAM) Service, 
we have some time set up to dig into the nuts-and-bolts of getting this off the 
ground tomorrow. We'll be meeting in the Camino Real room (on the 2nd floor) at 
11:30AM (Friday). 

Also, I propose that we code name the project Melange. It maps well to the 
emerging notion of a mix of responsibilities in the network services space. It 
is also a nice reference point for any Dune fans.

Troy



Re: [Openstack] OpenStack Compute API 1.1

2011-02-15 Thread Troy Toman

On Feb 15, 2011, at 1:06 PM, Justin Santa Barbara wrote:


OK - so it sounds like volumes are going to be in the core API (?) - good.  
Let's get that into the API spec.  It also sounds like extensions (swift / 
glance?) are not going to be in the same API long-term.  So why do we have the 
extensions mechanism?

Until we have an implemented use case (i.e. a patch) that uses the extensions 
element, I don't see how we can spec it out or approve it.  So if you want it 
in v1.1, we better find a team that wants to use it and write code.  If there 
is such a patch, I stand corrected and let's get it reviewed and merged.

I would actually expect that the majority of the use cases that we want in the 
API but don't _want_ to go through core would be more simply addressed by 
well-known metadata (e.g. RAID-5, multi-continent replication, HPC, HIPAA).

I don't agree that the lack of a coded patch means we can't discuss an 
extension mechanism. But if you want a specific use case, we have at least one 
we intend to deliver. It may be more of a one-off than a general case because 
it is required to give us a reasonable transition path from our current 
codebase to Nova. But it is not an imagined need.

In the Rackspace Cloud Servers 1.0 API, we support a concept of backup 
schedules with a series of API calls to manage them. In drafting the OpenStack 
compute API, this was something that didn't feel generally applicable or useful 
in the core API. So, you don't see it as part of the CORE API spec. That said, 
for transition purposes, we will need a way to provide this capability to our 
customers when we move to Nova. Our current plan is to do this using the 
extension mechanism in the proposed API.
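
To make that concrete, what we have in mind is roughly the sketch below: a 
backup-schedule resource hanging off a server, carried by the extension 
mechanism rather than by core (the prefix, path and payload here are 
illustrative placeholders, not an agreed API):

    import httplib
    import json

    # Illustrative only: managing a backup schedule through a namespaced
    # compute API extension. Host, token, path and fields are made up.
    conn = httplib.HTTPConnection('compute.example.com', 8774)
    headers = {'Content-Type': 'application/json',
               'X-Auth-Token': 'TOKEN'}

    schedule = {'backupSchedule': {'enabled': True,
                                   'weekly': 'THURSDAY',
                                   'daily': 'H_0400_0600'}}
    conn.request('POST', '/v1.1/servers/1234/rax-backup-schedule',
                 json.dumps(schedule), headers)
    print conn.getresponse().status
    conn.close()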

If there is a better way to handle this need, then let's discuss further. But, 
I didn't want the lack of a specific example to squash the idea of extensions.

Troy Toman


