Re: [Openstack] Availability of RHEL build of Bexar release of OpenStack Nova

2011-02-28 Thread Ilya Alekseyev
Thierry, we could propose a libguestfs patch to trunk, but we have concerns with
it. First, there is no libguestfs package for Ubuntu, and the libguestfs project
is still looking for an Ubuntu maintainer (http://libguestfs.org/FAQ.html#binaries).
We could create a PPA if that is enough for now, but official libguestfs packages
will be necessary for the next OpenStack release. The second concern is
architectural: should we add a new flag for choosing between NBD and libguestfs?

2011/2/28 Thierry Carrez thie...@openstack.org

 Andrey Brindeyev wrote:
  Grid Dynamics is proud to announce public availability of OpenStack Nova
 RHEL 6.0 build.
  At the moment we have RPMs for Bexar release.
 
  It was tested using KVM hypervisor on real hardware in multi-node mode.
  Here are instructions to install & run our build:
  http://wiki.openstack.org/NovaInstall/RHEL6Notes

 Great work !

  - qcow2 support was enabled utilizing libguestfs instead of missing NBD

 Though almost everyone knows I don't like the injection business, using
 libguestfs instead of NBD sounds like a patch that could be welcome in
 trunk, given that NBD can be a bit difficult (see bug 719325)...

 --
 Thierry Carrez (ttx)
 Release Manager, OpenStack



Re: [Openstack] Availability of RHEL build of Bexar release of OpenStack Nova

2011-02-28 Thread Soren Hansen
2011/2/28 Thierry Carrez thie...@openstack.org:
 - qcow2 support was enabled utilizing libguestfs instead of missing NBD
 Though almost everyone knows I don't like the injection business, using
 libguestfs instead of NBD sounds like a patch that could be welcome in
 trunk, given that NBD can be a bit difficult (see bug 719325)...

As long as it's optional, it's OK. I'm not particularly fond of libguestfs.
At all. Using a virtual machine to marshal access to disk images is not my
idea of a good time.

-- 
Soren Hansen
Ubuntu Developer    http://www.ubuntu.com/
OpenStack Developer http://www.openstack.org/



Re: [Openstack] Queue Service, next steps

2011-02-28 Thread Eric Day
Hi Raphael,

On Mon, Feb 28, 2011 at 10:01:55AM +, Raphael Cohn wrote:
AMQP Observations
Your comments about AMQP seem to mostly be appropriate for one of the
older versions, e.g. 0-8, and I don't think they particularly apply to later
versions, e.g. 1-0. AMQP 0-8 did have some issues that didn't always make it
an optimal choice. For instance:

I was using the latest version of RabbitMQ and a few different client
APIs and analyzed the protocol exchange. I find it's usually best
to see what the latest software actually in use is doing, rather
than a spec that may not yet be implemented. It looks to have been
0.9.1 according to the header. With StormMQ leading the effort on free
clients, this certainly helps, and I'm sure more server implementations
will be popping up.

- Exchanges, etc are no longer part of the spec; queues can be transient,
with configurable transience and timeouts (eg destroy it 10s after the
last message was retrieved)

Ahh, interesting. This is pretty different from previous versions then.

- Configuration is part of the act of sending messages, not separate, eg
open, send to new queue, etc

Ahh, great.

Using HTTP
Whilst you can always put any asynchronous protocol over a synchronous
one, it doesn't always work out too well.  For example, starting on such
an approach means that any 'kernel' will be optimised for 'pulling' from a
queue, when an efficient queue server handling tens of thousands of
connections needs to be able to 'push' incoming messages, after filtering,
to their destinations. Pushing it all into the HTTP request is a sensible
approach for simple request-response protocols, but it's going to put a
heavy burden onto your queue server.

I would disagree here; a pull-based kernel can still be quite
efficient. Gearman, a different queue protocol/server, is pull-based
(basically long-polling), and the server I wrote could easily route
50k fully-synchronous messages/second on a 4-core machine. This
was also without any form of batching optimizations, which will be
part of the OpenStack queue service. The pull operations need to be
designed correctly so they are optimized for high throughput, which
as a result makes the protocol slightly more chatty for idle or
low-throughput connections.
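
To make the long-polling approach concrete, here is a rough sketch of a
worker pull loop (the endpoint, parameters, and response codes are
illustrative, not the actual service API):

    import requests  # any HTTP client works; requests is just for brevity

    QUEUE_URL = "http://queue.example.com/v1/queues/jobs/messages"  # hypothetical

    def worker_loop(process):
        while True:
            # Long-poll: the server holds the request open until a message
            # arrives or the wait expires, so an idle worker costs little
            # more than one open connection.
            resp = requests.get(QUEUE_URL, params={"wait": 30}, timeout=35)
            if resp.status_code == 204:  # nothing arrived; poll again
                continue
            # The worker only asks for work when it is free, so it can
            # process the message immediately.
            process(resp.json())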

The pull-based kernel also addresses a couple other problems typical
with distributed queue systems. The first is worker readiness. With
workers that may wish to connect to multiple servers for HA (messages
are spread based on hashing or geographic region), you don't want
the worker busy with a message when another server is trying to push
another message to it. If the worker always initiates the receipt
of a message, you can ensure it is able to process the message
immediately. Otherwise a queue server may push a message to a worker
and it could block until the worker is done with the current message
from the other server, delaying the response time. This same issue can
occur with a single server connection when the worker is unresponsive
due to latency, machine being busy, etc.

Another issue a pull-based kernel helps with is affinity for fast
workers and response times. By allowing workers to pull the message
after a long-poll, the fastest workers will be the ones doing the most
work. For HA solutions where you have multiple workers on multiple
servers, this allows the workers to naturally do the most work where
it is most efficient. This can be a very useful form of load balancing
that you get for free with pull-based queues.

Having said all this, it will still be very easy to add push-based
protocols on top of this, so protocols like AMQP should not be
difficult to add on.

RTT: This is almost irrelevant once you decide to use TLS. TLS setup and
teardown, essential for most cloud operations, is far more inefficient
than any protocol it tunnels. And anyone sending messages without
encryption should be shot. It's not acceptable to send other people's data
unsecured anymore (indeed, if it ever was).

I didn't count TCP and/or SSL/TLS RTTs since those will apply to
any protocol when security is a concern (unless you have a way of
preventing replay attacks built into the service). The synchronous
RTTs, regardless of the source, do matter though as these make the
user experience suffer. It sounds like the pipelining in the 1-0 spec
takes care of my concerns though.

201 Created, etc: What happens to your message if your TCP connection dies
mid-reply? How do you know if your message was queued? Is there a
reconciliation API?

See: http://wiki.openstack.org/QueueService#Behavior_and_Constraints

Basically, duplicates are possible for a number of reasons, this
being one of them. Workers need to be idempotent.
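
A minimal sketch of what idempotent means here (the message shape and the
store are illustrative):

    processed = set()  # in practice a durable store, not process memory

    def do_work(body):
        print(body)  # stand-in for the real side effect

    def handle(msg):
        # Duplicates can arrive (e.g. a 201 reply lost mid-connection and
        # the client re-sends), so key the side effect on the message id:
        # handling the same message twice must be a no-op.
        if msg["id"] in processed:
            return
        do_work(msg["body"])
        processed.add(msg["id"])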

Of course, some of these concerns could be addressed with HTTP sessions or
cookies, but that's quite nasty to use in most environments.

Yeah, we don't want to go there. :)


[Openstack] How to deal with 'tangential' bugs?

2011-02-28 Thread Justin Santa Barbara
Jay and I have been having an interesting discussion about how to deal with
bugs that mean that unit tests _should_ fail.  So, if I find a bug, I should
write a failing unit test first, and fix it (in one merge).  However, if I
can't fix it, I can't get a failing unit test merged into the trunk (because
it fails).  It may be that I can't get what I'm actually working on merged
with good unit tests until this 'tangential' bug is fixed.

(The discussion is here:
https://code.launchpad.net/~justin-fathomdb/nova/bug724623/+merge/51227)

I suggested that we introduce a "known_bugs" collection.  It would have a
set of values to indicate bugs that are known but not yet fixed.  Ideally
these would be linked to bug reports (we could mandate this).  When a
developer wants to write a test or behavior to work around a particular bug,
they can control it based on testing this collection ("if 'bug12345' in
known_bugs:").  When someone is ready to fix the bug, they remove the bug
from the collection, the unit tests then fail, and they fix the code and
commit with the known_bugs item removed.
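
A rough sketch of how this might look (the module path and assertions are
illustrative; bug724623 is the merge proposal linked above):

    # nova/tests/known_bugs.py (hypothetical module)
    known_bugs = set(['bug724623'])  # each entry should link to a bug report

    # In a test case:
    def test_authenticate(self):
        if 'bug724623' in known_bugs:
            # Suboptimal assertion that passes today.
            self.assertTrue(self.api.authenticate('user', 'user', 'user'))
        else:
            # The real assertion, which fails until the bug is fixed.
            self.assertTrue(self.api.authenticate('user', 'key', 'secret'))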

This would let people that find bugs but can't or don't want to fix them
still contribute unit tests.  This could be a QA person that can write tests
but not necessarily code the fix.  This could be a developer who simply
isn't familiar with the particular system.  Or it could be where the fix
needs to go through the OpenStack discussion process.  Or it could simply be
a train-of-thought / 'flow' issue.

Take, for example, my favorite OpenStack API authentication issue.  To get a
passing unit test with OpenStack authentication, my best bet is to set all
three values (username, api_key, api_secret) to the same value.  This,
however, is a truly terrible test case.  Having known_bugs marks my unit
test as being suboptimal; it lets me provide better code in the same place
(controlled by the known_bugs setting); and when the bug fixer comes to fix
it they easily get a failing unit test that they can use for TDD.

Jay (correctly) points out that this is complicated; can cause problems down
the line when the bug is fixed in an unexpected way; that known_bugs should
always be empty; and that the right thing to do is to fix the bug or get it
fixed.  I agree, but I don't think that getting the bug fixed before
proceeding is realistic in a project with as many stakeholders as OpenStack
has.

Can we resolve the dilemma?  How should we proceed when we find a bug but
we're working on something different?

Justin


Re: [Openstack] multi_nic and mac addresses

2011-02-28 Thread Trey Morris
@brian: the problem with a JSON field is that searching would be really
expensive if we ever need to pull MAC addresses from the db to ensure
uniqueness.
@Ilya: If I make a table, I plan on putting the MAC address, instance ID,
network ID, and, if zones are about ready, some sort of zone information in
the table.
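
Roughly something like this, assuming SQLAlchemy (column names and types
are just a sketch):

    from sqlalchemy import Column, Integer, MetaData, String, Table

    meta = MetaData()

    mac_addresses = Table('mac_addresses', meta,
        Column('id', Integer, primary_key=True),
        Column('address', String(17), nullable=False, unique=True),
        Column('instance_id', Integer, nullable=False),
        Column('network_id', Integer, nullable=False),
        Column('zone', String(255), nullable=True))  # pending the zones work

    # unique=True on address keeps the uniqueness check in the database
    # rather than scanning a JSON blob.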

-tr3buchet

On Sat, Feb 26, 2011 at 8:11 AM, Ilya Alekseyev ilyaalekse...@acm.org wrote:

 I think that a new table for NIC MAC addresses is a good idea, but we need in
 the table not only the instance ID, but also the network ID for each interface.
 That will be useful when implementing multi-nic in libvirt, for example.





Re: [Openstack] How to deal with 'tangential' bugs?

2011-02-28 Thread Dan Prince

I'm not a big fan of 'known bugs' in unit tests. Unit tests should always pass. 
How practical is it that I'm going to invest the time to write a unit test for 
a bug which I'm then not able to fix in the same merge? In many cases, writing 
the test case is actually harder than writing the code to fix the actual bug.
 
If you really need to write a test case ahead of time (perhaps even to make 
your case that a bug exists), why not just create a Launchpad bug and then 
attach the test case as a patch to the bug report?
 
Seems like 'known bugs' also provides a mechanism for potentially stale unit 
test code to hide out in the nova codebase. If code (including test code) isn't 
being actively used, then I'd actually prefer to not have it in the codebase.
 

Lastly, QA testers would probably focus more on the functional and integration 
types of testing rather than unit tests, right?
 
Dan
 


Re: [Openstack] How to deal with 'tangential' bugs?

2011-02-28 Thread Clay Gerrard
Unittest2 lets you define a test case that is expected to fail:
http://docs.python.org/library/unittest.html#unittest.expectedFailure

new in 2.7, but it could be possible to backport - or do something similar...
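
For example (the authenticate function is a stand-in, not real nova code):

    import unittest

    def authenticate(username, api_key, api_secret):
        return username == api_key == api_secret  # the buggy behavior

    class AuthTestCase(unittest.TestCase):

        @unittest.expectedFailure
        def test_distinct_credentials(self):
            # Fails today; the runner reports an expected failure, and an
            # "unexpected success" once the bug is fixed.
            self.assertTrue(authenticate('user', 'key', 'secret'))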

May have issues with nose:
http://code.google.com/p/python-nose/issues/detail?id=325


Re: [Openstack] OpenStack Compute API for Cactus (critical!)

2011-02-28 Thread Erik Carlin
John -

Are we just talking about compute aspects?  IMO, we should NOT be exposing 
block functionality in the OS compute API.  In Diablo, we will break out block 
into a separate service with its own OS block API.  That means for now, there 
may be functionality in nova that isn't exposed (an artifact of originally 
mimicking EC2) until we can fully decompose nova into independent services.

Erik

From: John Purrier j...@openstack.org
Date: Mon, 28 Feb 2011 14:16:20 -0600
To: openstack@lists.launchpad.net
Subject: [Openstack] OpenStack Compute API for Cactus (critical!)

Has anyone done a gap analysis against the proposed OpenStack Compute API and 
a) the implemented code, and b) the EC2 API?

It looks like we have had a breakdown in process, as the community review 
process of the proposed spec has not generated discussion of the missing 
aspects of the proposed spec.


Here is what we said on Feb 3 as the goal for Cactus:



OpenStack Compute API completed. We need to complete a working set of APIs 
that are consistent and inclusive of all the exposed functionality.

We need to *very* quickly identify the missing elements that are required in 
the OpenStack Compute API, and then discuss how we mobilize to get this work 
done for Cactus. As this is the #1 priority for this release, there are 
implications on milestone dates depending on the results of this exercise. The 
1.1 spec should be complete and expose all current Nova functionality (superset 
of EC2/RS).

Dendrobates, please take the lead on this, anyone who can help please 
coordinate with Rick. Can we get a fairly complete view by EOD tomorrow? Please 
set up a wiki page to identify the gaps, I suggest 3 columns (Actual code / EC2 
/ OpenStack Compute).



Thanks,



John



Re: [Openstack] How to deal with 'tangential' bugs?

2011-02-28 Thread Ewan Mellor
Python 2.7 has @unittest.skip and @unittest.skipUnless decorators.  Is this 
what you want?  You could write the failing unit test, and then mark it as 
skipped until the bug is fixed.  My only concern would be the Python 2.7 
dependency: we're still using 2.6 ourselves, so I'd ask that you write some 
backwards-compat code for that.

You could even have skipUnless(datetime.now() - datetime(2011,3,1) > 
timedelta(0)), so if someone promised you that they were going to fix it today, 
you could hold them to it!  (I don't recommend that, by the way, but I thought 
that it was fun ;-)
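
In code, that would look something like this (the test body is a stand-in):

    import unittest
    from datetime import datetime, timedelta

    class AuthTestCase(unittest.TestCase):

        # Skipped until 2011-03-01; after that it runs (and fails if unfixed).
        @unittest.skipUnless(datetime.now() - datetime(2011, 3, 1) > timedelta(0),
                             "fix promised for 2011-03-01")
        def test_distinct_credentials(self):
            self.assertTrue(False)  # stand-in for the real failing assertion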

Ewan.



Re: [Openstack] How to deal with 'tangential' bugs?

2011-02-28 Thread Mark Washenberger
I think it is commendable to identify bugs even if you can't fix them at the 
time. I hope that we don't create incentives to ignore bugs you find during 
development just to get your own merge through.

But I'm worried about staleness and usefulness with known bugs. If the known 
bugs test cases aren't actually running, they are basically just cosmetic. The 
code base can drift from them with absolutely no consequence.

Even if we run the known bug test cases and ensure that they fail, there is a 
lot of room for divergence. Usually a good unit test has a relatively small 
number of ways it can succeed, and a large number of ways it can fail. If we 
are just ensuring failure, what's to prevent the code base from drifting in a 
way that causes the test to fail in a different way?

To fix this last problem, known bugs test cases would actually have to become 
specific passing characterization tests that document undesirable behavior. At 
this point, the tests are no longer really useful when it comes time to fix the 
bug.

So I guess if you have a useful test case for a bug, just stick it in the bug 
report on launchpad if you are unable to fix it in a timely fashion or find 
someone to help you fix it.

Just my 2 cents, thanks.


[Openstack] server affinity

2011-02-28 Thread Gabe Westmaas
Hey All,

For various reasons, Rackspace has a need to allow customers to request 
placement in the same zone as another server.  I am trying to figure out if 
this is generically useful, or something that should be outside of core.  The 
idea is that if you don't specify an affinity ID, one will get returned to you 
when you create the server, and you can use that ID to add additional servers 
in close proximity to the first.

What do you think?  Is this useful enough outside Rackspace to be in core?  
Alternatively, we can write it as an extension so as not to clutter core.
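
To make that concrete, here is a rough sketch of the request/response shapes
I have in mind (field names are purely illustrative, not a spec):

    # First create: no affinity ID supplied, so one is generated and returned.
    create_req = {"server": {"name": "db-1", "imageRef": "42", "flavorRef": "2"}}
    create_resp = {"server": {"id": 1234, "affinityId": "a81dfc12"}}

    # Later create: pass the ID back to request placement close to db-1.
    second_req = {"server": {"name": "db-2", "imageRef": "42", "flavorRef": "2",
                             "affinityId": create_resp["server"]["affinityId"]}}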

Gabe




Re: [Openstack] How to deal with 'tangential' bugs?

2011-02-28 Thread Tim Simpson
The Skip plugin for nose offers similar functionality, which can be used with 
Python 2.6: 
http://somethingaboutorange.com/mrl/projects/nose/0.11.1/plugins/skip.html

Using this you can write decorators that raise SkipTest if a certain criterion 
isn't met.
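
For example, a decorator along these lines (the known_bugs set is the one
proposed earlier in this thread; names are illustrative):

    import functools
    from nose.plugins.skip import SkipTest

    known_bugs = set(['bug724623'])

    def skip_if_known_bug(bug_id):
        """Skip the decorated test while bug_id is still in known_bugs."""
        def decorator(func):
            @functools.wraps(func)
            def wrapper(*args, **kwargs):
                if bug_id in known_bugs:
                    raise SkipTest("blocked by %s" % bug_id)
                return func(*args, **kwargs)
            return wrapper
        return decorator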




Re: [Openstack] server affinity

2011-02-28 Thread Eric Day
Hi Gabe,

There has been a lot of discussion about this, along with zone naming,
structure, and so forth. I was proposing we not only make it part of
Nova, but suggest all projects use the same locality zone names/tags
to ensure cross-project locality.

So, yes, and don't make it nova-specific. :)

-Eric



Re: [Openstack] OpenStack Compute API for Cactus (critical!)

2011-02-28 Thread John Purrier
Hi Erik, today we have compute, block/volume, and network all encompassed in
nova. Along with image and object storage, these make up the whole of OpenStack
today. The goal is to see where we are at wrt the OpenStack API
(compute/network/volume/image) and coverage of the underlying implementation
as well as what is available through the EC2 API today.

 

I would propose that volume and network APIs be exposed not through the
core compute API, but as extensions. Once we create separate services and
factor network and volume services out of nova, these APIs will form the
core APIs for these services. We may also need to up-version these service
APIs between Cactus and Diablo, as they are currently under heavy discussion
and design.

 

John

 



Re: [Openstack] server affinity

2011-02-28 Thread Vishvananda Ishaya
This seems to overlap heavily with Justin's metadata stuff.  The idea was that 
you could pass in metadata on instance launch saying "near: other-object".  I 
think that is far more useful than an opaque affinity ID.

Vish

On Feb 28, 2011, at 2:53 PM, Gabe Westmaas wrote:

 Hi Eric,
 
 I probably chose a poor word there; this is actually referring to something 
 smaller than the multicluster zones that Sandy has been working on.  For 
 example, in case, for performance reasons, you wanted two servers with as 
 few network hops as possible.  If that still lines up with what you are 
 talking about, great.
 
 Sorry about that!
 
 Gabe
 


Re: [Openstack] OpenStack Compute API for Cactus (critical!)

2011-02-28 Thread Vishvananda Ishaya
We definitely will need to be able to create volumes, at the very least, without 
using EC2.  Justinsb has some prototype code available for this.

Vish



Re: [Openstack] OpenStack Compute API for Cactus (critical!)

2011-02-28 Thread Erik Carlin
That all sounds good.  My only question is around images.  Is Glance ready to 
be an independent service (and thus have a separate API) in Cactus?

Erik



Re: [Openstack] server affinity

2011-02-28 Thread Justin Santa Barbara
Yes - the use case I'm working towards is to use metadata to specify
"openstack:near=volume-01" when creating a machine, and I will provide a
scheduler that will take that information and assign you a machine, e.g.
in the same rack as the volume storage.  It's unclear right now whether this
metadata approach should be core OpenStack or not, but I figure I'll
contribute it and then we can debate exactly where we want to put it.
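
A very rough sketch of the scheduler side (the metadata key and the location
lookup are assumptions on my part, not settled API):

    NEAR_KEY = 'openstack:near'

    def pick_host(hosts, metadata, location_of):
        """hosts: candidate host names; metadata: the create-server metadata
        dict; location_of: resolves an object or host name to e.g. its rack.
        Best effort: falls back to any host if nothing qualifies as near."""
        target = metadata.get(NEAR_KEY)  # e.g. 'volume-01'
        if target is None:
            return hosts[0]
        rack = location_of(target)
        near = [h for h in hosts if location_of(h) == rack]
        return near[0] if near else hosts[0]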

I see this as complementary to Eric's proposal, which also makes sense to
me.  Hopefully my code will be re-usable here also (or if Eric commits
first, hopefully I can use his!)

Gabe: Can you give us more details on your use cases?  Would my proposal
work for you?  Would Eric's?  Any caveats with either?

Justin





Re: [Openstack] server affinity

2011-02-28 Thread Eric Day
Yup, Sandy's zone stuff, Justin's metadata stuff, and this are all
pretty much the same (or at least very closely related). First off,
let's move away from the term zone and define a location as an arbitrary
grouping of one or more resources, not the traditional availability
zones. Thinking in terms of hierarchical locations, a root location
could be the endpoint for an entire public service. A leaf location
could be a single compute host (or possibly even an instance on that
host). This location tag can be reused across OpenStack services. A
zone defines a location as well, but locations can be much more
granular.

When a request comes in that can take locality into account, you can
say "in location X", "near location X", "not in location X", etc. It
should be deployment-specific what "near" means, and other relationship
terms between locations may need to be defined. We also need to take
into consideration that some deployments want to keep topology opaque,
so some deployments may need a location-lookup service that translates
arbitrary locations to meaningful ones.

In a previous proposal I suggested we use DNS names, but folks said
it should stay pluggable to any arbitrary string. We can default to
using DNS names for simplicity though.

As an example, we could have:

Zones:
  example.com
  east.example.com
  west.example.com
Locations: all zones above, plus:
  dc1.east.example.com
    rack1.dc1.east.example.com
      compute1.rack1.dc1.east.example.com
        instance1.compute1.rack1.dc1.east.example.com
        ...
      compute2.rack1.dc1.east.example.com
      ...
    rack2.dc1.east.example.com
    ...
  dc2.east.example.com
  ...
  dc1.west.example.com
  ...

As mentioned before, all location names are subject to change at
any time (rebalancing, failover, ...), so if you want to use them
to place objects near one another, you should always look up the
location for the object when you make the request. A service may also
allow shortcuts, so you can say launch an instance near this other
instance, which internally will resolve locations before performing
the request. Cross-project shortcuts are probably not worth it,
as then every service would need to know how to resolve objects for
every other.

So, with all this, we simply need a piece of metadata (auto-generated
and updated by the service) named 'location' on all objects, and various
service schedulers will know how to operate on these with some set of
operations. Some may be required by the service (putting volumes
on the same networks as the instance, for example), but others can
be user-defined.
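
A sketch of the lookup-then-request pattern described above (the api object
and its methods are illustrative):

    def launch_near(api, other_instance_id, **create_args):
        # Location names can change at any time (rebalancing, failover),
        # so resolve the current location at request time; never cache it.
        loc = api.get_metadata(other_instance_id)['location']
        # e.g. 'compute1.rack1.dc1.east.example.com'
        return api.create_instance(location_hint={'near': loc}, **create_args)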

-Eric


Re: [Openstack] server affinity

2011-02-28 Thread Justin Santa Barbara
It's an open question whether 'meaningful tags' are treated as metadata with
a system-reserved prefix (e.g. openstack:), or whether they end up in a
separate area of the API.  The aws: prefix is already reserved by AWS in
their API, so we'll probably need to reserve it in ours as well or face
future incompatibility.

I'm in favor of the 'openstack:' prefix for simplicity.

I do agree that 'image type' could be one of these 'meaningful tags' also,
except for legacy-compatibility reasons.  Irrespective of the API, I think
it's nice to think about things this way.

Justin



On Mon, Feb 28, 2011 at 3:49 PM, Brian Lamar brian.la...@rackspace.com wrote:

 Just because I can't help but asking, when does data specified during
 instance creation stop being data and start being metadata? While it seems
 like a silly question, I'm wrestling with the idea of metadata actually
 *doing* something.

 I was under the (perhaps false) impression that metadata could be added by
 end-users and was a way to describe associated data about an object which
 didn't impact its being. For example, we don't set the image type with
 metadata; instances are created by providing an image type. Perhaps the two
 aren't analogous, because if openstack:near changes, would the instance
 migrate to another location? Or if volume-01 was moved, does the
 instance move too?

 -Brian



 -Original Message-
 From: Justin Santa Barbara jus...@fathomdb.com
 Sent: Monday, February 28, 2011 6:28pm
 To: openstack@lists.launchpad.net
 Subject: Re: [Openstack] server affinity

 Yes - the use case I'm working towards is to use metadata to specify
 openstack:near=volume-01 when creating a machine, and I will provide a
 scheduler that will take that information and will assign you a machine,
 e.g. in the same rack as the volume storage.  It's unclear right now whether
 this metadata approach should be core OpenStack or not, but I figure I'll
 contribute it and then we can debate exactly where we want to put it.

 I see this as complementary to Eric's proposal, which also makes sense to
 me.  Hopefully my code will be re-usable here also (or if Eric commits
 first, hopefully I can use his!)

 Gabe: Can you give us more details on your use cases?  Would my proposal
 work for you?  Would Eric's?  Any caveats with either?

 Justin
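
A rough sketch of the kind of scheduler described in this sub-thread; the rack attributes and the volumes table are hypothetical stand-ins for real topology data, not nova code:

    # Sketch of an affinity-aware host picker.  The "rack" attributes and
    # the volumes table are hypothetical stand-ins for real topology data.
    def pick_host(hosts, metadata, volumes):
        """Prefer a host in the same rack as an openstack:near target."""
        target = metadata.get('openstack:near')
        if target in volumes:
            wanted = volumes[target]['rack']
            same_rack = [h for h in hosts if h['rack'] == wanted]
            if same_rack:
                return same_rack[0]
        return hosts[0]  # no preference, or no match: any host will do

    hosts = [{'name': 'host1', 'rack': 'r1'}, {'name': 'host2', 'rack': 'r2'}]
    volumes = {'volume-01': {'rack': 'r2'}}
    print pick_host(hosts, {'openstack:near': 'volume-01'}, volumes)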



 On Mon, Feb 28, 2011 at 3:01 PM, Vishvananda Ishaya
 vishvana...@gmail.com wrote:

  This seems to overlap heavily with justin's metadata stuff.  The idea was
  that you could pass in metadata on instance launch saying near:
  other-object.  I think that is far more useful than an opaque affinity id.

  Vish
 

Re: [Openstack] server affinity

2011-02-28 Thread Brian Lamar
Interesting, I guess I just don't see the point of introducing additional 
complexities for gain I don't yet see. My example about 'image type' was meant 
to act as a deterrent against using metadata for OpenStack meaningful values. 

Instances, in my opinion, should be created explicitly with properties such as 
name, image type, size, affinity group, etc. because all of this data is of the 
same fiber...that is to say unless there is an explicit functional difference 
between how the properties behave, they should be defined in the same place.

Is this the sort of data that is stored in AWS's instance metadata? I haven't 
extensively used their service so I'm not familiar with how they distinguish 
between the function of a property defined at creation and metadata (aws:) 
properties.

Brian


Re: [Openstack] server affinity

2011-02-28 Thread Justin Santa Barbara

 Interesting, I guess I just don't see the point of introducing additional
 complexities for gain I don't yet see.


We can defer discussion until the patch lands, when you can see the gains
(or not!) :-)


 My example about 'image type' was meant to act as a deterrent against using
 metadata for OpenStack meaningful values.  Instances, in my opinion,
 should be created explicitly with properties such as name, image type, size,
 affinity group, etc. because all of this data is of the same fiber...that is
 to say unless there is an explicit functional difference between how the
 properties behave, they should be defined in the same place.


I agree - they should all be in the metadata area :-)  Sadly, AWS screwed
that up.


 Is this the sort of data that is stored in AWS's instance metadata? I
 haven't extensively used their service so I'm not familiar with how they
 distinguish between the function of a property defined at creation and
 metadata (aws:) properties.


I don't know how/whether AWS actually uses their reserved prefix publicly at
the moment (?).  However, we have no choice but to reserve the aws: prefix
or be incompatible down the road.  And if aws has a prefix in our API,
probably openstack shouldn't be the only API without a prefix, otherwise
it would be unhAPI.

Justin



Re: [Openstack] server affinity

2011-02-28 Thread Jay Pipes
On Mon, Feb 28, 2011 at 6:49 PM, Brian Lamar brian.la...@rackspace.com wrote:
 Just because I can't help but asking, when does data specified during 
 instance creation stop being data and start being metadata? While it seems 
 like a silly question I'm wrestling with the idea of metadata actually 
 *doing* something.

I've brought this up as well. Metadata is data about the data's
structure (or the data's container). Unfortunately, the CS API spec
uses metadata to refer to free-form *properties* or *attributes* of
the instance, and the decision was made to stick to what the API spec
said to avoid confusion (didn't exactly work out that way, I know ;) )

 I was under the (perhaps false) impression that metadata could be added by 
 end-users and was a way to describe associated data about an object which 
 didn't impact its being. For example, we don't set the image type with 
 metadata, instances are created by providing an image type. Perhaps the two 
 aren't analogous because if openstack:near changes the instance would 
 migrate to another location? Or if volume-01 was moved, does the 
 instance move too?

The things that have so far been described as metadata are not
metadata at all, nor are they metacontent (see below). They are
merely attributes of the end-user request to allocate some resource.

-jay

For those of a like-minded curiosity about these things, from the
Wikipedia article on this same subject:

The term Metadata is an ambiguous term which is used for two
fundamentally different concepts (Types). Although a trite expression
data about data is often used, it does not apply to both in the same
way. Structural metadata, the design and specification of data
structures, cannot be about data, because at design time the
application contains no data. In this case the correct description
would be data about the containers of data. Descriptive metadata on
the other hand, is about individual instances of application data, the
data content. In this case, a useful description (resulting in a
disambiguating neologism) would be data about data contents or
content about content thus Metacontent. Descriptive, Guide and the
NISO concept of Administrative metadata are all subtypes of
metacontent.



Re: [Openstack] server affinity

2011-02-28 Thread Jay Pipes
On Mon, Feb 28, 2011 at 10:45 PM, Jay Pipes jaypi...@gmail.com wrote:
 For those of a like-minded curiosity about these things, from the
 Wikipedia article on this same subject:

 The term Metadata is an ambiguous term which is used for two
 fundamentally different concepts (Types). Although a trite expression
 data about data is often used, it does not apply to both in the same
 way. Structural metadata, the design and specification of data
 structures, cannot be about data, because at design time the
 application contains no data. In this case the correct description
 would be data about the containers of data. Descriptive metadata on
 the other hand, is about individual instances of application data, the
 data content. In this case, a useful description (resulting in a
 disambiguating neologism) would be data about data contents or
 content about content thus Metacontent. Descriptive, Guide and the
 NISO concept of Administrative metadata are all subtypes of
 metacontent.

And for those wondering why the Glance project uses the term
metadata to describe data about the image, we, too, have a similar
terminology problem:

We delineate between the image *data*, which is the raw image file
itself, and image *metadata*, which is really data about the image
(like disk format, status, etc). To make matters worse, we have the
concept of image *properties* which are free-form key/value pairs
attached to the image.

So, neither Glance nor Nova uses the term *metadata* properly, for
what it's worth :)

-jay
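
To make the three Glance terms concrete, a rough illustration; the field names only approximate what Glance tracks, not its exact schema:

    # Illustration of the three layers described above; names are
    # approximate, not Glance's actual schema.
    image_data = 'raw image bytes ...'        # the image file itself

    image_metadata = {                        # data about the image
        'id': 42,
        'name': 'ubuntu-10.10-server',
        'disk_format': 'raw',
        'status': 'active',
    }

    image_properties = {                      # free-form key/value pairs
        'os_hint': 'linux',
        'built_by': 'qa-team',
    }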



Re: [Openstack] OpenStack Compute API for Cactus (critical!)

2011-02-28 Thread Devin Carlen
Erik,

Thanks for the clarification.  I'd just like to reiterate that official support 
for the EC2 API is something that needs to be handled in parallel, since we've 
committed to supporting it in the past.  


Best,


Devin

On Feb 28, 2011, at 7:53 PM, Erik Carlin wrote:

 Devin -
 
 In a decomposed service model, OS APIs are per service, so the routing is 
 straightforward.  For services that need to consume other services (e.g. The 
 compute service needs an IP from the network service), the queueing and 
 worker model remains the same, it's just that the network worker calls out to 
 the RESTful network service API (likely the admin API).
 
 For EC2 (and any other 3rd party API), the community is welcome to support 
 them, although I see them as secondary to the canonical OS APIs themselves.  
 Since the EC2 API combines a number of services, it is essentially a 
  composition API.  It probably makes sense to keep it in nova (i.e. compute) but 
 you are right, it would need to call out to glance, block, and network in the 
 diablo timeframe.
 
 What was attached was intended simply to show the general approach, not be a 
 detailed diagram of the API flows.  Once we complete the gap analysis John 
 has requested, these connections should become more clear.
 
 Erik
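
A sketch of the composition idea: one EC2-style operation fanning out to separate per-service OpenStack APIs. The endpoints, paths, and payloads here are hypothetical, not any project's actual API:

    # Hypothetical EC2-style composition layer over per-service OS APIs
    # (Python 2 / urllib2).  URLs and body shapes are illustrative only.
    import json
    import urllib2

    SERVICES = {
        'compute': 'http://api.compute.example.com/v1.1',
        'network': 'http://api.network.example.com/v1.1',
    }

    def call(service, path, body):
        req = urllib2.Request(SERVICES[service] + path, data=json.dumps(body),
                              headers={'Content-Type': 'application/json'})
        return json.loads(urllib2.urlopen(req).read())

    def run_instances(image_id):
        """One EC2 RunInstances request composed from per-service calls."""
        server = call('compute', '/servers', {'server': {'imageId': image_id}})
        call('network', '/ips', {'ip': {'serverId': server['server']['id']}})
        return server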
 
 From: Devin Carlen devin.car...@gmail.com
 Date: Mon, 28 Feb 2011 17:44:03 -0800
 To: Erik Carlin erik.car...@rackspace.com
 Cc: John Purrier j...@openstack.org, openstack@lists.launchpad.net 
 openstack@lists.launchpad.net
 Subject: Re: [Openstack] OpenStack Compute API for Cactus (critical!)
 
 Your diagram is deceptively simple because it makes no distinction about how 
 block API would be handled in the EC2 API, where compute and block operations 
 are very closely coupled.  In order for the diagram to convey the 
 requirements properly, it needs to show how compute/network/volume API 
 requests are routed by both the EC2 and OpenStack API.
 
 
 Devin
 
 
 On Feb 28, 2011, at 3:52 PM, Erik Carlin wrote:
 
 I was talking with Will Reese about this more.  If we are eventually going 
 to decompose into independent services with separate endpoints, he thought 
 we should do that now.  I like that idea better.  For cactus, we still have 
 a single nova service black box but we put multiple OpenStack API 
 endpoints on the front side, one for each future service.  In other words, 
 use separate endpoints instead of extensions in a single endpoint to expose 
 the current capabilities.  That way, it sets us on the right path and 
 consumers don't have to refactor between cactus and diablo.  In 
 diablo, we decompose into separate services and the endpoints move with 
 them.  It's a bit hard to visualize so I put together the attached pdf.  I'm 
 assuming glance is a separate service and endpoint for cactus (still need to 
 figure out per my message below) and swift already is.
 
 Erik 
 
 From: Erik Carlin erik.car...@rackspace.com
 Date: Mon, 28 Feb 2011 17:07:22 -0600
 To: John Purrier j...@openstack.org, openstack@lists.launchpad.net
 Subject: Re: [Openstack] OpenStack Compute API for Cactus (critical!)
 
 That all sounds good.  My only question is around images.  Is glance ready 
 to be an independent service (and thus have a separate API) in Cactus?
 
 Erik
 
 From: John Purrier j...@openstack.org
 Date: Mon, 28 Feb 2011 16:53:53 -0600
 To: Erik Carlin erik.car...@rackspace.com, openstack@lists.launchpad.net
 Subject: RE: [Openstack] OpenStack Compute API for Cactus (critical!)
 
 Hi Erik, today we have compute, block/volume, and network all encompassed in 
 nova. Along with image and object storage these make the whole of OpenStack 
 today. The goal is to see where we are at wrt the OpenStack API 
 (compute/network/volume/image) and coverage of the underlying implementation 
 as well as what is available through the EC2 API today.
  
 I would propose that volume and network API’s be exposed not through the 
 core compute API, but as extensions. Once we create separate services and 
 factor network and volume services out of nova these API’s will form the 
 core API’s for these services. We may also need to up-version these service 
 API’s between Cactus and Diablo as they are currently under heavy discussion 
 and design.
  
 John
  
 From: Erik Carlin [mailto:erik.car...@rackspace.com] 
 Sent: Monday, February 28, 2011 3:16 PM
 To: John Purrier; openstack@lists.launchpad.net
 Subject: Re: [Openstack] OpenStack Compute API for Cactus (critical!)
  
 John -
  
 Are we just talking about compute aspects?  IMO, we should NOT be exposing 
 block functionality in the OS compute API.  In Diablo, we will break out 
 block into a separate service with its own OS block API.  That means for 
 now, there may be functionality in nova that isn't exposed (an artifact of 
 originally mimicking EC2) until we can fully decompose nova into independent 
 services.
  
 Erik   
  
 From: John Purrier j...@openstack.org
 Date: Mon, 28 Feb 2011 

Re: [Openstack] How to deal with 'tangential' bugs?

2011-02-28 Thread Jay Pipes
On Mon, Feb 28, 2011 at 8:15 PM, Ewan Mellor ewan.mel...@eu.citrix.com wrote:
 If the “known_bugs” list isn’t being well received, how about this:

 # TODO(ewanm): Enable once bug #21212 is fixed

 if False:
     assert(something)

 And then put a comment on bug #21212 saying “please also enable the
 following unit tests when you fix this bug”.

Hmm, I think the above has just as much chance (or more!) of producing
stale unit tests.

My thinking on this issue remains the following, repeated from the bug in OP:

This is the way I deal with it:

When you run into the bug:
* If it is a bug ONLY in code that you have just added in your branch,
then just fix the bug
* If it is a bug that is reproducible in trunk (i.e. not *just* the
topic branch you're working in):
  (1) File the bug in Launchpad
  (2) Do not fix the bug in your topic branch
  (3) If you want to assign yourself to the bug you just filed, then:
a. Branch a bugfix branch from your local *trunk* branch (NOT your
topic branch)
b. Add a test case to your bugfix branch that will trigger the bug
c. Patch code to fix the bug and pass the test case
d. Push to LP with --fixes=lp:XXX where XXX is the bug number
e. Propose for merging your bugfix branch into trunk

At this point, return to working on your original topic branch and
continue coding. If you *must* have the fix for the bug you just
reported, then do:

bzr shelve --all
bzr merge ../bugXXX && bzr commit -m "Merge fix for XXX"
bzr unshelve 1

If you don't absolutely have to have the fix for bug XXX in your topic
branch, don't merge it, since there's really no need to...

-jay



Re: [Openstack] OpenStack Compute API for Cactus (critical!)

2011-02-28 Thread Erik Carlin
Thanks, Devin, for the reiteration.  I'm for EC2 API support; I just think that 
OS owning our own API specs is key if we are to innovate and drive open, 
standard per-service interfaces.

Erik


Re: [Openstack] server affinity

2011-02-28 Thread Mark Washenberger
This is great stuff. It sounds like there is a real distinction to be made 
between the data central to the APIs and the user-defined properties. Also, as 
time and compatibility allow, we should probably rename what we were calling 
metadata to properties or somesuch.



Re: [Openstack] OpenStack Compute API for Cactus (critical!)

2011-02-28 Thread Jesse Andrews
I'm also confused: that nova (compute/block/network) is in 1 repository 
doesn't mean it isn't 3 different services.

We've talked about moving the services inside nova to stop reaching inside 
each other via RPC calls and instead make HTTP calls.  But they are mostly 
already designed in a way that allows them to operate independently.

And I would also say that while rackspace may deploy with 3 endpoints, other 
openstack deployments might want to put multiple services behind a single 
endpoint.

Jesse

  
 From: John Purrier j...@openstack.org
 Date: Mon, 28 Feb 2011 14:16:20 -0600
 To: openstack@lists.launchpad.net
 Subject: [Openstack] OpenStack Compute API for Cactus (critical!)
  
 Has anyone done a gap analysis against the proposed OpenStack Compute API and 
 a) the implemented code, and b) the EC2 API?
  
 It looks like we have had a breakdown in process, as the community review 
 process of the proposed spec has not generated discussion of the missing 
 aspects of the proposed spec.
  
 Here is what we said on Feb 3 as the goal for Cactus:
  
 OpenStack Compute API completed. We need to complete a working set of API's 
 that are consistent and inclusive of all the exposed functionality.
  
 We need to *very* quickly identify the missing elements that are required in 
 the OpenStack Compute API, and then discuss how we mobilize to get this work 
 done for Cactus. As this is the #1 priority for this release there are 
 implications on milestones dates depending on the results of this exercise. 
 The 1.1 spec should be complete and expose all current Nova functionality 
 (superset of EC2/RS).
  
 Dendrobates, please take the lead on this, anyone who can help please 
 coordinate with Rick. Can we get a fairly complete view by EOD tomorrow? 
 Please set up a wiki page to identify the gaps, I suggest 3 columns (Actual 
 code / EC2 / OpenStack Compute).
  
 Thanks,
  
 John

Re: [Openstack] OpenStack Compute API for Cactus (critical!)

2011-02-28 Thread Paul Voccio
Jesse,

I agree that some implementations may want to have a single endpoint. I think 
this is doable with a simple proxy that can pass requests back to each 
service's API. This can also be accomplished by having configuration variables 
in your bindings that point to something like the following:

compute=api.compute.example.com
volume=api.volume.example.com
image=api.image.example.com
network=api.network.example.com

Or for behind the proxies:

compute=api.example.com
volume=api.example.com
image=api.example.com
network=api.example.com

Maybe this is something the auth services return?
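
A sketch of what such an auth response might carry; the catalog shape is speculative, not a settled format:

    # Hypothetical service catalog handed back by auth, letting bindings
    # discover per-service endpoints (or one shared proxy endpoint).
    service_catalog = {
        'compute': 'api.compute.example.com',
        'volume': 'api.volume.example.com',
        'image': 'api.image.example.com',
        'network': 'api.network.example.com',
    }

    def endpoint_for(service, catalog=service_catalog):
        # Fall back to one shared endpoint when a service has no entry,
        # which covers the behind-a-proxy deployment above.
        return catalog.get(service, 'api.example.com')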



Re: [Openstack-xenapi] Requesting input on vm-params for Nova

2011-02-28 Thread Cory Wright
Hi Ewan,

 This would then allow you to configure which OSes have NX turned on, etc.  Is 
 this something that you want to tackle, Cory?

I'm already working on a way to do this, which is what prompted me to
ask for input on the various vm-params.  :)

Thanks for the feedback.  I've consolidated our vm-params and removed
ones that are already defaults.

Cory

--
Cory Wright
Software Developer
cory.wri...@rackspace.com




On Thu, Feb 17, 2011 at 5:53 PM, Ewan Mellor ewan.mel...@eu.citrix.com wrote:
 -Original Message-
 From: Cory Wright
 Sent: 17 February 2011 20:41

 [Snip]

 Below are the vm-params from Nova that are different from what we
 (Rackspace) currently use:

 Nova:
   Linux:
     platform: {}

 The platform flags (acpi, apic, etc) are specific to HVM, and they have no 
 effect for PV domains, so this is fine.

     PV_args: 'noninteractive'

 I think this just applies to our pre-manufactured Debian Etch VM, and really 
 shouldn't be here.  Salvatore, you added this as far as I can tell -- what 
 was the thinking here?

   Windows:
     platform: {'acpi': 'true', 'apic': 'true',
                'pae': 'true', 'viridian': 'true'}

 These are good enough, but ideally we'd set these on a per-OS basis.  
 XenServer sets these using the appropriate template to match the OS that's 
 going to be installed, so you can see what differences there might be by 
 listing our templates and getting the recommended set of flags.  Most of the 
 time, this set is fine, but you might see performance degradation on older 
 versions of Windows if you don't match what we do in the appropriate 
 XenServer template.

 We currently don't have anywhere to put these flags, so we end up with this 
 hardcoded list.  It would be good to improve this.  We could figure out a 
 scheme whereby some metadata in Glance (e.g. an OS hint) turned into the 
 appropriate HVM platform flags.  This would then allow you to configure which 
 OSes have NX turned on, etc.  Is this something that you want to tackle, Cory?
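
A sketch of the Glance-hint idea floated above; the property key and the lookup table are made up for illustration, with the Windows flags mirroring the hardcoded set mentioned earlier in the thread:

    # Hypothetical mapping from an OS hint (a Glance image property)
    # to the HVM platform flags set on the XenServer VM record.
    PLATFORM_FLAGS = {
        'windows': {'acpi': 'true', 'apic': 'true',
                    'pae': 'true', 'viridian': 'true'},
        'linux': {},   # platform flags have no effect for PV domains
    }

    def platform_for(image_properties):
        hint = image_properties.get('os_hint', 'linux')
        return PLATFORM_FLAGS.get(hint, {})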

   Both:
     other_config: {}
     user_version: '0'

 These are the defaults, so it makes no difference specifying them.

 Also, we supply the following vm-params to all instances when booting
 but they are not currently used in Nova.

   ha_always_run: true,

 This says, if the VM shuts down, for whatever reason, then immediately start 
 it again.  I presume this isn't what you want, but I presume that it isn't 
 taking effect because you don't have HA turned on.  If my assumptions are 
 correct, then I'd set this to false, for clarity.

   blocked_operations: {}
   ha_restart_priority: 
   tags: []
   xenstore_data: {},

 All four are the default, so no effect.

 Cheers,

 Ewan.


