Re: [Openstack] Nova Volume and provisionning on iSCSI SAN

2012-08-02 Thread Thomas, Duncan
I guess you might need to port one of the other iSCSI-based drivers (e.g. 
lefthand) to use whatever creation/deletion/access control mechanisms your Dell 
SAN uses... This does not look to be a significant amount of work, but such 
commands aren't generally standardized so would need to be done for your 
specific SAN.

--
Duncan Thomas
HP Cloud Services, Galway

From: openstack-bounces+duncan.thomas=hp@lists.launchpad.net 
[mailto:openstack-bounces+duncan.thomas=hp@lists.launchpad.net] On Behalf 
Of Bilel Msekni
Sent: 02 August 2012 10:32
To: openstack@lists.launchpad.net
Subject: [Openstack] Nova Volume and provisionning on iSCSI SAN

Hi all,
I have a question relating to nova-volume, and provisioning block devices as 
storage for VMs. As I understand it from the documentation, nova-volume will 
take a block device with LVM on it, and then become an iSCSI target to share 
the logical volumes to compute nodes. I also understand that there is another 
process for using an HP lefthand SAN or solaris iSCSI setup, whereby 
nova-volume can interact with APIs for volume creation on the SAN itself.

I have a dell iSCSI SAN, and I can see that I'd be able to mount a LUN from the 
SAN on my nova-volume node, then go through the documented process of creating 
an LVM on this LUN and having nova-volume re-share it over iSCSI to the compute 
nodes, but what I'm wondering is whether I can have the compute nodes simple 
connect to the iSCSI SAN to access these volumes (which would be created and 
managed by nova-volume still), rather than connect each compute node to the 
iSCSI target which nova-volume presents? I imagine with this setup, I could 
take advantage of the SAN's HA and performance benefits.

Hope that makes sense..
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [nova] [cinder] Nova-volume vs. Cinder in Folsom

2012-07-17 Thread Thomas, Duncan
Jay Pipes on 16 July 2012 18:31 wrote:
 
 On 07/16/2012 09:55 AM, David Kranz wrote:


 Sure, although in this *particular* case the Cinder project is a
 bit-for-bit copy of nova-volumes. In fact, the only thing really of
 cause for concern are:
 
 * Providing a migration script for the database tables currently in the
 Nova database to the Cinder database
 * Ensuring that Keystone's service catalog exposes the volume endpoint
 along with the compute endpoint so that volume API calls are routed to
 the right endpoint (and there's nothing preventing a simple URL rewrite
 redirect for the existing /volumes calls in the Compute API to be
 routed
 directly to the new Volumes endpoint which has the same API

Plus stand up a new rabbit HA server. 
Plus stand up a new HA database server. 
Plus understand the new availability constraints of the nova-cinder interface 
point
Plus whatever else I haven't scoped yet

And there are bug fixes and correctness fixes slowly going into Cinder, so it 
is not a bit--for-bit copy any longer...

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [nova] [cinder] Nova-volume vs. Cinder in Folsom

2012-07-12 Thread Thomas, Duncan
We've got volumes in production, and while I'd be more comfortable with option 
2 for the reasons you list below, plus the fact that cinder is fundamentally 
new code with totally new HA and reliability work needing to be done 
(particularly for the API endpoint), it sounds like the majority is strongly 
favouring option 1...

--
Duncan Thomas
HP Cloud Services, Galway

From: openstack-bounces+duncan.thomas=hp@lists.launchpad.net 
[mailto:openstack-bounces+duncan.thomas=hp@lists.launchpad.net] On Behalf 
Of Flavia Missi
Sent: 11 July 2012 20:56
To: Renuka Apte
Cc: Openstack (openstack@lists.launchpad.net) (openstack@lists.launchpad.net)
Subject: Re: [Openstack] [nova] [cinder] Nova-volume vs. Cinder in Folsom

For me it's +1 to 1, but...

Here at Globo.com we're already deploying clouds based on openstack (not in 
production yet, we have dev and lab), and it's really painful when openstack 
just forces us to change, I mean, sysadmins are not that happy, so I think 
it's more polite if we warn them in Folsom, and remove everything next. Maybe 
this way nobody's going to fear the update. It also make us lose the chain of 
thought.. you're learning, and suddenly you have to change something for an 
update, and then you come back to what you we're doing...
--
Flavia

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [metering] Cinder usage data retrieval

2012-06-21 Thread Thomas, Duncan
John Griffith on 20 June 2012 18:26 wrote:


 On Wed, Jun 20, 2012 at 10:53 AM, Nick Barcet
 nick.bar...@canonical.com wrote:
  What we want is to retrieve the maximum amount of data, so we can
 meter
  things, to bill them in the end. For now and for Cinder, this would
  first include (per user/tenant):
  - the amount of reserved volume space
  - the amount of used volume space
  - the number of volumes
  but we'll need probably more in a near future.


 We should chat about how things are shaping up so far and how you're
 implementing things on the other sides (consistency where
 practical/possible).  Also, it sort of depends on the architecture and
 use model details of Ceilometer, which I hate to admit but I'm not
 really up to speed on.
 
 My first reaction/thought is the best most appropriate place to tie in
 is via the python-cinderclient.  There would be a number of ways to
 obtain some of this info, whether deriving it or maybe some extensions
 to obtain things directly.

One thing to watch for here is access control... Don't want one tenant 
able to find out about another's usage. Probably not important on a private
cloud deployment, but certainly important in the public cloud space. Having
a separate endpoint to do this kind of admin stuff over also means you can
have much tighter IP level access controls...

-- 
Duncan Thomas
HP Cloud Services, Galway


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Deleting a volume stuck in attaching state?

2012-06-20 Thread Thomas, Duncan
nova-manage volume delete on a nova host works for this, though if the attach 
operation is still underway then this might cause some weirdness. If the attach 
cast to nova compute has timeouted out or been lost / errored then nova-manage 
volume delete should do the job.

-- 
Duncan Thomas
HP Cloud Services, Galway


 -Original Message-
 From: openstack-bounces+duncan.thomas=hp@lists.launchpad.net
 [mailto:openstack-bounces+duncan.thomas=hp@lists.launchpad.net] On
 Behalf Of John Griffith
 Sent: 20 June 2012 05:02
 To: Lars Kellogg-Stedman
 Cc: openstack@lists.launchpad.net
 Subject: Re: [Openstack] Deleting a volume stuck in attaching state?
 
 On Tue, Jun 19, 2012 at 7:40 PM, Lars Kellogg-Stedman
 l...@seas.harvard.edu wrote:
  I attempted to attach a volume to a running instance, but later
  deleted the instance, leaving the volume stuck in the attaching
  state:
 
   # nova volume-list
   ++---+--+--+-+-+
   | ID |   Status  | Display Name | Size | Volume Type | Attached to |
   ++---+--+--+-+-+
   | 9  | attaching | None         | 1    | None        |             |
   ++---+--+--+-+-+
 
  It doesn't appear to be possible to delete this with nova
  volume-delete:
 
   # nova volume-delete
    nova volume-delete 9
    ERROR: Invalid volume: Volume status must be available or error
 (HTTP 400)
 
  Other than directly editing the database (and I've had to do that an
  awful lot already), how do I recover from this situation?
 
  --
  Lars Kellogg-Stedman l...@seas.harvard.edu       |
  Senior Technologist                                |
 http://ac.seas.harvard.edu/
  Academic Computing                                 |
 http://code.seas.harvard.edu/
  Harvard School of Engineering and Applied Sciences |
 
 
  ___
  Mailing list: https://launchpad.net/~openstack
  Post to     : openstack@lists.launchpad.net
  Unsubscribe : https://launchpad.net/~openstack
  More help   : https://help.launchpad.net/ListHelp
 
 Hi Lars,
 
 Unfortunately manipulating the database might be your best bet for
 now.  We do have plans to come up with another option in the Cinder
 project, but unfortunately that won't help you much right now.
 
 If somebody has a better method, I'm sure they'll speak up and reply
 to this email, but I think right now that's your best bet.
 
 Thanks,
 John
 
 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Invoking Nova commands remotely

2012-06-08 Thread Thomas, Duncan
nova list and other nova * commands work by making http (or https) 
connections to your api node. Any node that has network access to it can make 
calls just fine as long as you've got the right environment variables set, 
which include: NOVA_URL, NOVA_PROJECT_ID, NOVA_USERNAME etc.

nova-manage is a bit more deeply imbedded into the nova internal 
architecture. Setting up a novarc with ampq details and database connection 
details will probably allow it to work, but I wouldn't be surprised ifyou ran 
into some weird behavior...

--
Duncan Thomas
HP Cloud Services, Galway

From: openstack-bounces+duncan.thomas=hp@lists.launchpad.net 
[mailto:openstack-bounces+duncan.thomas=hp@lists.launchpad.net] On Behalf 
Of Neelakantam Gaddam
Sent: 08 June 2012 06:32
To: openstack@lists.launchpad.net
Subject: [Openstack] Invoking Nova commands remotely

Hi All,

I have a setup with one openstack controller with multiple compute node. In a 
separate node, say monitor node, I want to invoke the nova commands on 
openstack controller. Do we need to run any specific components of openstack on 
this node to achieve this ?

Is there any way to invoke the nova commands such as nova-manage, nova list.. 
etc remotely?

Thanks in advance.


--
Thanks  Regards
Neelakantam Gaddam
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [nova][glance] making nova api more asynchronous

2012-06-08 Thread Thomas, Duncan
The weakness of all of our current async calls (e.g. nova boot) is that there 
is no route to get the details of what failed... when my sever comes up in 
'error', I'd really like to know why... is it a system error? Broken image? 
Temporary glitch? There doesn't seem to be a channel for this kind of 
information...

-- 
Duncan Thomas
HP Cloud Services, Galway


 -Original Message-
 From: openstack-bounces+duncan.thomas=hp@lists.launchpad.net
 [mailto:openstack-bounces+duncan.thomas=hp@lists.launchpad.net] On
 Behalf Of Gabe Westmaas
 Sent: 08 June 2012 06:01
 To: openstack@lists.launchpad.net
 Subject: [Openstack] [nova][glance] making nova api more asynchronous
 
 Hey all,
 
 I was looking through some of the api calls, in particular creating a
 server on the openstack api.  I'm not too excited by how long it takes,
 and was wondering what people think about making it more asynchronous.
 Basically, wondering if making it so that the POST to the nova API
 doesn't
 actually check to make sure we have a valid image, but instead just
 assume
 the user knew what they were doing and pass the image down is something
 that people buy.  If the image is invalid (doesn't exist, no permission
 to
 use that image, flavor size doesn't match the minimum requirements of
 the
 image) then we would set the server to error and be done.
 
 There is an obvious drawback: losing the ability to fail fast.  I think
 we
 need to look at embracing the overall asynchronous nature of the API as
 much as possible, and rely on client side tools to do upfront
 validation
 where appropriate - nova client would already prevent this from
 happening
 on a nonexistent image, for example.
 
 The API spec doesn't specify anything one way or the other about this -
 so
 another route is to make this a configuration option.
 
 By the way, I don't want this to be construed in any way, shape or form
 as
 me not wanting to improve the performance of glance.  Of course we want
 that, but we also know that any additional latency hurts performance.
 
 Thoughts? Are the benefits of fail fast out-weighing our performance
 needs?
 
 
 Gabe
 
 
 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Swift Consistency Guarantees?

2012-02-01 Thread Thomas, Duncan
Mark Nottingham on 01 February 2012 05:19 wrote:
 On 31/01/2012, at 2:48 PM, andi abes wrote:
 
  The current semantics allow you to do
 
  1) the the most recent cached copy, using the http caching mechanism.
 This will ignore any updates to the swift cluster, as long as the cache
 is not stale
 
  2) get a recent copy from swift  (when setting no cache)
 
  3) do a quorum call on all the storage nodes to get the most accurate
 answer swift can provide.
 
 
  You're proposing that 2  3 are the same, since they're both
 different than 1. But their performance implications on 2  3 are quite
 different.
 
 Effectively. My point, however, is that inventing new mechanisms --
 especially new headers -- should be avoided if possible, as they
 generally cause more trouble than they're worth.
 
 Is there really a use case for #2 being distinct from #3?
 
 If there is, it'd be better expressed as a new Cache-Control request
 directive (e.g., Cache-Control: authoritative), next time things get
 revised.

It isn't a caching directive though, it's asking for a change of behavior on 
the part of the swift server...

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack-volume] adding auxiliary specs to volume create

2011-10-27 Thread Thomas, Duncan
Initial thought up at http://wiki.openstack.org/VolumeAffinity
--
Duncan Thomas
HP Cloud Services, Galway

From: openstack-volume-bounces+duncan.thomas=hp@lists.launchpad.net 
[mailto:openstack-volume-bounces+duncan.thomas=hp@lists.launchpad.net] On 
Behalf Of Thomas, Duncan
Sent: 25 October 2011 17:47
To: Vladimir Popovski; openstack-volume@lists.launchpad.net
Subject: Re: [Openstack-volume] adding auxiliary specs to volume create

Since affinity can be expressed as key-value pairs, that works fine for me too.

In our case no scheduler decision is needed, any volume diver instance will do, 
but I guess for non-fully-connected storage then it becomes a scheduling 
decision, so the interface should be something that the scheduler can make a 
decision on. I guess I need to go look at the scheduler code carefully and 
figure out how we might do the interactions.

I'll have a read of your updated blueprint and comment. It seems to me that any 
setup where a generic scheduler cannot make sensible decisions is not a great 
design, but I am not that militant on the subject.

Regards

--
Duncan Thomas
HP Cloud Services, Galway

From: Vladimir Popovski [mailto:vladi...@zadarastorage.com]
Sent: 25 October 2011 17:31
To: Thomas, Duncan; openstack-volume@lists.launchpad.net
Subject: RE: [Openstack-volume] adding auxiliary specs to volume create

Hi Duncan,

I was thinking of using it as a requirements field for volume scheduling. In 
this case it will be not a free-text, but dictionary of key/value pairs. Unlike 
regular volume metadata field where user/app could put anything, this one will 
be forwarded to scheduler and treated as a special set of requirements.

However, I had a conversation with Clay yesterday and I agree that we could 
reuse metadata fields for that (this is pretty much what we are doing in our 
VSA scheduler). It means that vendor will need to create its own sub-scheduler 
who will know what fields it should look for in the meta-data.

Regards,
-Vladimir


From: Thomas, Duncan [mailto:duncan.tho...@hp.commailto:duncan.tho...@hp.com]
Sent: Tuesday, October 25, 2011 9:21 AM
To: Vladimir Popovski; 
openstack-volume@lists.launchpad.netmailto:openstack-volume@lists.launchpad.net
Subject: RE: [Openstack-volume] adding auxiliary specs to volume create

Hi

I'm still working on the blueprint for affinity, but an optional free-text 
field that gets passed through to the scheduler and driver should be fine from 
a user api point-of-view.

Regards

--
Duncan Thomas
HP Cloud Services, Galway

From: 
openstack-volume-bounces+duncan.thomas=hp@lists.launchpad.netmailto:openstack-volume-bounces+duncan.thomas=hp@lists.launchpad.net
 
[mailto:openstack-volume-bounces+duncan.thomas=hp@lists.launchpad.net]mailto:[mailto:openstack-volume-bounces+duncan.thomas=hp@lists.launchpad.net]
 On Behalf Of Vladimir Popovski
Sent: 24 October 2011 20:02
To: 
openstack-volume@lists.launchpad.netmailto:openstack-volume@lists.launchpad.net
Subject: [Openstack-volume] adding auxiliary specs to volume create

Team,

I would like to propose a change to volume-type aware scheduler we are 
developing. After last meeting it was unclear how we will treat volume affinity 
or other types of per-volume specifications.
What I think we could do is to add an additional parameter (optional) to create 
volume API that will hold auxiliary specs for the volume. This parameter will 
be forwarded to scheduler as one of arguments. The generic scheduler will 
combine key/value pairs from extra specs together with these auxiliary specs 
and will perform a search based on combined list.

Pls let me know what do you think about it? I've already update a BP spec with 
this proposal.

Regards,
-Vladimir
-- 
Mailing list: https://launchpad.net/~openstack-volume
Post to : openstack-volume@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack-volume
More help   : https://help.launchpad.net/ListHelp