I guess you might need to port one of the other iSCSI-based drivers (e.g.
lefthand) to use whatever creation/deletion/access control mechanisms your Dell
SAN uses... This does not look to be a significant amount of work, but such
commands aren't generally standardized so would need to be done
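A port along these lines would mostly mean wrapping the SAN's own commands behind the common driver interface. The sketch below is illustrative only: `BaseIscsiDriver` is a minimal stand-in for the shared iSCSI driver shape of that era, and `DellSanDriver` plus the injected `san_run_cmd` callable are hypothetical placeholders, not a real driver or API.

```python
class BaseIscsiDriver:
    """Minimal stand-in for the common iSCSI driver interface."""

    def create_volume(self, volume):
        raise NotImplementedError

    def delete_volume(self, volume):
        raise NotImplementedError

    def initialize_connection(self, volume, connector):
        raise NotImplementedError


class DellSanDriver(BaseIscsiDriver):
    """Hypothetical port: each method wraps whatever command the SAN exposes."""

    def __init__(self, san_run_cmd):
        # san_run_cmd is injected so the vendor-specific transport
        # (ssh, REST, ...) stays out of the driver logic.
        self._run = san_run_cmd

    def create_volume(self, volume):
        # The non-standardized part: the actual command differs per vendor.
        return self._run('volume', 'create', volume['name'],
                         '--size', str(volume['size']))

    def delete_volume(self, volume):
        return self._run('volume', 'delete', volume['name'])

    def initialize_connection(self, volume, connector):
        # Access control: grant the initiator IQN access to the volume.
        self._run('access', 'add', volume['name'], connector['initiator'])
        return {'driver_volume_type': 'iscsi',
                'data': {'target_iqn': 'iqn.example:' + volume['name']}}
```

The create/delete/access-control methods are the only places vendor commands appear, which is why porting an existing driver is mostly a matter of swapping those command strings.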
Jay Pipes on 16 July 2012 18:31 wrote:
On 07/16/2012 09:55 AM, David Kranz wrote:
Sure, although in this *particular* case the Cinder project is a
bit-for-bit copy of nova-volumes. In fact, the only real causes for
concern are:
* Providing a migration script for the database
We've got volumes in production, and while I'd be more comfortable with option
2 for the reasons you list below, plus the fact that cinder is fundamentally
new code with totally new HA and reliability work that needs to be done
(particularly for the API endpoint), it sounds like the majority is
John Griffith on 20 June 2012 18:26 wrote:
On Wed, Jun 20, 2012 at 10:53 AM, Nick Barcet
nick.bar...@canonical.com wrote:
What we want is to retrieve the maximum amount of data, so we can meter
things, to bill them in the end. For now and for Cinder, this would
first include (per
nova-manage volume delete on a nova host works for this, though if the attach
operation is still underway then this might cause some weirdness. If the attach
cast to nova-compute has timed out or been lost / errored, then nova-manage
volume delete should do the job.
--
Duncan Thomas
HP
nova list and other nova * commands work by making http (or https)
connections to your api node. Any node that has network access to it can make
calls just fine as long as you've got the right environment variables set,
which include: NOVA_URL, NOVA_PROJECT_ID, NOVA_USERNAME etc.
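In other words, the CLI is just an HTTP client configured from the environment. The sketch below shows that shape using only the standard library; the environment variable names match the ones above, but the auth header names here are illustrative, not the exact wire protocol of any particular auth middleware (and the original-era client was Python 2, while this sketch is modern Python 3).

```python
import os
import urllib.request


def build_nova_request(path, env=os.environ):
    """Build an API request from the same environment the CLI reads."""
    base = env['NOVA_URL'].rstrip('/')
    # Credentials come from the environment on every call, which is why
    # any node with network access to the API node works equally well.
    headers = {
        'X-Auth-User': env['NOVA_USERNAME'],
        'X-Auth-Project-Id': env['NOVA_PROJECT_ID'],
    }
    return urllib.request.Request(base + '/' + path.lstrip('/'),
                                  headers=headers)
```

nova-manage, by contrast, talks to the database and message queue directly, so it only works where those are reachable and configured.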
nova-manage
The weakness of all of our current async calls (e.g. nova boot) is that there
is no route to get the details of what failed... when my server comes up in
'error', I'd really like to know why... is it a system error? Broken image?
Temporary glitch? There doesn't seem to be a channel for this kind
Mark Nottingham on 01 February 2012 05:19 wrote:
On 31/01/2012, at 2:48 PM, andi abes wrote:
The current semantics allow you to do
1) get the most recent cached copy, using the HTTP caching mechanism.
This will ignore any updates to the swift cluster, as long as the cache
is not stale
Initial thoughts are up at http://wiki.openstack.org/VolumeAffinity
--
Duncan Thomas
HP Cloud Services, Galway
Thomas, Duncan wrote: