Re: [Openstack] two or more NFS / gluster mounts

2012-12-20 Thread Caitlin Bestler
John Griffith wrote:

>  Yes, I really agree with Diego.
>  It would be a good idea to submit a blueprint for this tenant-based
> storage feature.

I think the key is that the File/Object service should be enabled similarly
to how volumes are enabled, with similar tenant scoping and granularity.

So an NFS export would be enabled for a VM much the way a volume is, the
only difference being that an NFS export *can* be shared. But when it is
not shared, it should be just as eligible for local storage as a cinder
volume is.

To the extent that this is not just a "migrate-to-local-storage" feature, it 
needs to be integrated with Quantum
as well. The network needs to be configured so that *only* this set of clients 
has access to the virtual network
where the specified exports are enabled.

This does a lot to solve the multi-tenant problem as well. Each export can be 
governed by a single tenant.
If the NAS traffic is all on different virtual networks, there are never any
conflicts over UIDs and GIDs.
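
To make this concrete, a minimal sketch of per-tenant scoping in
/etc/exports on the NFS server, assuming hypothetical tenant subnets
10.20.1.0/24 and 10.20.2.0/24 provisioned as Quantum virtual networks:

/volumes/tenantA  10.20.1.0/24(rw,sync,no_subtree_check)
/volumes/tenantB  10.20.2.0/24(rw,sync,no_subtree_check)

Each export is then only reachable from its own tenant's virtual network,
so UID/GID collisions between tenants never meet on the wire.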




Re: [Openstack] two or more NFS / gluster mounts

2012-12-20 Thread Diego Parrilla Santamaría
Hi John,

Yes, that's a really good solution. It is exactly what the StackOps
Enterprise Edition offers out of the box. It's a simpler alternative,
assuming you are big enough to have several clusters of compute nodes,
each cluster with a different quality of service preassigned. And it
works... if the scheduler function works.

My proposal about a hierarchy of folders for shared storage comes from the
requirements of some customers that want to be able to control I/O on a
per-tenant basis and want to use very cheap, scalable shared storage.

Let's say that StackOps EE currently follows a static approach, and we
would like to have a dynamic one ;-)


Cheers
Diego
--
Diego Parrilla
CEO
www.stackops.com | diego.parri...@stackops.com | +34 649 94 43 29 | skype:diegoparrilla





Re: [Openstack] two or more NFS / gluster mounts

2012-12-20 Thread John Griffith
On Thu, Dec 20, 2012 at 9:37 AM, JuanFra Rodriguez Cardoso <
juanfra.rodriguez.card...@gmail.com> wrote:

> Yes, I really agree with Diego.
> It would be a good idea to submit a blueprint for this tenant-based
> storage feature.
>
> According to the current quota controls, the limits are:
>
>    - Number of volumes which may be created
>    - Total size of all volumes within a project, as measured in GB
>    - Number of instances which may be launched
>    - Number of processor cores which may be allocated
>    - Publicly accessible IP addresses
>
>
> Another new feature related to shared storage that we had thought about is
> an option for choosing whether an instance has to be replicated or not,
> e.g. in a MooseFS scenario, indicating the goal (number of replicas). It's
> useful, for example, in testing or demo projects where HA is not required.
>
> Regards,
>
> JuanFra.

Re: [Openstack] two or more NFS / gluster mounts

2012-12-20 Thread JuanFra Rodriguez Cardoso
Yes, I really agree with Diego.
It would be a good idea to submit a blueprint for this tenant-based
storage feature.

According to the current quota controls, the limits are (a sketch of the
corresponding nova.conf options follows the list):

   - Number of volumes which may be created
   - Total size of all volumes within a project, as measured in GB
   - Number of instances which may be launched
   - Number of processor cores which may be allocated
   - Publicly accessible IP addresses
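
The nova.conf options behind those limits look something like the
following (a sketch; treat the exact names and defaults as illustrative,
as they have varied across releases):

quota_instances=10
quota_cores=20
quota_volumes=10
quota_gigabytes=1000
quota_floating_ips=10

A per-tenant shared-storage quota would presumably slot in alongside these.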


Another new feature related to shared storage that we had thought about is
an option for choosing whether an instance has to be replicated or not,
e.g. in a MooseFS scenario, indicating the goal (number of replicas). It's
useful, for example, in testing or demo projects where HA is not required.
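
MooseFS already exposes this per directory through its goal tools; a
minimal sketch, assuming instances live under a hypothetical
/mnt/mfs/instances tree:

# demo project: one copy is enough, no HA required
mfssetgoal -r 1 /mnt/mfs/instances/demo-tenant
# production project: keep three replicas
mfssetgoal -r 3 /mnt/mfs/instances/prod-tenant

The proposed option would amount to wiring an instance-level flag through
to calls like these.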

Regards,

JuanFra.

Re: [Openstack] two or more NFS / gluster mounts

2012-12-20 Thread Diego Parrilla Santamaría
Mmm... I'm not sure the concept of oVirt's multiple storage domains can be
implemented in Nova as it is, but I would like to share my thoughts,
because it's something that, from my point of view, matters.

If you want to change the folder where the nova instances are stored, you
have to modify the 'instances_path' option in nova-compute.conf:
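
A sketch of the relevant line (the stock default effectively resolves to
/var/lib/nova/instances via state_path):

instances_path=/var/lib/nova/instances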

If you look at that folder (/var/lib/nova/instances/ by default) you will
see a structure like this:

drwxrwxr-x   2 nova nova   73 Dec  4 12:16 _base
drwxrwxr-x   2 nova nova    5 Oct 16 13:34 instance-0002
...
drwxrwxr-x   2 nova nova    5 Nov 26 17:38 instance-005c
drwxrwxr-x   2 nova nova    6 Dec 11 15:38 instance-0065

If you have shared storage for that folder, then your fstab entry looks
like this one:
10.15.100.3:/volumes/vol1/zone1/instances /var/lib/nova/instances nfs
defaults 0 0

So I think it could be possible to implement something like 'storage
domains', but tenant/project oriented. Instead of having multiple generic
mountpoints, each tenant would have a private mountpoint for his/her
instances. /var/lib/nova/instances could then look like this sample:

/instances
+/tenantID1
++/instance-X
++/instance-Y
++/instance-Z
+/tenantID2
++/instance-A
++/instance-B
++/instance-C
...
+/tenantIDN
++/instance-A
++/instance-B
++/instance-C

And in /etc/fstab something like this sample too:
10.15.100.3:/volumes/vol1/zone1/instances/tenantID1 /var/lib/nova/instances/tenantID1 nfs defaults 0 0
10.15.100.3:/volumes/vol1/zone1/instances/tenantID2 /var/lib/nova/instances/tenantID2 nfs defaults 0 0
...
10.15.100.3:/volumes/vol1/zone1/instances/tenantIDN /var/lib/nova/instances/tenantIDN nfs defaults 0 0

With this approach, we could have something like per-tenant QoS on shared
storage, to resell different storage capabilities on a tenant basis.
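
For instance (a sketch, with hypothetical backend hosts and volume names),
tenants buying different service levels could simply be mounted from
different backend volumes:

10.15.100.3:/volumes/gold/instances/tenantID1   /var/lib/nova/instances/tenantID1 nfs defaults 0 0
10.15.100.9:/volumes/bronze/instances/tenantID2 /var/lib/nova/instances/tenantID2 nfs defaults 0 0

The QoS differentiation then lives entirely in the backend; nova only sees
per-tenant directories.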

I would love to hear feedback, drawbacks, improvements...

Cheers
Diego

--
Diego Parrilla
CEO
www.stackops.com | diego.parri...@stackops.com | +34 649 94 43 29 | skype:diegoparrilla





Re: [Openstack] two or more NFS / gluster mounts

2012-12-20 Thread Andrew Holway
Good plan.

https://blueprints.launchpad.net/openstack-ci/+spec/multiple-storage-domains




Re: [Openstack] two or more NFS / gluster mounts

2012-12-20 Thread David Busby
I may of course be entirely wrong :) It would be cool if this is
achievable / on the roadmap.

At the very least, if this is not already in discussion, I'd raise it on
Launchpad as a potential feature.






Re: [Openstack] two or more NFS / gluster mounts

2012-12-20 Thread Andrew Holway
Ah shame. You can specify different storage domains in oVirt.



Re: [Openstack] two or more NFS / gluster mounts

2012-12-20 Thread David Busby
Hi Andrew,

An interesting idea, but I am not aware of nova supporting storage affinity
in any way; it does support host affinity, IIRC. As a kludge you could have,
say, some nova compute nodes using your "slow mount" and reserve the "fast
mount" nodes as required, perhaps even defining separate zones for
deployment?
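
For example (a sketch, assuming the fast-mount nodes were grouped into a
hypothetical 'fast-storage' availability zone):

nova boot --availability-zone fast-storage --image <image-id> --flavor <flavor-id> instance-a

Instances meant for the slow mount would simply target the other zone.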

Cheers

David







Re: [Openstack] two or more NFS / gluster mounts

2012-12-20 Thread Andrew Holway
Hi David,

It is for nova. 

I'm not sure I understand. I want to be able to say to OpenStack:
"openstack, please install this instance (A) on this mountpoint and please
install this instance (B) on this other mountpoint." I am planning on
having two NFS / Gluster based stores, a fast one and a slow one.

I probably will not want to say please every time :)

Thanks,

Andrew



Re: [Openstack] two or more NFS / gluster mounts

2012-12-20 Thread David Busby
Hi Andrew,

Is this for glance or nova?

For nova change:

state_path = /var/lib/nova
lock_path = /var/lib/nova/tmp

in your nova.conf

For glance I'm unsure; it may be easier to just mount gluster right onto
/var/lib/glance (similarly, you could do the same for /var/lib/nova).
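
Something along these lines (a sketch, with a hypothetical server and
volume name):

mount -t glusterfs gluster1:/glance-vol /var/lib/glance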

And just my £0.02: I've had no end of problems getting gluster to "play
nice" on small POC clusters (3-5 nodes; I've tried NFS, tried glusterfs,
tried 2-replica N-distribute setups, with many a random glusterfs death),
so I have opted for using ceph.

Ceph's RADOS can also be used with cinder, from the brief reading I've been
doing into it.
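
From that same reading, the cinder side seems to amount to pointing the
volume driver at an RBD pool; a sketch (option names have varied across
releases, so treat these as illustrative):

volume_driver=cinder.volume.driver.RBDDriver
rbd_pool=volumes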


Cheers

David







[Openstack] two or more NFS / gluster mounts

2012-12-20 Thread Andrew Holway
Hi,

If I have /nfs1mount and /nfs2mount, or /nfs1mount and /glustermount, can I
control where OpenStack puts the disk files?

Thanks,

Andrew

___
Mailing list: https://launchpad.net/~openstack
Post to     : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp