Re: KVM, iSCSI and High Availability

2011-03-31 Thread Guido Winkelmann
On Monday, 28 March 2011, David Martin wrote:
 - Original Message -
 
  On 3/28/11 2:46 PM, Avi Kivity wrote:
   On 03/25/2011 10:26 PM, Marcin M. Jessa wrote:
  [...]
  
   One LUN per image allows you to implement failover; LVM doesn't (but
   cluster-LVM does). I recommend using one LUN per image; it's much
   simpler.
  
  Some people say "Use one LUN, it's easier" and use CLVM. Why is it
  easier to use CLVM and one LUN per virtual guest?
 
 I find it easier because I can do:
 lvcreate -n vm1 --size 40G iscsi_vg
 then virt-install or whatever.
 If I were using one LUN per VM, then I would have to provision the LUN, make
 ALL hosts aware of the LUN, and finally screw with the multipath configs,
 etc.

Don't you have basically the same problem when using LVM in one LUN? You still
have to make all the hosts aware of the new LV manually. I don't even know if
LVM even supports this; it wasn't exactly designed for a situation where
multiple hosts might simultaneously read from and write to a volume group, let
alone create and destroy logical volumes while the VG is in use by any number
of other hosts...

Guido 


Re: KVM, iSCSI and High Availability

2011-03-31 Thread David Martin
That's what CLVM is for: it propagates the volume changes to every member of
the 'cluster'.
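
For illustration, roughly what that looks like with clvmd running on every
node, using the iscsi_vg name from my earlier mail (the LV name is just an
example):

# on host A:
lvcreate -n vm2 --size 40G iscsi_vg
# on host B, the new LV shows up right away, no rescan or manual step needed:
lvs iscsi_vg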

David Martin

- Original Message -
 On Monday, 28 March 2011, David Martin wrote:
  - Original Message -
 
   On 3/28/11 2:46 PM, Avi Kivity wrote:
On 03/25/2011 10:26 PM, Marcin M. Jessa wrote:
   [...]
  
    One LUN per image allows you to implement failover; LVM doesn't
    (but cluster-LVM does). I recommend using one LUN per image;
    it's much simpler.
  
   Some people say "Use one LUN, it's easier" and use CLVM. Why is it
   easier to use CLVM and one LUN per virtual guest?
 
  I find it easier because I can do:
  lvcreate -n vm1 --size 40G iscsi_vg
  then virt-install or whatever.
  If I were using one LUN per VM, then I would have to provision the LUN,
  make ALL hosts aware of the LUN, and finally screw with the multipath
  configs, etc.
 
 Don't you have basically the same problem when using LVM in one LUN?
 You still have to make all the hosts aware of the new LV manually.
 I don't even know if LVM even supports this; it wasn't exactly designed
 for a situation where multiple hosts might simultaneously read from and
 write to a volume group, let alone create and destroy logical volumes
 while the VG is in use by any number of other hosts...
 
 Guido


Re: KVM, iSCSI and High Availability

2011-03-31 Thread Guido Winkelmann
On Thursday, 31 March 2011, you wrote:
 That's what CLVM is for: it propagates the volume changes to every member
 of the 'cluster'.

Oh, right. I didn't know about clvm until now.

It sounds very promising though, certainly better than working with the
proprietary API of whoever your SAN vendor is to create a new LUN for every VM.
Also, the machine we have got here, a Dell PowerVault, appears to be limited to
at most 255 LUNs. I don't know if that's a limitation of iSCSI or just a
problem of this particular array.

Guido


Re: KVM, iSCSI and High Availability

2011-03-28 Thread Avi Kivity

On 03/25/2011 10:26 PM, Marcin M. Jessa wrote:

Hi.

Over the last several days I've been reading, asking questions, 
searching the Internet to find a viable HA stack for Ubuntu with KVM 
virtualization and shared iSCSI storage. And I'm nearly as confused as 
when I started.


Basically I'm trying to build a KVM environment with an iSCSI SAN and
I'm not quite sure what approach to use for storing the virtual guests.
From what I understand, to get max speed I should install directly to
iSCSI-exported raw devices instead of file-backed disks.
I'm not sure creating many small LUNs, one for each of the guests, is a
good idea.
Would it be better to create just one big LUN and then use LVM to
divide it and assign one chunk to each of the guests?
In the same setup I would also like to implement some kind of
automatic failover, so if one of the KVM hosts is down I could
automatically move guests over to the other one. Or just perform live
migration and move one of the guests over to a different host with
spare capacity.

What would be the best approach to implement a solution like that?



One LUN per image allows you to implement failover; LVM doesn't (but
cluster-LVM does).  I recommend using one LUN per image; it's much simpler.
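
Just as a sketch (the device path and guest name here are hypothetical and
depend on how the LUN shows up on the host, e.g. via multipath), the guest
then consumes its LUN directly as a raw disk:

virt-install --name guest1 --ram 2048 \
  --disk path=/dev/mapper/guest1-lun,format=raw,bus=virtio \
  --cdrom /path/to/install.iso --network bridge=br0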


--
error compiling committee.c: too many arguments to function



Re: KVM, iSCSI and High Availability

2011-03-28 Thread David Martin
- Original Message -
 - Original Message -
  On 03/25/2011 10:26 PM, Marcin M. Jessa wrote:
   Hi.
  
   Over the last several days I've been reading, asking questions,
   searching the Internet to find a viable HA stack for Ubuntu with
   KVM virtualization and shared iSCSI storage. And I'm nearly as
   confused as when I started.
  
   Basically I'm trying to build a KVM environment with an iSCSI SAN
   and I'm not quite sure what approach to use for storing the
   virtual guests.
   From what I understand, to get max speed I should install directly
   to iSCSI-exported raw devices instead of file-backed disks.
   I'm not sure creating many small LUNs, one for each of the guests,
   is a good idea.
   Would it be better to create just one big LUN and then use LVM to
   divide it and assign one chunk to each of the guests?
   In the same setup I would also like to implement some kind of
   automatic failover, so if one of the KVM hosts is down I could
   automatically move guests over to the other one. Or just perform
   live migration and move one of the guests over to a different host
   with spare capacity.
   What would be the best approach to implement a solution like that?
  
 
  One LUN per image allows you to implement failover; LVM doesn't (but
  cluster-LVM does). I recommend using one LUN per image; it's much
  simpler.
 
  --
  error compiling committee.c: too many arguments to function
 
 
 CLVM was more complicated initially but is pretty simple once we got
 through that. Having to hack around in the SAN manager and then going to
 the hosts to mess with the multipath configs etc. gets old fast. However,
 if your setup is pretty static, then I guess it wouldn't matter.

Oops, forgot to cc the list.


Re: KVM, iSCSI and High Availability

2011-03-28 Thread Marcin M. Jessa

On 3/28/11 2:46 PM, Avi Kivity wrote:

On 03/25/2011 10:26 PM, Marcin M. Jessa wrote:


[...]


One LUN per image allows you to implement failover; LVM doesn't (but
cluster-LVM does). I recommend using one LUN per image; it's much simpler.


Some people say "Use one LUN, it's easier" and use CLVM. Why is it
easier to use CLVM and one LUN per virtual guest?




--

Marcin M. Jessa


Re: KVM, iSCSI and High Availability

2011-03-28 Thread Marcin M. Jessa

On 3/28/11 6:21 PM, David Martin wrote:

[...]


CLVM was more complicated initially but is pretty simple once we got through
that. Having to hack around in the SAN manager and then going to the
hosts to mess with the multipath configs etc. gets old fast. However, if
your setup is pretty static, then I guess it wouldn't matter.


So you would also use one LUN per guest?
My setup is pretty static, but it is possible I will add additional
guests and/or hosts to the setup.
What about high availability? Would it be reasonable to use OpenAIS +
Pacemaker to bring up guests on a different host if the main host
was down for maintenance or similar?
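
Roughly the kind of resource I have in mind, just as a sketch using the
ocf:heartbeat:VirtualDomain agent (the guest name and config path are made
up):

crm configure primitive vm_guest1 ocf:heartbeat:VirtualDomain \
    params config="/etc/libvirt/qemu/guest1.xml" \
           hypervisor="qemu:///system" migration_transport="ssh" \
    meta allow-migrate="true" \
    op monitor interval="30s" timeout="30s"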

How does OCFS2 compare to CLVM?



--

Marcin M. Jessa


Re: KVM, iSCSI and High Availability

2011-03-28 Thread David Martin
- Original Message -
 On 3/28/11 2:46 PM, Avi Kivity wrote:
  On 03/25/2011 10:26 PM, Marcin M. Jessa wrote:
 
 [...]
 
  One LUN per image allows you to implement failover; LVM doesn't (but
  cluster-LVM does). I recommend using one LUN per image; it's much
  simpler.
 
 Some people say "Use one LUN, it's easier" and use CLVM. Why is it
 easier to use CLVM and one LUN per virtual guest?
 
 
 
 --
 
 Marcin M. Jessa

I find it easier because I can do:
lvcreate -n vm1 --size 40G iscsi_vg
then virt-install or whatever.
If I were using one LUN per VM, then I would have to provision the LUN, make ALL
hosts aware of the LUN, and finally screw with the multipath configs, etc.
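
Whereas the per-LUN workflow looks roughly like this on every host (only a
sketch; the exact steps depend on the SAN and on the multipath setup):

# provision the new LUN in the SAN's management tool (vendor-specific), then:
iscsiadm -m session --rescan   # pick up the new LUN on the existing sessions
multipath -r                   # reload the multipath maps
multipath -ll                  # check that the new device actually showed up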


Re: KVM, iSCSI and High Availability

2011-03-28 Thread Javier Guerra Giraldez
On Mon, Mar 28, 2011 at 3:31 PM, Marcin M. Jessa li...@yazzy.org wrote:
 How is OCFS2 compared to CLVM?

Different layers, can't compare.

CLVM (aka cLVM) is the cluster version of LVM, the volume manager.
The addition of a userspace lock manager lets you do all volume
management (create/delete volumes, resize them, add/remove physical
devices, etc.) online on any machine, and all others will see the
change.  Since locks are only needed while modifying the volume
layout, there's no overhead during normal operation.
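
As a rough sketch (assuming a clvmd-based setup; service names and exact
steps vary by distro), enabling it on the shared VG mentioned earlier in the
thread looks something like:

# in /etc/lvm/lvm.conf on every node:
#     locking_type = 3
service clvmd start       # with the cluster stack (cman/corosync) already up
vgchange -c y iscsi_vg    # mark the shared VG as clustered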

OCFS2 is a filesystem, specifically a cluster filesystem.  That means
that the same storage can be mounted by several machines and all of
them will see the same data consistently.  Distributed locks are
needed for any modification, and cache strategies have to be complex
and tied to such locks.  Scalability is good, since there's no central
node, but it is ultimately limited by lock performance.
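
A minimal sketch with hypothetical device and mount paths (the o2cb cluster
stack has to be configured and running on every node first):

mkfs.ocfs2 -L vmstore -N 4 /dev/mapper/shared-lun   # 4 node slots
# then, on each node:
mount -t ocfs2 /dev/mapper/shared-lun /var/lib/libvirt/images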

Usually you store cluster filesystems on cluster volumes on cluster storage.

-- 
Javier


KVM, iSCSI and High Availability

2011-03-25 Thread Marcin M. Jessa

Hi.

Over the last several days I've been reading, asking questions, 
searching the Internet to find a viable HA stack for Ubuntu with KVM 
virtualization and shared iSCSI storage. And I'm nearly as confused as 
when I started.


Basically I'm trying to build a KVM environment with an iSCSI SAN and I'm
not quite sure what approach to use for storing the virtual guests.
From what I understand, to get max speed I should install directly to
iSCSI-exported raw devices instead of file-backed disks.
I'm not sure creating many small LUNs, one for each of the guests, is a
good idea.
Would it be better to create just one big LUN and then use LVM to divide
it and assign one chunk to each of the guests?
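
I.e., something along these lines, with made-up device and volume group
names (and presumably needing CLVM as soon as several hosts use the volume
group at once):

pvcreate /dev/mapper/big-lun               # the one big iSCSI LUN
vgcreate vm_vg /dev/mapper/big-lun
lvcreate -n guest1 --size 40G vm_vg        # one LV per guest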
In the same setup I would also like to implement some kind of automatic
failover, so if one of the KVM hosts is down I could automatically move
guests over to the other one. Or just perform live migration and move one
of the guests over to a different host with spare capacity.

What would be the best approach to implement a solution like that?

Thanks in advance.


--

Marcin M. Jessa