Hi Marlok,

I agree with what you said: it would be much better to remove the CLVM requirement, but I have a couple of concerns. Let's see if we can find a solution together and decide whether we can squeeze this one into OpenNebula 4.0. Let me explain the main problem I see.
The main concern here is that in the shared_lvm TM drivers you are operating directly on the frontend, and I think that will pose a problem if you want to have multiple LVM-based datastores. I don't think it's a correct approach to force OpenNebula admins to use the OpenNebula frontend as their LVM primary node. If you take a look at the DS/LVM drivers in the 'master' branch, you'll see that we extract the LVM primary node's hostname via the 'HOST' attribute of the datastore template [1]. The problem is that we don't have that information in the TM drivers.

A solution would be to execute "onedatastore <id>" and extract the datastore's LVM primary node for each operation: cp, ln, mvds, etc. Once we have the name of the LVM primary node, we could follow your approach: operate (clone, create, remove, etc.) on that node and refresh the other nodes via lvscan and lvchange... any thoughts?

[1] https://github.com/OpenNebula/one/blob/master/src/datastore_mad/remotes/lvm/clone#L56

cheers,
Jaime

On Tue, Mar 12, 2013 at 11:41 AM, Marlok Tamás <[email protected]> wrote:
> Hi Jaime,
>
> I examined the new lvm driver and compared it with shared_lvm.
> I think the only difference between them is that in shared_lvm all of
> the lvm commands are executed on the frontend (as we described here:
> http://wiki.opennebula.org/shared_lvm).
> I would like to suggest that you consider adopting this behaviour
> in the new lvm driver.
> (Or, in the case of the one driver, the HOST configured in the datastore
> config should execute these commands, not the host running the VM.)
>
> I can't see any drawback to this approach, and this way CLVM wouldn't
> be necessary (see the wiki article).
>
> We couldn't use CLVM for various reasons, but it would be much easier for
> us (and I think not just for us) if we could use the LVM driver already
> implemented in OpenNebula instead of shared_lvm.
>
> I see that in the datastore driver scripts this is already the current
> behaviour, so only the tm scripts would need to be modified.
>
> If you agree with this but don't have the time to implement it, just
> let me know and I can create a patch for you. Or if I am wrong, let me
> know that too :).
>
> Cheers,
> Tamas
>
>
> On Thu, Feb 21, 2013 at 6:52 PM, Jaime Melis <[email protected]> wrote:
>
>> Oops, you're right.
>>
>> OK, so, can you test the upcoming LVM drivers for OpenNebula 4.0? They
>> replace snapshotting with cloning...
>>
>> Just replace /var/lib/one/remotes/tm/lvm
>> with the contents of:
>> https://github.com/OpenNebula/one/tree/master/src/tm_mad/lvm
>>
>> and do "onehost sync" afterwards.
>>
>>
>> On Thu, Feb 21, 2013 at 6:41 PM, Tobias Honacker <[email protected]> wrote:
>>
>> > Hi Jaime,
>> >
>> > Yes, you are right. I posted the error log file with TM_MAD="lvm" too;
>> > I tried both TM_MAD=shared and lvm.
>> > I will try the links Mihály Héder already posted. But maybe you have
>> > other ideas?
>> >
>> >
>> > Best regards,
>> > Tobias
>> >
>> >
>> > Am 21.02.2013 um 18:37 schrieb Jaime Melis:
>> >
>> > Actually, it's "onedatastore update 100", not "onetemplate"... sorry
>> >
>> >
>> > On Thu, Feb 21, 2013 at 6:22 PM, Jaime Melis <[email protected]> wrote:
>> >
>> >> Hi Tobias,
>> >>
>> >> Your datastore template has the wrong TM_MAD. It should be TM_MAD="lvm"
>> >> instead of TM_MAD="shared".
>> >>
>> >> You can fix it by doing "onetemplate update 100" and editing it
>> >> in-place.

--
Jaime Melis
Project Engineer
OpenNebula - The Open Source Toolkit for Cloud Computing
www.OpenNebula.org | [email protected]
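[A minimal sketch of the approach discussed above: since the TM drivers don't receive the datastore template, a TM script would have to fetch the XML dump via "onedatastore show -x <id>" and pull out the HOST attribute itself. The helper name get_lvm_host is hypothetical, and the sketch assumes the attribute appears as a CDATA element in the XML output, as OpenNebula CLI tools typically emit it.]

```shell
#!/bin/bash
# Hypothetical sketch for a TM driver script (cp, ln, mvds, ...):
# resolve the LVM primary node of a datastore, then run the LVM
# operations there instead of on the hypervisor running the VM.

# Extract the HOST attribute from the XML dump of a datastore.
# Assumes a <HOST><![CDATA[...]]></HOST> element in the template.
get_lvm_host() {
    local ds_xml="$1"
    echo "$ds_xml" | sed -n 's:.*<HOST><!\[CDATA\[\(.*\)\]\]></HOST>.*:\1:p'
}

# In a real driver this would look roughly like:
#   DS_XML=$(onedatastore show -x "$DS_ID")
#   LVM_HOST=$(get_lvm_host "$DS_XML")
#   ssh "$LVM_HOST" "lvcreate ..."        # clone/create/remove on the primary
#   ssh "$HYPERVISOR" "lvscan; lvchange -ay $DEV"  # refresh LV on the nodes
```

This keeps the LVM primary node configurable per datastore (as the DS/LVM drivers already do) rather than hard-wiring the frontend.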
_______________________________________________
Users mailing list
[email protected]
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org
