Re: [ovirt-users] Hosted engine setup shooting dirty pool

2017-04-13 Thread Sahina Bose
On Wed, Apr 12, 2017 at 11:15 PM, Jamie Lawrence 
wrote:

>
> > On Apr 12, 2017, at 1:31 AM, Evgenia Tokar  wrote:
> >
> > Hi Jamie,
> >
> > Are you trying to set up the hosted engine using the "hosted-engine
> > --deploy" command, or are you trying to migrate an existing HE VM?
> >
> > For hosted engine setup you need to provide a clean storage domain that
> > is not part of your 4.1 setup. This storage domain will be used for the
> > hosted engine and will be visible in the UI once the deployment of the
> > hosted engine is complete.
> > If your storage domain appears in the UI, it means that it is already
> > connected to the storage pool and is not "clean".
>
> Hi Jenny,
>
> Thanks for the response.
>
> I’m using `hosted-engine --deploy`, yes. (Actually, the last few attempts
> have been with an answer file, but the responses are the same.)
>
> I think I may have been unclear. I understand that it wants an unmolested
> SD. There just doesn’t seem to be a way to provide that with an
> oVirt-managed Gluster cluster.
>
> I guess my question is how to provide that with an oVirt-managed gluster
> installation. To ask it another way: how do I make oVirt/VDSM ignore a
> newly created gluster SD so that `hosted-engine` can pick it up? I don’t
> see any option to tell the Gluster cluster not to auto-discover volumes or
> similar. So as soon as I create a volume, the non-hosted engine picks it
> up. This happens within seconds - I tried, in vain, to race it by starting
> the installer immediately after creating the volume.
>
> This is why I mentioned dismissing the idea of using another Gluster
> installation, unattached to oVirt. That’s the only way I could think of to
> give it a clean pool. (I dismissed it because I can’t run this in
> production with that sort of dependency.)
>
> Do I need to take this Gluster cluster out of oVirt control (delete the
> Gluster cluster from the oVirt GUI, recreate it outside of oVirt manually),
> install onto that, and then re-associate it in the GUI or something
> similar?
>

The gluster cluster being detected in oVirt does not make it a dirty
storage domain. It looks like the gluster volume was previously used as a
storage domain and was not cleaned up. You can try mounting the gluster
volume and checking whether it has any content.
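
(A minimal sketch of that check - the server name "gluster1", the volume
name "engine", and the mount point are placeholders, not values from your
setup:)

    # mount the gluster volume somewhere temporary
    mkdir -p /mnt/gluster-check
    mount -t glusterfs gluster1:/engine /mnt/gluster-check

    # a clean volume should show nothing here; leftover UUID-named
    # directories or a __DIRECT_IO_TEST__ file indicate an old storage domain
    ls -lA /mnt/gluster-check

    umount /mnt/gluster-check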

I'm a bit confused about the setup, though - do you already have an
installation of oVirt engine that you use to manage the gluster hosts? Are
you deploying another engine (HE) that's managing the same hosts, or using
a gluster volume from another installation?


> -j
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Hosted engine setup shooting dirty pool

2017-04-12 Thread Jamie Lawrence

> On Apr 12, 2017, at 1:31 AM, Evgenia Tokar  wrote:
> 
> Hi Jamie, 
> 
> Are you trying to set up the hosted engine using the "hosted-engine
> --deploy" command, or are you trying to migrate an existing HE VM?
>  
> For hosted engine setup you need to provide a clean storage domain that is
> not part of your 4.1 setup. This storage domain will be used for the hosted
> engine and will be visible in the UI once the deployment of the hosted
> engine is complete.
> If your storage domain appears in the UI, it means that it is already
> connected to the storage pool and is not "clean".

Hi Jenny,

Thanks for the response.

I’m using `hosted-engine --deploy`, yes. (Actually, the last few attempts have
been with an answer file, but the responses are the same.)

I think I may have been unclear. I understand that it wants an unmolested SD.
There just doesn’t seem to be a way to provide that with an oVirt-managed
Gluster cluster.

I guess my question is how to provide that with an oVirt-managed gluster
installation. To ask it another way: how do I make oVirt/VDSM ignore a newly
created gluster SD so that `hosted-engine` can pick it up? I don’t see any
option to tell the Gluster cluster not to auto-discover volumes or similar. So
as soon as I create a volume, the non-hosted engine picks it up. This happens
within seconds - I tried, in vain, to race it by starting the installer
immediately after creating the volume.

This is why I mentioned dismissing the idea of using another Gluster
installation, unattached to oVirt. That’s the only way I could think of to give
it a clean pool. (I dismissed it because I can’t run this in production with
that sort of dependency.)

Do I need to take this Gluster cluster out of oVirt control (delete the Gluster
cluster from the oVirt GUI, recreate it outside of oVirt manually), install
onto that, and then re-associate it in the GUI or something similar?
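
(For concreteness, a rough sketch of what "recreate outside of oVirt
manually" would look like with the gluster CLI - hostnames, brick paths, and
the volume name here are placeholders, not my actual layout:)

    # create and start a replica-3 volume reserved for the hosted engine
    gluster volume create engine replica 3 \
        gl01:/gluster/engine/brick \
        gl02:/gluster/engine/brick \
        gl03:/gluster/engine/brick
    gluster volume set engine group virt   # apply the virt option group
    gluster volume start engine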

-j
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Hosted engine setup shooting dirty pool

2017-04-12 Thread Evgenia Tokar
Hi Jamie,

Are you trying to set up the hosted engine using the "hosted-engine
--deploy" command, or are you trying to migrate an existing HE VM?

For hosted engine setup you need to provide a clean storage domain that is
not part of your 4.1 setup. This storage domain will be used for the hosted
engine and will be visible in the UI once the deployment of the hosted
engine is complete.
If your storage domain appears in the UI, it means that it is already
connected to the storage pool and is not "clean".
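
(If the volume was used as a storage domain before, it would have to be
emptied first. A minimal sketch, assuming a volume "engine" on server
"gluster1" - both placeholders - and note this is destructive, so run it
only on a volume you are sure is disposable:)

    # mount the volume and remove any leftover storage-domain content
    mkdir -p /mnt/engine
    mount -t glusterfs gluster1:/engine /mnt/engine
    rm -rf /mnt/engine/*
    umount /mnt/engine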

Thanks,
Jenny

On Wed, Apr 12, 2017 at 2:47 AM, Jamie Lawrence 
wrote:

> Or at least, refusing to mount a dirty pool.
>
> I have 4.1 set up, configured and functional, currently wired up with two
> VM hosts and three Gluster hosts. It is configured with a (temporary) NFS
> data storage domain, with the end goal being two data domains on Gluster:
> one for the hosted engine, one for other VMs.
>
> The issue is that `hosted-engine` sees any gluster volume offered to it as
> dirty. (I have been creating the volumes via the command line right before
> attempting the hosted-engine migration; there is nothing in them at that
> stage.) I *think* what is happening is that ovirt-engine notices a newly
> created volume and has its way with it (visibly so: the volume appears in
> the GUI list), and the hosted-engine installer becomes upset about that.
> What I don’t know is what to do about it. Relevant log lines are below.
> The installer almost sounds like it is asking me to remove the UUID
> directory and whatnot, but I’m pretty sure that’s just going to leave me
> with two problems instead of fixing the first one. I’ve considered
> attempting to wire this together in the DB, which also seems like a great
> way to break things. I’ve even thought of using a Gluster installation that
> oVirt knows nothing about, mainly as an experiment to see if it would even
> work, but decided it doesn’t matter, because I can’t deploy in that state
> anyway and it doesn’t actually get me any closer to getting this working.
>
> I noticed several seemingly related bugs in the tracker, but the bulk of
> those were for past versions, and I saw nothing in the others that seemed
> actionable from my end.
>
> So, can anyone spare a clue as to what is going wrong, and what to do
> about that?
>
> -j
>
> - - - - ovirt-hosted-engine-setup.log - - - -
>
> 2017-04-11 16:14:39 DEBUG otopi.plugins.gr_he_setup.storage.storage
> storage._storageServerConnection:408 connectStorageServer
> 2017-04-11 16:14:40 DEBUG otopi.plugins.gr_he_setup.storage.storage
> storage._storageServerConnection:475 {'status': {'message': 'Done',
> 'code': 0}, 'items': [{u'status': 0, u'id': u'890e82cf-5570-4507-a9bc-
> c610584dea6e'}]}
> 2017-04-11 16:14:40 DEBUG otopi.plugins.gr_he_setup.storage.storage
> storage._storageServerConnection:502 {'status': {'message': 'Done',
> 'code': 0}, 'items': [{u'status': 0, u'id': u'cd1a1bb6-e607-4e35-b815-
> 1fd88b84fe14'}]}
> 2017-04-11 16:14:40 DEBUG otopi.plugins.gr_he_setup.storage.storage
> storage._check_existing_pools:794 _check_existing_pools
> 2017-04-11 16:14:40 DEBUG otopi.plugins.gr_he_setup.storage.storage
> storage._check_existing_pools:795 getConnectedStoragePoolsList
> 2017-04-11 16:14:40 DEBUG otopi.plugins.gr_he_setup.storage.storage
> storage._check_existing_pools:797 {'status': {'message': 'Done', 'code':
> 0}}
> 2017-04-11 16:14:40 INFO otopi.plugins.gr_he_setup.storage.storage
> storage._misc:956 Creating Storage Domain
> 2017-04-11 16:14:40 DEBUG otopi.plugins.gr_he_setup.storage.storage
> storage._createStorageDomain:513 createStorageDomain
> 2017-04-11 16:14:40 DEBUG otopi.plugins.gr_he_setup.storage.storage
> storage._createStorageDomain:547 {'status': {'message': 'Done', 'code':
> 0}}
> 2017-04-11 16:14:40 DEBUG otopi.plugins.gr_he_setup.storage.storage
> storage._createStorageDomain:549 {'status': {'message': 'Done', 'code':
> 0}, u'mdasize': 0, u'mdathreshold': True, u'mdavalid': True, u'diskfree':
> u'321929216000', u'disktotal': u'321965260800', u'mdafree': 0}
> 2017-04-11 16:14:40 INFO otopi.plugins.gr_he_setup.storage.storage
> storage._misc:959 Creating Storage Pool
> 2017-04-11 16:14:40 DEBUG otopi.plugins.gr_he_setup.storage.storage
> storage._createFakeStorageDomain:553 createFakeStorageDomain
> 2017-04-11 16:14:41 DEBUG otopi.plugins.gr_he_setup.storage.storage
> storage._createFakeStorageDomain:570 {'status': {'message': 'Done',
> 'code': 0}}
> 2017-04-11 16:14:41 DEBUG otopi.plugins.gr_he_setup.storage.storage
> storage._createFakeStorageDomain:572 {'status': {'message': 'Done',
> 'code': 0}, u'mdasize': 0, u'mdathreshold': True, u'mdavalid': True,
> u'diskfree': u'1933930496', u'disktotal': u'2046640128', u'mdafree': 0}
> 2017-04-11 16:14:41 DEBUG otopi.plugins.gr_he_setup.storage.storage
> storage._createStoragePool:587 createStoragePool
> 2017-04-11 16:14:41 DEBUG otopi.plugins.gr_he_setup.storage.storage
> storage._createStoragePool:627 createStoragePool(args=[
>