Re: [ovirt-users] Master Domain - oVirt 3.6 - hosted engine

2016-02-26 Thread Matthew Trent
Thanks! I'm suffering from this issue as well.

Did you have to manually copy some data to the other store or anything? Or is 
this the full procedure, start to finish?

--
Matthew Trent
Network Engineer
Lewis County IT Services
360.740.1247 - Helpdesk
360.740.3343 - Direct line


From: users-boun...@ovirt.org  on behalf of Dariusz 
Kryszak 
Sent: Friday, February 26, 2016 2:12 AM
To: Simone Tiraboschi
Cc: users
Subject: Re: [ovirt-users] Master Domain - oVirt 3.6 - hosted engine

OK, I've got it working.

procedure:
1. backup hosted engine config
engine-backup --mode=backup --file=engine_`date +%Y%m%d_%H%M%S`.bck --log=engine_`date +%Y%m%d_%H%M%S`.log


2. on the hypervisor
hosted-engine --set-maintenance --mode=global



3. on the management host
systemctl stop ovirt-engine.service

4. on the management host

su - postgres
psql
\c engine

--- you have to find the values manually, but the master domain always has storage_domain_type=0




UPDATE storage_domain_static SET storage_domain_type=(select
storage_domain_type from storage_domain_static where
storage_name='hosted_storage') WHERE storage_name='DS_MAIN';
UPDATE storage_domain_static SET _update_date=(select _update_date from
storage_domain_static where storage_name='hosted_storage') WHERE
storage_name='DS_MAIN';

UPDATE storage_domain_static SET storage_domain_type=1 WHERE
storage_name='hosted_storage';
UPDATE storage_domain_static SET _update_date=null WHERE
storage_name='hosted_storage';

5. on the management host
systemctl start ovirt-engine.service

6. on the hypervisor
hosted-engine --set-maintenance --mode=none

end.

enjoy :-)

On Fri, 2016-02-26 at 09:17 +0100, Simone Tiraboschi wrote:
>
>
> On Thu, Feb 25, 2016 at 10:03 PM, Dariusz Kryszak <
> dariusz.krys...@gmail.com> wrote:
> >
> >
> >
> >
> > On Tue, 2016-02-23 at 17:13 +0100, Simone Tiraboschi wrote:
> > >
> > >
> > > On Tue, Feb 23, 2016 at 4:19 PM, Dariusz Kryszak <
> > > dariusz.krys...@gmail.com> wrote:
> > > > Hi folks,
> > > > I have a question about the master domain when I'm using a
> > > > hosted engine deployment.
> > > > At the beginning I made a deployment on a NUC (small home
> > > > installation) with the hosted engine on an NFS share from the
> > > > NUC host.
> > > > I configured a Gluster FS on the same machine and used it for
> > > > the master domain and the ISO domain. Let's say all-in-one.
> > > > After a reboot something strange happened. The log says that
> > > > the master domain is not available and that the master role has
> > > > to move to the hosted_storage.
> > > > This is not OK in my opinion. I understand the behavior:
> > > > because the master domain is not available, the master role has
> > > > been migrated to another shareable domain (in this case the
> > > > hosted_storage domain, which is NFS).
> > > > Do you think this should be blocked in this particular case,
> > > > i.e. when only the hosted_storage is available? Right now it is
> > > > not possible to change this situation because the hosted engine
> > > > resides on the hosted_storage; I can't migrate it.
> > > >
> > > >
> > > It could happen only after the hosted-engine storage domain got
> > > imported by the engine, but to do that you need an additional
> > > storage domain, which will become the master storage domain.
> > > In the past we had a bug that let you remove the last regular
> > > storage domain, and in that case the hosted-engine domain would
> > > become the master storage domain; as you pointed out, that was an
> > > issue.
> > > https://bugzilla.redhat.com/show_bug.cgi?id=1298697
> > >
> > > Now it should be fixed. If it just happened again because your
> > > regular gluster storage domain wasn't available, it is not really
> > > fixed.
> > > Adding Roy here.
> > > Dariusz, which release are you using?
> >
> > Regarding the oVirt version:
> > 1. ovirt manager
> > ovirt-engine-setup - oVirt Engine Version: 3.6.2.6-1.el7.centos
> The patch that should address that issue is here:
> https://gerrit.ovirt.org/#/c/53208/
>
> But you'll find it only in 3.6.3; it wasn't available at the time
> 3.6.2.6 was released.
>
> Recovering from the condition you reached is possible but it requires
> a few manual actions.
> If your instance was almost empty redeploying is also an (probably
> easier) option.
>
>
> >  # uname -a
> > Linux ovirtm.stylenet 3.10.0-327.10.1.el7.x86_64 #1 SMP Tue Feb 16
> > 17:03:50 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
> > cat /etc/redhat-release
> > CentOS Linux release 7.2.1511 (Core)
> >
> >
> >
> > 2. hypervisor
> >
> > uname -a
> > Linux ovirth1.stylenet 3.10.0-327.10.1.el7.x86_64 #1 SMP Tue Feb 16
> > 17:03:50 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
> > cat /etc/redhat-release
> > CentOS Linux release 7.2.1511 (Core)
> >
> > rpm -qa|grep 'hosted\|vdsm'
> > vdsm-cli-4.17.18-1.el7.noarch
> > ovirt-hosted-engine-ha-1.3.3.7-1.el7.centos.noarch
> > vdsm-xmlrpc-4.17.18-1.el7.noarch
> > vdsm-jsonrpc-4.17.18-1.el7.noarch
> > vdsm-4.17.18-1.el7.noarch
> > vdsm-python-4.17.18-1.el7.noarch
> > 

Re: [ovirt-users] OVirt Node / RHEV-H and Multiple NICs/bond

2016-02-26 Thread Christopher Young
I should note that I was able to slowly work through this by adding
the node to the engine, running the hosted-engine setup from within
the node, removing the node via the Engine's WebUI when the conflict
message comes up, and letting the hosted-engine setup finish.  That's not
ideal, but it works.  In the future, I believe a simple prompt that
allows the user to simply acknowledge that the node has already been
added and re-initialize things would be sufficient.

This does make me fearful of one thing, so if you can clarify I would
really appreciate it:

Now that I have my nodes up and in the engine, is renaming the default
cluster/datacenter/etc. going to impact the hosted-engine in any way?
I worry because my first attempt at bringing this platform up failed
on the 2nd node for the hosted-engine setup due to the 'Default'
cluster not existing.  Do the hosted-engine's HA pieces utilize this
in any way (and is it thus safe for me to rename things now that I
have them up)?

Many thanks,

Chris

On Fri, Feb 26, 2016 at 2:41 PM, Christopher Young
 wrote:
> Thanks for the advice.  This DID work; however, I've run into more
> (what I consider to be) ridiculousness:
>
> Running hosted-engine setup seems to assume that the host is not
> already added to the Default cluster (had an issue previously where I
> had renamed the cluster and that messed up the hosted-engine setup
> since it appears to look for 'Default' as a cluster name - that seems
> like something it should query for vs. assuming it never changed in my
> view).
>
> So, since I've added the host already (in order to get storage NICs
> configured and be able to get to my storage in order to get to the
> hosted engine's VM disk), it won't allow me to add hosted engine to
> this node.  This is not a good experience for a customer in my view.
> If the host has already been added, then a simple prompt that asks if
> that is the case should suffice.
>
> Please don't take my comments as anything more than a customer
> experience and not just hateful complaints.  I genuinely believe in
> this product, but things like this are VERY common in the enterprise
> space and could very well scare off people who have used products where
> this process is significantly cleaner.
>
> We need to be able to create storage NICs prior to hosted-engine
> setup.  What's more, we need to KNOW not to add a node to
> the engine if you intend to run a hosted engine on it (and thus allow
> the hosted engine setup to add the node to the DC/cluster/etc.).  That
> should be very, very clear.
>
> Thanks,
>
> Chris
>
> On Wed, Feb 24, 2016 at 8:16 AM, Fabian Deutsch  wrote:
>> Hey Christopher,
>>
>> On Tue, Feb 23, 2016 at 8:29 PM, Christopher Young
>>  wrote:
>>> So, I have what I think should be a standard setup where I have
>>> dedicated NICs (to be bonded via LACP for storage) and well are NICs
>>> for various VLANS, etc.
>>>
>>> As typical, I have the main system interfaces for the usual system IPs
>>> (em1 in this case).
>>>
>>> A couple of observations (and a "chicken and the egg problem"):
>>>
>>> #1.  The RHEV-H/Ovirt-Node interface doesn't allow you to configure
>>> more than one interface.  Why is this?
>>
>> This is by design. The idea is that you use the TUI to configure the
>> initial NIC; all subsequent configuration will be done through Engine.
>>
>>> #2.  This prevents me from bringing up an interface for access to my
>>> Netapp SAN (which I keep on separate networking/VLANs for
>>> best-practices purposes)
>>>
>>> If I'm unable to bring up a regular system interface AND an interface
>>> for my storage, then how am I going to be able to install a RHEV-M
>>> (engine) hosted-engine VM since I would be either unable to have an
>>> interface for this VM's IP AND be able to connect to my storage
>>> network.
>>>
>>> In short, I'm confused.  I see this as a very standard enterprise
>>> setup so I feel like I must be missing something obvious.  If someone
>>> could educate me, I'd really appreciate it.
>>>
>>
>> This is a valid point - you cannot configure Node from the TUI to
>> connect to more than two networks.
>>
>> What you can do, however, is to temporarily set up a route between the
>> two networks to bootstrap Node. After setup you can use Engine to
>> configure another NIC on Node to access the storage network.
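>> As a rough, untested example (the subnet and gateway below are just
>> placeholders for your storage VLAN and a router that can reach it):
>>
>>   ip route add 192.168.50.0/24 via 10.0.0.1 dev em1
>>
>> and drop it again with "ip route del 192.168.50.0/24" once Engine
>> manages the storage NIC.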
>>
>> The other option I see is to drop to shell and manually configure the
>> second NIC by creating an ifcfg file.
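>> For example, a minimal /etc/sysconfig/network-scripts/ifcfg-eth1 (device
>> name and addressing are placeholders for your storage NIC):
>>
>>   DEVICE=eth1
>>   ONBOOT=yes
>>   BOOTPROTO=none
>>   IPADDR=192.168.50.10
>>   PREFIX=24
>>
>> followed by "ifup eth1". If I remember correctly you also need to run
>> "persist /etc/sysconfig/network-scripts/ifcfg-eth1" on Node so the file
>> survives a reboot.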
>>
>> Note: In the future we plan to let you use Cockpit to configure
>> networking - this will also allow you to configure multiple NICs.
>>
>> Greetings
>> - fabian
>>
>>> Thanks,
>>> CHris
>>> ___
>>> Users mailing list
>>> Users@ovirt.org
>>> http://lists.ovirt.org/mailman/listinfo/users
>>
>>
>>
>> --
>> Fabian Deutsch 
>> RHEV Hypervisor
>> Red Hat
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users

Re: [ovirt-users] Shrinking a storage domain

2016-02-26 Thread Nir Soffer
On Fri, Feb 26, 2016 at 5:31 PM, Troels Arvin  wrote:

> Hello,
>
> I'm moving some storage from one SAN storage unit to another SAN storage
> unit (i.e.: block device storage). So I have two RHEV storage domains:
>
> SD1
> SD2
>
> We need to move most of SD1's data to SD2.
>
> Live storage migration for a guest works fine: It's easy to migrate a
> disk from being in SD1 to SD2.
>
> But now that I have released around half of SD1, I would like to let go
> of a LUN which is part of SD1. How is that done?


Sorry, this is not supported.

Your only option is to move all disks from SD1, remove SD1, and recreate it.


> (If this were a stand-
> alone Linux box, I would use vgreduce to release a PV and then remove the
> PV from the server.)
>

This should work on our vgs, but:
- It may move lvs from this pv to another pv, breaking our metadata
  (stored in the metadata lv)
- Engine will have stale pvs for this vg, which may break your setup
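Just for reference, the generic flow you mention for a stand-alone box would
look roughly like this (device and VG names are placeholders) - again, do not
run this on an oVirt storage domain VG, for the reasons above:

  pvmove /dev/mapper/<lun_to_remove>                 # move all extents off this PV
  vgreduce <vg_name> /dev/mapper/<lun_to_remove>     # drop the now-empty PV from the VG
  pvremove /dev/mapper/<lun_to_remove>               # wipe the PV label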

If this feature is important for you, please file a bug and explain the
use case,
so we can consider this for future versions.

Nir


>
> --
> Regards,
> Troels Arvin
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] OVirt Node / RHEV-H and Multiple NICs/bond

2016-02-26 Thread Christopher Young
Thanks for the advice.  This DID work; however, I've run into more
(what I consider to be) ridiculousness:

Running hosted-engine setup seems to assume that the host is not
already added to the Default cluster (had an issue previously where I
had renamed the cluster and that messed up the hosted-engine setup
since it appears to look for 'Default' as a cluster name - that seems
like something it should query for vs. assuming it never changed in my
view).

So, since I've added the host already (in order to get storage NICs
configured and be able to get to my storage in order to get to the
hosted engine's VM disk), it won't allow me to add hosted engine to
this node.  This is not a good experience for a customer in my view.
If the host has already been added, then a simple prompt that asks if
that is the case should suffice.

Please don't take my comments as anything more than a customer
experience and not just hateful complaints.  I genuinely believe in
this product, but things like this are VERY common in the enterprise
space and could very well scare off people who have used products where
this process is significantly cleaner.

We need to be able to create storage NICs prior to hosted-engine
setup.  What's more, we need to KNOW not to add a node to
the engine if you intend to run a hosted engine on it (and thus allow
the hosted engine setup to add the node to the DC/cluster/etc.).  That
should be very, very clear.

Thanks,

Chris

On Wed, Feb 24, 2016 at 8:16 AM, Fabian Deutsch  wrote:
> Hey Christopher,
>
> On Tue, Feb 23, 2016 at 8:29 PM, Christopher Young
>  wrote:
>> So, I have what I think should be a standard setup where I have
>> dedicated NICs (to be bonded via LACP for storage) and well are NICs
>> for various VLANS, etc.
>>
>> As typical, I have the main system interfaces for the usual system IPs
>> (em1 in this case).
>>
>> A couple of observations (and a "chicken and the egg problem"):
>>
>> #1.  The RHEV-H/Ovirt-Node interface doesn't allow you to configure
>> more than one interface.  Why is this?
>
> This is by design. The idea is that you use the TUI to configure the
> initial NIC; all subsequent configuration will be done through Engine.
>
>> #2.  This prevents me from bringing up an interface for access to my
>> Netapp SAN (which I keep on separate networking/VLANs for
>> best-practices purposes)
>>
>> If I'm unable to bring up a regular system interface AND an interface
>> for my storage, then how am I going to be able to install a RHEV-M
>> (engine) hosted-engine VM since I would be either unable to have an
>> interface for this VM's IP AND be able to connect to my storage
>> network.
>>
>> In short, I'm confused.  I see this as a very standard enterprise
>> setup so I feel like I must be missing something obvious.  If someone
>> could educate me, I'd really appreciate it.
>>
>
> This is a valid point - you cannot configure Node from the TUI to
> connect to more than two networks.
>
> What you can do, however, is to temporarily set up a route between the
> two networks to bootstrap Node. After setup you can use Engine to
> configure another NIC on Node to access the storage network.
>
> The other option I see is to drop to shell and manually configure the
> second NIC by creating an ifcfg file.
>
> Note: In the future we plan to let you use Cockpit to configure
> networking - this will also allow you to configure multiple NICs.
>
> Greetings
> - fabian
>
>> Thanks,
>> CHris
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>
>
>
> --
> Fabian Deutsch 
> RHEV Hypervisor
> Red Hat
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Shrinking a storage domain

2016-02-26 Thread Troels Arvin
Hello,

I'm moving some storage from one SAN storage unit to another SAN storage 
unit (i.e.: block device storage). So I have two RHEV storage domains:

SD1
SD2

We need to move most of SD1's data to SD2.

Live storage migration for a guest works fine: It's easy to migrate a 
disk from being in SD1 to SD2.

But now that I have released around half of SD1, I would like to let go 
of a LUN which is part of SD1. How is that done? (If this were a stand-
alone Linux box, I would use vgreduce to release a PV and then remove the 
PV from the server.)

-- 
Regards,
Troels Arvin


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] DevConf.CZ 2016 - my experience

2016-02-26 Thread Johan Vermeulen
Hello,


"Would be cool to support ARM based hosts running virtual machines!"

I would also be very interested in that.
I recently visited the CentOS Dojo in Brussels and also came away with the
idea that the ARM road is very long.

This seems like an opportunity to ask:
is this not possible on any board right now?

I was thinking:
http://www.lemaker.org/product-hikey-specification.html
Hummingboard Edge
https://www.solid-run.com/freescale-imx6-family/hummingboard/hummingboard-specifications/

Greetings, J.

2016-02-18 11:45 GMT+01:00 Yaniv Kaul :

> On the first weekend of February I had the pleasure of attending
> DevConf.CZ 2016, which took place in the wonderful city of Brno, CZ.
> Compared to FOSDEM, it's a more relaxed, young and vibrant conference, but
> it was just as fun and rewarding from my perspective.
> Here's a disorganized personal summary:
>
> - Met community members as well as customers and Red Hat TAMs (Technical
> Account Managers) from all over Europe.
> - Attended a presentation on research to understand the theoretical best
> possible improvements to migration convergence performance[3]. I hope to
> see some of it in coming versions of QEMU, while we work on other
> improvements to this critical feature.
> - Attended a presentation on the status of ARM on Fedora and CentOS[14].
> Seems like a great positive effort, but the road is still long for complete
> support. Would be cool to support ARM based hosts running virtual machines!
> - Attended a presentation on native vs. threads implementation performance
> in QEMU[4]. The results were not conclusive, though some improvements brought
> native close to, or in some cases better than, threads. It seems that
> our default heuristic of threads for file-based storage and native for
> block-based storage is fine for the time being, but we'll watch closely for
> developments in this area.
> - Attended a presentation on debugging qemu (mainly) in OpenStack[5] - I
> think (and asked) that libvirt could do a better job here - saving the QMP
> commands sent and the responses from the guest in a cyclic log, and dumping
> it in case of a guest crash.
> - Attended a Cockpit hackfest[9] - as we intend to use Cockpit as our UI
> for RHEVH Next Generation Node, it was great to present our use case and
> exchange thoughts, ideas and directions. Few bugs were filed during the
> hackfest per our comments.
> - Attended 'Dockerizing JBoss Applications' presentation[10]. Great work
> done by the JBoss team to improve and streamline packaging of JBoss into
> containers.
> - Attended the 'Fedora Upstream Testing' presentation[13]. I discussed with
> the presenter the possibility of running oVirt with Lago[17] in Fedora, in a CI
> way, by monitoring distgit changes for relevant packages (such as lvm2,
> device-mapper*, libvirt and others).
> - Attended 'High Performance VMs in OpenStack'[12] - it looks like oVirt is
> already doing a lot of what OpenStack is working on to achieve
> high performance from KVM!
> - Preached on running Lago with OpenStack, Gluster and Cockpit. At least
> the first two items look really promising and I'm looking forward to a
> collaboration in those efforts. I might just try to do the Gluster one
> myself.
> - Having used the oVirt Live[18] USB DoK image at both FOSDEM and DevConf,
> I've found several items where we can improve, and filed relevant RFEs
> for them[15][16].
>
> - The oVirt team has delivered numerous presentations in the
> virtualization track[1][2][6] - and the Gluster team has additionally
> provided more presentations[7][8], which were all well received.
>
> I'd like to thank Red Hat's OSAS team members Mikey and Brian for once
> again, just one week after FOSDEM, leading the effort of representing the
> oVirt project and community in this event.
> Y.
>
> [1] https://www.youtube.com/watch?v=cQqJEiK7-Ug - Smart VM Scheduling -
> Martin Sivák
> [2] https://www.youtube.com/watch?v=V1JQtmdleaM - Host fencing in oVirt -
> Fixing the unknown and allowing VMs to be highly available - Martin Peřina
> [3] https://www.youtube.com/watch?v=XkMIMJKJeTY - in-depth look of
> virtual machine migration algorithms - Marcelo Tosatti
> [4] https://www.youtube.com/watch?v=Jx93riUF5_I - Qemu Disk I/O: Which
> performs better, Native or Threads? - Pradeep K Surisetty
> [5] https://www.youtube.com/watch?v=Dd2AGGMWXQM - Debugging the
> Virtualization Layer (libvirt and QEMU) in OpenStack - Kashyap Chamarthy
> [6] https://www.youtube.com/watch?v=4CbHTAkVDZo - Ceph integration with
> oVirt using Cinder - Nir Soffer
> [7] https://www.youtube.com/watch?v=XudYwEWQF7U - oVirt and Gluster
> Hyperconvergence - Ramesh Nachimuthu
> [8] https://www.youtube.com/watch?v=TczVVCbm8NE - Improvements in gluster
> for virtualization usecase - Prasanna Kumar Kalever
> [9] https://www.youtube.com/watch?v=TNDe90WSZow - Cockpit Hackfest -
> Dominik Perpeet, Marius Vollmer, Peter Volpe, Stef Walter
> [10] https://www.youtube.com/watch?v=NpyEoFlDzOQ - 

Re: [ovirt-users] Master Domain - oVirt 3.6 - hosted engine

2016-02-26 Thread Dariusz Kryszak
OK, I've got it working.

procedure:
1. backup hosted engine config 
engine-backup --mode=backup --file=engine_`date +%Y%m%d_%H%M%S`.bck --log=engine_`date +%Y%m%d_%H%M%S`.log


2. on the hypervisor 
hosted-engine --set-maintenance --mode=global



3. on the management host 
systemctl stop ovirt-engine.service

4. on the management host 

su - postgres
psql
\c engine

--- you have to find the values manually, but the master domain always has storage_domain_type=0




UPDATE storage_domain_static SET storage_domain_type=(select
storage_domain_type from storage_domain_static where
storage_name='hosted_storage') WHERE storage_name='DS_MAIN';
UPDATE storage_domain_static SET _update_date=(select _update_date from
storage_domain_static where storage_name='hosted_storage') WHERE
storage_name='DS_MAIN';

UPDATE storage_domain_static SET storage_domain_type=1 WHERE
storage_name='hosted_storage';
UPDATE storage_domain_static SET _update_date=null WHERE
storage_name='hosted_storage';
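Optional, but before restarting the engine I'd double-check the swap with a
quick query (just my suggestion; the domain names are the ones from my setup,
adjust them to yours). After the UPDATEs the regular data domain should show
storage_domain_type=0 and hosted_storage should show storage_domain_type=1:

SELECT storage_name, storage_domain_type, _update_date
  FROM storage_domain_static
 WHERE storage_name IN ('hosted_storage', 'DS_MAIN');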

5. on the management host 
systemctl start ovirt-engine.service

6. on the hypervisor 
hosted-engine --set-maintenance --mode=none
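Optionally, to confirm everything came back, the HA state can be watched with

hosted-engine --vm-status

until the engine VM is reported as up again.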

end.

enjoy :-)

On Fri, 2016-02-26 at 09:17 +0100, Simone Tiraboschi wrote:
> 
> 
> On Thu, Feb 25, 2016 at 10:03 PM, Dariusz Kryszak <
> dariusz.krys...@gmail.com> wrote:
> > 
> > 
> > 
> > 
> > On Tue, 2016-02-23 at 17:13 +0100, Simone Tiraboschi wrote:
> > >
> > >
> > > On Tue, Feb 23, 2016 at 4:19 PM, Dariusz Kryszak <
> > > dariusz.krys...@gmail.com> wrote:
> > > > Hi folks,
> > > > I have a question about the master domain when I'm using a
> > > > hosted engine deployment.
> > > > At the beginning I made a deployment on a NUC (small home
> > > > installation) with the hosted engine on an NFS share from the
> > > > NUC host.
> > > > I configured a Gluster FS on the same machine and used it for
> > > > the master domain and the ISO domain. Let's say all-in-one.
> > > > After a reboot something strange happened. The log says that
> > > > the master domain is not available and that the master role has
> > > > to move to the hosted_storage.
> > > > This is not OK in my opinion. I understand the behavior:
> > > > because the master domain is not available, the master role has
> > > > been migrated to another shareable domain (in this case the
> > > > hosted_storage domain, which is NFS).
> > > > Do you think this should be blocked in this particular case,
> > > > i.e. when only the hosted_storage is available? Right now it is
> > > > not possible to change this situation because the hosted engine
> > > > resides on the hosted_storage; I can't migrate it.
> > > >
> > > >
> > > It could happen only after the hosted-engine storage domain got
> > > imported by the engine, but to do that you need an additional
> > > storage domain, which will become the master storage domain.
> > > In the past we had a bug that let you remove the last regular
> > > storage domain, and in that case the hosted-engine domain would
> > > become the master storage domain; as you pointed out, that was an
> > > issue.
> > > https://bugzilla.redhat.com/show_bug.cgi?id=1298697
> > >
> > > Now it should be fixed. If it just happened again because your
> > > regular gluster storage domain wasn't available, it is not really
> > > fixed.
> > > Adding Roy here.
> > > Dariusz, which release are you using?
> > 
> > Regarding the oVirt version:
> > 1. ovirt manager
> > ovirt-engine-setup - oVirt Engine Version: 3.6.2.6-1.el7.centos
> The patch that should address that issue is here:
> https://gerrit.ovirt.org/#/c/53208/
> 
> But you'll find it only in 3.6.3; it wasn't available at the time
> 3.6.2.6 was released.
> 
> Recovering from the condition you reached is possible but it requires
> a few manual actions.
> If your instance was almost empty redeploying is also an (probably
> easier) option.
> 
>  
> >  # uname -a
> > Linux ovirtm.stylenet 3.10.0-327.10.1.el7.x86_64 #1 SMP Tue Feb 16
> > 17:03:50 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
> > cat /etc/redhat-release
> > CentOS Linux release 7.2.1511 (Core)
> > 
> > 
> > 
> > 2. hypervisor
> > 
> > uname -a
> > Linux ovirth1.stylenet 3.10.0-327.10.1.el7.x86_64 #1 SMP Tue Feb 16
> > 17:03:50 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
> > cat /etc/redhat-release
> > CentOS Linux release 7.2.1511 (Core)
> > 
> > rpm -qa|grep 'hosted\|vdsm'
> > vdsm-cli-4.17.18-1.el7.noarch
> > ovirt-hosted-engine-ha-1.3.3.7-1.el7.centos.noarch
> > vdsm-xmlrpc-4.17.18-1.el7.noarch
> > vdsm-jsonrpc-4.17.18-1.el7.noarch
> > vdsm-4.17.18-1.el7.noarch
> > vdsm-python-4.17.18-1.el7.noarch
> > vdsm-yajsonrpc-4.17.18-1.el7.noarch
> > vdsm-hook-vmfex-dev-4.17.18-1.el7.noarch
> > ovirt-hosted-engine-setup-1.3.2.3-1.el7.centos.noarch
> > vdsm-infra-4.17.18-1.el7.noarch
> > 
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Master Domain - oVirt 3.6 - hosted engine

2016-02-26 Thread Dariusz Kryszak
Thanks Simone,
I'll wait for the official 3.6.3 release.
It's great that a fix patch has been released.
I would appreciate the workaround procedure, and I think other users would too.
Do you have that procedure?

I tried to do this by myself, but in the end I redeployed the whole environment.
I changed the records in the postgres database (table storage_domain_static),
but that was not enough. I also tried to change the metadata files on the storage,
and after that the storage domains went red :-(.
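For anyone curious, by "metadata files" I mean the dom_md/metadata file under
the storage domain directory, on an NFS domain roughly at

  /rhev/data-center/mnt/<server>:_<export_path>/<sd_uuid>/dom_md/metadata

with keys such as ROLE and MASTER_VERSION (the exact path depends on the
storage type). Looking at it is harmless; editing it by hand is what turned
my domains red, so I wouldn't recommend it.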



On Fri, 2016-02-26 at 09:17 +0100, Simone Tiraboschi wrote:
> 
> 
> On Thu, Feb 25, 2016 at 10:03 PM, Dariusz Kryszak <
> dariusz.krys...@gmail.com> wrote:
> > 
> > 
> > 
> > 
> > On Tue, 2016-02-23 at 17:13 +0100, Simone Tiraboschi wrote:
> > >
> > >
> > > On Tue, Feb 23, 2016 at 4:19 PM, Dariusz Kryszak <
> > > dariusz.krys...@gmail.com> wrote:
> > > > Hi folks,
> > > > I have a question about the master domain when I'm using a
> > > > hosted engine deployment.
> > > > At the beginning I made a deployment on a NUC (small home
> > > > installation) with the hosted engine on an NFS share from the
> > > > NUC host.
> > > > I configured a Gluster FS on the same machine and used it for
> > > > the master domain and the ISO domain. Let's say all-in-one.
> > > > After a reboot something strange happened. The log says that
> > > > the master domain is not available and that the master role has
> > > > to move to the hosted_storage.
> > > > This is not OK in my opinion. I understand the behavior:
> > > > because the master domain is not available, the master role has
> > > > been migrated to another shareable domain (in this case the
> > > > hosted_storage domain, which is NFS).
> > > > Do you think this should be blocked in this particular case,
> > > > i.e. when only the hosted_storage is available? Right now it is
> > > > not possible to change this situation because the hosted engine
> > > > resides on the hosted_storage; I can't migrate it.
> > > >
> > > >
> > > It could happen only after the hosted-engine storage domain got
> > > imported by the engine, but to do that you need an additional
> > > storage domain, which will become the master storage domain.
> > > In the past we had a bug that let you remove the last regular
> > > storage domain, and in that case the hosted-engine domain would
> > > become the master storage domain; as you pointed out, that was an
> > > issue.
> > > https://bugzilla.redhat.com/show_bug.cgi?id=1298697
> > >
> > > Now it should be fixed. If it just happened again because your
> > > regular gluster storage domain wasn't available, it is not really
> > > fixed.
> > > Adding Roy here.
> > > Dariusz, which release are you using?
> > 
> > Regarding the oVirt version:
> > 1. ovirt manager
> > ovirt-engine-setup - oVirt Engine Version: 3.6.2.6-1.el7.centos
> The patch that should address that issue is here:
> https://gerrit.ovirt.org/#/c/53208/
> 
> But you'll find it only in 3.6.3; it wasn't available at the time
> 3.6.2.6 was released.
> 
> Recovering from the condition you reached is possible but it requires
> a few manual actions.
> If your instance was almost empty redeploying is also an (probably
> easier) option.
> 
>  
> >  # uname -a
> > Linux ovirtm.stylenet 3.10.0-327.10.1.el7.x86_64 #1 SMP Tue Feb 16
> > 17:03:50 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
> > cat /etc/redhat-release
> > CentOS Linux release 7.2.1511 (Core)
> > 
> > 
> > 
> > 2. hypervisor
> > 
> > uname -a
> > Linux ovirth1.stylenet 3.10.0-327.10.1.el7.x86_64 #1 SMP Tue Feb 16
> > 17:03:50 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
> > cat /etc/redhat-release
> > CentOS Linux release 7.2.1511 (Core)
> > 
> > rpm -qa|grep 'hosted\|vdsm'
> > vdsm-cli-4.17.18-1.el7.noarch
> > ovirt-hosted-engine-ha-1.3.3.7-1.el7.centos.noarch
> > vdsm-xmlrpc-4.17.18-1.el7.noarch
> > vdsm-jsonrpc-4.17.18-1.el7.noarch
> > vdsm-4.17.18-1.el7.noarch
> > vdsm-python-4.17.18-1.el7.noarch
> > vdsm-yajsonrpc-4.17.18-1.el7.noarch
> > vdsm-hook-vmfex-dev-4.17.18-1.el7.noarch
> > ovirt-hosted-engine-setup-1.3.2.3-1.el7.centos.noarch
> > vdsm-infra-4.17.18-1.el7.noarch
> > 
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Move from Gluster to NFS

2016-02-26 Thread Sandro Bonazzola
On Mon, Feb 15, 2016 at 4:46 PM, Yaniv Dary  wrote:

> Adding Sandro and Didi, who might be able to give a more detailed flow for this.
>

please add Simone, Martin and Roy too when it's HE related.



>
> Yaniv Dary
> Technical Product Manager
> Red Hat Israel Ltd.
> 34 Jerusalem Road
> Building A, 4th floor
> Ra'anana, Israel 4350109
>
> Tel : +972 (9) 7692306
> 8272306
> Email: yd...@redhat.com
> IRC : ydary
>
>
> On Sun, Feb 14, 2016 at 7:12 PM, Christophe TREFOIS <
> christophe.tref...@uni.lu> wrote:
>
>> Is there a reason why this is not possible?
>>
>>
>>
>> Can I set up a second host in the engine cluster and move the engine with
>> “storage” to that host?
>>
>>
>>
>> So, what you would recommend is:
>>
>>
>>
>> 1.   Move all VMs from engine host to another Host
>>
>> 2.   Setup NFS on the empty HE host
>>
>> 3.   Shutdown the HE, and disable HA-proxy and agent
>>
>> 4.   Re-deploy the engine and restore the HE from backup
>>
>> 5.   Enjoy ?
>>
>>
>>
>> Thank you for any help on this,
>>
>> I really don’t want to end up with a broken environment :)
>>
>
We're discussing this kind of migration with the storage guys. We don't have a
supported procedure for this yet.
The above procedure should work, except that if you're going to re-use the first
host you'll need to remove it from the engine before trying to attach it
again from he-setup.
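Roughly, the backup/restore part of that procedure would look like this (just
a sketch - please check the engine-backup documentation for your version,
especially for the extra options needed when restoring into a freshly
provisioned database):

  # on the current engine VM, before shutting it down
  engine-backup --mode=backup --file=engine.bck --log=backup.log

  # on the newly deployed engine VM
  engine-backup --mode=restore --file=engine.bck --log=restore.log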



>
>>
>> Kind regards,
>>
>>
>>
>> --
>>
>> Christophe
>>
>>
>>
>> *From:* Yaniv Dary [mailto:yd...@redhat.com]
>> *Sent:* dimanche 14 février 2016 16:32
>> *To:* Christophe TREFOIS 
>> *Cc:* users 
>> *Subject:* Re: [ovirt-users] Move from Gluster to NFS
>>
>>
>>
>> You will not be able to move it between storage domains; that is why I
>> suggested the backup and restore path.
>>
>>
>> Yaniv Dary
>>
>> Technical Product Manager
>>
>> Red Hat Israel Ltd.
>>
>> 34 Jerusalem Road
>>
>> Building A, 4th floor
>>
>> Ra'anana, Israel 4350109
>>
>>
>>
>> Tel : +972 (9) 7692306
>>
>> 8272306
>>
>> Email: yd...@redhat.com
>>
>> IRC : ydary
>>
>>
>>
>> On Sun, Feb 14, 2016 at 5:28 PM, Christophe TREFOIS <
>> christophe.tref...@uni.lu> wrote:
>>
>> Hi Yaniv,
>>
>>
>>
>> Would you recommend doing a clean install or can I simply move the HE
>> from the gluster mount point to NFS and tell HA agent to boot from there?
>>
>>
>>
>> What do you think?
>>
>>
>>
>> Thank you,
>>
>>
>>
>> --
>>
>> Christophe
>>
>> Sent from my iPhone
>>
>>
>> On 14 Feb 2016, at 15:30, Yaniv Dary  wrote:
>>
>> We will probably need to back up and restore the HE VM after doing a clean
>> install on NFS.
>>
>>
>> Yaniv Dary
>>
>> Technical Product Manager
>>
>> Red Hat Israel Ltd.
>>
>> 34 Jerusalem Road
>>
>> Building A, 4th floor
>>
>> Ra'anana, Israel 4350109
>>
>>
>>
>> Tel : +972 (9) 7692306
>>
>> 8272306
>>
>> Email: yd...@redhat.com
>>
>> IRC : ydary
>>
>>
>>
>> On Sat, Feb 6, 2016 at 11:30 PM, Christophe TREFOIS <
>> christophe.tref...@uni.lu> wrote:
>>
>> Dear all,
>>
>>
>>
>> I currently have a self-hosted setup with gluster on 1 node.
>>
>> I do have other data centers with 3 other hosts and local (sharable) NFS
>> storage. Furthermore, I have 1 NFS export domain.
>>
>>
>>
>> We would like to move from Gluster to NFS only on the first host.
>>
>>
>>
>> Does anybody have any experience with this?
>>
>>
>>
>> Thank you,
>>
>>
>>
>> —
>>
>> Christophe
>>
>>
>>
>>
>>
>>
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>>
>>
>>
>>
>
>


-- 
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Master Domain - oVirt 3.6 - hosted engine

2016-02-26 Thread Simone Tiraboschi
On Thu, Feb 25, 2016 at 10:03 PM, Dariusz Kryszak  wrote:

>
>
>
>
> On Tue, 2016-02-23 at 17:13 +0100, Simone Tiraboschi wrote:
> >
> >
> > On Tue, Feb 23, 2016 at 4:19 PM, Dariusz Kryszak <
> > dariusz.krys...@gmail.com> wrote:
> > > Hi folks,
> > > I have a question about the master domain when I'm using a hosted engine
> > > deployment.
> > > At the beginning I made a deployment on a NUC (small home
> > > installation) with the hosted engine on an NFS share from the NUC host.
> > > I configured a Gluster FS on the same machine and used it for
> > > the master domain and the ISO domain. Let's say all-in-one.
> > > After a reboot something strange happened. The log says that the master
> > > domain is not available and that the master role has to move to the
> > > hosted_storage.
> > > This is not OK in my opinion. I understand the behavior: because the
> > > master domain is not available, the master role has been migrated to
> > > another shareable domain (in this case the hosted_storage domain, which
> > > is NFS).
> > > Do you think this should be blocked in this particular case, i.e. when
> > > only the hosted_storage is available? Right now it is not possible
> > > to change this situation because the hosted engine resides on the
> > > hosted_storage; I can't migrate it.
> > >
> > >
> > It could happen only after the hosted-engine storage domain got
> > imported by the engine but to do that you need an additional storage
> > domain which will become the master storage domain.
> > In the past we had a bug that let you remove the last regular storage
> > domain, and in that case the hosted-engine domain would become the master
> > storage domain; as you pointed out, that was an issue.
> > https://bugzilla.redhat.com/show_bug.cgi?id=1298697
> >
> > Now it should be fixed. If it just happened again because your
> > regular gluster storage domain wasn't available, it is not really fixed.
> > Adding Roy here.
> > Dariusz, which release are you using?
>
> Regarding the oVirt version:
> 1. ovirt manager
> ovirt-engine-setup - oVirt Engine Version: 3.6.2.6-1.el7.centos
>

The patch that should address that issue is here:
https://gerrit.ovirt.org/#/c/53208/

But you'll find it only in 3.6.3; it wasn't available at the time 3.6.2.6 was released.

Recovering from the condition you reached is possible but it requires a few
manual actions.
If your instance was almost empty redeploying is also an (probably easier)
option.



> # uname -a
> Linux ovirtm.stylenet 3.10.0-327.10.1.el7.x86_64 #1 SMP Tue Feb 16
> 17:03:50 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
> cat /etc/redhat-release
> CentOS Linux release 7.2.1511 (Core)
>
>
>
> 2. hypervisor
>
> uname -a
> Linux ovirth1.stylenet 3.10.0-327.10.1.el7.x86_64 #1 SMP Tue Feb 16
> 17:03:50 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
> cat /etc/redhat-release
> CentOS Linux release 7.2.1511 (Core)
>
> rpm -qa|grep 'hosted\|vdsm'
> vdsm-cli-4.17.18-1.el7.noarch
> ovirt-hosted-engine-ha-1.3.3.7-1.el7.centos.noarch
> vdsm-xmlrpc-4.17.18-1.el7.noarch
> vdsm-jsonrpc-4.17.18-1.el7.noarch
> vdsm-4.17.18-1.el7.noarch
> vdsm-python-4.17.18-1.el7.noarch
> vdsm-yajsonrpc-4.17.18-1.el7.noarch
> vdsm-hook-vmfex-dev-4.17.18-1.el7.noarch
> ovirt-hosted-engine-setup-1.3.2.3-1.el7.centos.noarch
> vdsm-infra-4.17.18-1.el7.noarch
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users