Re: [ovirt-users] Questions on oVirt

2016-06-02 Thread jvandewege
On 3-6-2016 3:23, Brett I. Holcomb wrote:
> After using oVirt for about three months I have some questions that
> really haven't been answered in any of the documentation, posts, or
> found in searching.  Or maybe more correctly I've found some answers
> but am trying to put the pieces I've found together.
>
> My setup is one physical host that used to run VMware ESXi6 and it
> handled running the VMs on an iSCSI LUN on a Synology 3615xs unit.  I
> have one physical Windows workstation and all the servers, DNS, DHCP,
> file, etc. are VMs.  The VMs are on an iSCSI LUN on the Synology.
>
> * Hosted-engine deployment - Run Engine as a VM.  This has the
> advantage of using one machine for host and running the Engine as a VM
> but what are the cons of it?
Not many but I can think of one: if there is a problem with the storage
where the engine VM is running then it can be a challenge to get things
working again. You can guard against that by not using your host as your
main testing workstation :-)

>
> * Can I run the Engine on the host that will run the VMs without
> running it on a VM?  That is I install the OS on my physical box,
> install Engine, then setup datastores (iSCSI LUN), networking etc.
That used to be possible up to 3.5; it was called the all-in-one (AIO) setup.

>
> * How do I run more than one Engine.  With just one there is no
> redundancy so can I run another Engine that accesses the same
> Datacenter, etc. as the first?  Or does each Engine have to have its
> own Datacenter and the backup is achieved by migrating between the
> Engines' Datacenters as needed.
There is just one Engine. Normally you would have more hosts, and the
Engine VM would migrate between those hosts (over the shared storage)
whenever you need to do maintenance on one of them.

>
> * Given I have a hosted Engine setup how can I "undo" it and  get to
> running just the Engine on the host.  Do I have to undo everything or
> can I just install another instance of the Engine on the host but not
> in a VM, move the VMs to it and then remove the Engine VM.
>
Get a second physical box, install an OS, install the Engine on it, and
restore the engine db backup onto it; that should work. The AIO setup is
no longer possible from 3.6 onwards.

> * System shutdown - If I shutdown the host what is the proper
> procedure?  Go to global maintenance mode and then shutdown the host
> or do I have to do some other steps to make sure VMs don't get
> corrupted.  On ESXi we'd put a host into maintenance mode after
> shutting down or moving the VMs so I assume it's the same here.
> Shut down the VMs since there is nowhere to move them, go into global
> maintenance mode. Shut down.  On startup the Engine will come up, then
> I start my VMs.
- Shut down any VMs that are running
- Stop ovirt-ha-agent and ovirt-ha-broker (they keep the Engine up)
- Stop the Engine VM
- Stop vdsmd
- Stop sanlock
- Unmount the shared storage
- Shut down the host.
The Engine will come up again once you power up the host. If you used
hosted-engine --set-maintenance --mode=global, then ha-agent/ha-broker
won't start the Engine for you; you have to run hosted-engine
--set-maintenance --mode=none first. You could add that to your system
startup if you prefer this sequence.
The above is my own recipe and it works for me, YMMV. (I have it
scripted and can post it, but it does more or less what I wrote.)
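
Roughly, the script boils down to something like this (a sketch only;
service names assume an EL7/systemd host running oVirt 3.6, and the
storage mount point is illustrative):

  # optional: keep ha-agent/ha-broker from restarting the Engine at boot;
  # remember to run --set-maintenance --mode=none after powering back up
  hosted-engine --set-maintenance --mode=global
  # shut down any remaining guest VMs first (portal, guest agent, or virsh)
  systemctl stop ovirt-ha-agent ovirt-ha-broker   # they keep the Engine up
  hosted-engine --vm-shutdown                     # stop the Engine VM itself
  systemctl stop vdsmd
  systemctl stop sanlock
  umount /rhev/data-center/mnt/<your-storage>     # illustrative path; unmount the shared storage
  poweroff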

>
> * Upgrading engine and host - Do I have to go to maintenance mode then
> run yum to install the new versions on the host and engine and then
> run engine-setup or do I need to go into maintenance mode?  I assume
> the 4.0 production install will be much more involved but hopefully
> keeping updated will make it a little less painful.
>
It's on the wiki somewhere, but I think the order is:
- enable global maintenance on the host
- upgrade the Engine by running engine-setup; it will either tell you
that the setup packages themselves need a yum update first, or it will
do the oVirt Engine upgrade straight away
- upgrade the rest of the engine packages and restart
- while the Engine is down, upgrade the host and reboot if needed
- disable global maintenance on the host; if all went well the Engine
will be restarted.
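
In command form that sequence is roughly (a sketch; the exact setup
package names vary a little between releases, so treat them as an
assumption and check the release notes):

  hosted-engine --set-maintenance --mode=global   # on the host
  # on the Engine VM:
  yum update "ovirt-engine-setup*"                # update the setup packages first, if needed
  engine-setup                                    # performs the actual Engine upgrade
  yum update                                      # remaining engine packages; restart/reboot if needed
  # back on the host, while the Engine is down:
  yum update                                      # host packages; reboot if required
  hosted-engine --set-maintenance --mode=none     # the Engine should come back automatically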

While hosted-engine may seem complicated, I haven't had any major issues
with it; then again, neither have my other oVirt deployments (standalone
Engine or AIO).

Hope this answers some of your questions,

Joop


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Questions on oVirt

2016-06-02 Thread Charles Tassell

Hi Brett,

  I'm not an expert on oVirt, but from my experience I would say you
probably want to run the Engine as a VM rather than on bare metal.
It has a lot of moving parts (PostgreSQL, JBoss, etc.) and they all
fit well inside the VM.  You can run it right on bare metal if you
want, though, as that was the preferred approach for versions prior to 3.6.
Also, you don't need to allocate the recommended 16GB of RAM to it if
you are only running 5-10 VMs.  You can probably get by with a 2-4GB VM,
which makes it more palatable.


  The thing to realize with oVirt is that the Engine is not the
hypervisor.  The Engine is just a management tool.  If it crashes, all
the VMs continue to run fine without it, so you can just start it back
up and it will resume managing everything.  If you only have
one physical host you don't really need to worry too much about
redundancy.  I don't think you can assign a host to two Engines at the
same time, but I might be wrong about that.


  If you want to migrate between a hosted engine and bare metal (or
vice versa) you can use the engine-backup command to back up and then
restore the configuration (same command, different arguments).  I've
never done it, but it should work fine.
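
For reference, the backup/restore pair looks roughly like this (a
sketch; file names are illustrative and the exact restore flags depend
on your version, so check engine-backup --help first):

  # on the old Engine (hosted or bare metal):
  engine-backup --mode=backup --file=engine-backup.tar.gz --log=backup.log
  # on the new Engine machine, after installing the ovirt-engine packages:
  engine-backup --mode=restore --file=engine-backup.tar.gz --log=restore.log --provision-db
  engine-setup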


 For a system shutdown, I would shut down all of the VMs (do the hosted
engine last) and then just shut down the box.  I'm not sure whether
maintenance mode is actually required, so I'd defer to someone
with more experience.  I know I have done it this way and it doesn't
seem to have caused any problems.


  For upgrades, I'd say shut down all of the VMs (including the hosted
engine), then apply your updates, reboot as necessary, and then start the
VMs back up.  Once everything is up, ssh into the hosted engine, update
it (yum update), reboot as necessary, and you are good to go.  If you
have a multi-host system that's a bit different.  In that case, put a
host into maintenance mode; migrate all the VMs to other hosts; update
it and reboot it; set it as active; migrate the VMs back; and move on to
the next host, doing the same thing.  The reason you want to shut down all
the VMs is that upgrades to the KVM/qemu packages may crash running VMs.
I've seen this happen on Ubuntu, so I assume it's the same on Red Hat/CentOS.
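
If you prefer to drive the per-host part of that rolling sequence from a
shell (with maintenance mode and migration still handled in the Admin
Portal), it is roughly just (a sketch; the package selection is whatever
yum offers, not an exact list):

  # after the host is in maintenance mode and all VMs have migrated away:
  yum -y update     # picks up vdsm, qemu-kvm, libvirt, kernel, ... as available
  reboot            # if a new kernel/qemu came in; otherwise optional
  # then set the host back to active in the Admin Portal and repeat on the next host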


  As for the 4.0 branch, I'd give it a month or two after release before
you use it for a production system.  I started with oVirt just as 3.6
came out and ran into some bugs that made things quite complicated.  On the
positive side, I learned a lot about how it works from getting advice on
how to deal with those issues. :)


On 2016-06-02 10:23 PM, users-requ...@ovirt.org wrote:

Message: 4
Date: Thu, 02 Jun 2016 21:23:49 -0400
From: "Brett I. Holcomb" 
To: users 
Subject: [ovirt-users] Questions on oVirt
Message-ID: <1464917029.26446.133.ca...@l1049h.com>
Content-Type: text/plain; charset="utf-8"

After using oVirt for about three months I have some questions that
really haven't been answered in any of the documentation, posts, or
found in searching.  Or maybe more correctly I've found some answers
but am trying to put the pieces I've found together.

My setup is one physical host that used to run VMware ESXi6 and it
handled running the VMs on an iSCSI LUN on a Synology 3615xs unit.  I
have one physical Windows workstation and all the servers, DNS, DHCP,
file, etc. are VMs.  The VMs are on an iSCSI LUN on the Synology.

* Hosted-engine deployment - Run Engine as a VM.  This has the
advantage of using one machine for host and running the Engine as a VM
but what are the cons of it?

* Can I run the Engine on the host that will run the VMs without
running it on a VM?  That is I install the OS on my physical box,
install Engine, then setup datastores (iSCSI LUN), networking etc.

* How do I run more than one Engine.  With just one there is no
redundancy so can I run another Engine that accesses the same Datacenter,
etc. as the first?  Or does each Engine have to have its own
Datacenter and the backup is achieved by migrating between the Engines'
Datacenters as needed.

* Given I have a hosted Engine setup how can I "undo" it and get to
running just the Engine on the host.  Do I have to undo everything or
can I just install another instance of the Engine on the host but not
in a VM, move the VMs to it and then remove the Engine VM.

* System shutdown - If I shutdown the host what is the proper
procedure?  Go to global maintenance mode and then shutdown the host or
do I have to do some other steps to make sure VMs don't get corrupted.
On ESXi we'd put a host into maintenance mode after shutting down or
moving the VMs so I assume it's the same here. Shut down the VMs since there
is nowhere to move them, go into global maintenance mode. Shut down.
On startup the Engine will come up, then I start my VMs.

* Upgrading engine and host - Do I have to go to maintenance mode then
run yum to install the new versions on the host and engine and then run
engine-setup or do I need to go into maintenance mode?  I assume the
4.0 production install will be much more involved but hopefully keeping
updated will make it a little less painful.

Re: [ovirt-users] OpenStack Horizon provision to oVirt / front end

2016-06-02 Thread Barak Korren
On 3 June 2016 at 02:46, "Bill Bill" wrote:
>
> Is it possible for OpenStack Horizon to be used as a front end for oVirt
to view, manage and/or provision VMs?

The short answer - no.
You could, however, try to use nested virt to use oVirt VMs as OpenStack
hosts.

You can also use ManageIQ as a front-end for both.
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] oVirt Windows Guest Tools & Live Migration issues.

2016-06-02 Thread Anantha Raghava

Hi,

We have just installed oVirt 3.6 on 3 hosts, with storage volumes on a
Fibre Channel storage box. Everything is working fine except the following.


1. We have created 15 virtual machines, all running Windows Server 2012
R2. The VM properties do not report the operating system, nor do they show
the IP and FQDN in the Admin Portal. There is always an exclamation mark
reporting that the OS is different from the template and that there are
timezone issues. We have changed the timezone to Indian Standard Time in
both the VM and the host; the same result continues. We have installed the
Windows Guest Tools; the same result continues. A screenshot is below.


[screenshot: VMs tab]

2. When we manually try to migrate the VMs from one host to another,
the migration gets initiated but eventually fails.


Is there any specific setting missing here, or is it a bug?

Note:

All hosts are installed with a CentOS 7.2 minimal installation; the oVirt
node packages are installed and activated.

We do not have DNS in our environment; we have to make do with IPs.
We have yet to apply the 3.6.6 update on the Engine and nodes.
We are running a standalone Engine, not a Hosted Engine.
--

Thanks & Regards,


Anantha Raghava

eXza Technology Consulting & Services


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Could not associate gluster brick with correct network warning

2016-06-02 Thread Ramesh Nachimuthu



On 05/30/2016 02:00 PM, Roderick Mooi wrote:

Hi

Yes, I created the volume using "gluster volume create ..." prior to 
installing ovirt. Something I noticed is that there is no "gluster" 
bridge on top of the interface I selected for the "Gluster Management" 
network - could this be the problem?




oVirt is not able to associate the FQDN "glustermount.host1" with any
of the network interfaces on the host. This is not a major problem:
everything will work except brick management from oVirt, i.e. you won't
be able to do any brick-specific actions from oVirt.


Note: we are planning to remove the repeated warning message seen in the
engine.log.
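
For reference, brick association works when the brick hostnames resolve
to an address on the interface that carries the gluster network; a
sketch of creating such a volume (volume name, hostnames and paths are
illustrative):

  # use hostnames/IPs that belong to the gluster-network interface on each host
  gluster volume create data replica 2 \
      gluster-host1:/gluster/data/brick \
      gluster-host2:/gluster/data/brick
  gluster volume start data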



Regards,
Ramesh


Thanks,

Roderick

Roderick Mooi

Senior Engineer: South African National Research Network (SANReN)
Meraka Institute, CSIR

roder...@sanren.ac.za  | +27 12 841 4111 
| www.sanren.ac.za 


On Fri, May 27, 2016 at 11:35 AM, Ramesh Nachimuthu
<rnach...@redhat.com> wrote:


How did you create the volume? It looks like the volume was created
using an FQDN in the Gluster CLI.


Regards,
Ramesh

- Original Message -
> From: "Roderick Mooi" <roder...@sanren.ac.za>
> To: "users" <users@ovirt.org>
> Sent: Friday, May 27, 2016 2:34:51 PM
> Subject: [ovirt-users] Could not associate gluster brick with
correct network warning
>
> Good day
>
> I've setup a "Gluster Management" network in DC, cluster and all
hosts. It is
> appearing as "operational" in the cluster and all host networks look
> correct. But I'm seeing this warning continually in the engine.log:
>
> 2016-05-27 08:56:58,988 WARN
>
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc]
> (DefaultQuartzScheduler_Worker-80) [] Could not associate brick
> 'glustermount.host1:/gluster/data/brick' of volume
> '7a25d2fb-1048-48d8-a26d-f288ff0e28cb' with correct network as
no gluster
> network found in cluster '0002-0002-0002-0002-02b8'
>
> This is on ovirt 3.6.5.
>
> Can anyone assist?
>
> Thanks,
>
> Roderick Mooi
>
> Senior Engineer: South African National Research Network (SANReN)
> Meraka Institute, CSIR
>
> roder...@sanren.ac.za  | +27 12
841 4111  | www.sanren.ac.za

>
> ___
> Users mailing list
> Users@ovirt.org 
> http://lists.ovirt.org/mailman/listinfo/users
>




___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Questions on oVirt

2016-06-02 Thread Brett I. Holcomb
After using oVirt for about three months I have some questions that
really haven't been answered in any of the documentation, posts, or
found in searching.  Or maybe more correctly I've found some answers
but am trying to put the pieces I've found together.

My setup is one physical host that used to run VMware ESXi6 and it
handled running the VMs on an iSCSI LUN on a Synology 3615xs unit.  I
have one physical Windows workstation and all the servers, DNS, DHCP,
file, etc. are VMs.  The VMs are on an iSCSI LUN on the Synology.

* Hosted-engine deployment - Run Engine as a VM.  This has the
advantage of using one machine for host and running the Engine as a VM
but what are the cons of it?

* Can I run the Engine on the host that will run the VMs without
running it on a VM?  That is I install the OS on my physical box,
install Engine, then setup datastores (iSCSI LUN), networking etc.

* How do I run more than one Engine.  With just one there is no
redundancy so can I run another Engine that accesses the same Datacenter,
etc. as the first?  Or does each Engine have to have its own
Datacenter and the backup is achieved by migrating between the Engines'
Datacenters as needed.

* Given I have a hosted Engine setup how can I "undo" it and  get to
running just the Engine on the host.  Do I have to undo everything or
can I just install another instance of the Engine on the host but not
in a VM, move the VMs to it and then remove the Engine VM.

* System shutdown - If I shutdown the host what is the proper
procedure?  Go to global maintenance mode and then shutdown the host or
do I have to do some other steps to make sure VMs don't get corrupted.
On ESXi we'd put a host into maintenance mode after shutting down or
moving the VMs so I assume it's the same here. Shut down the VMs since there
is nowhere to move them, go into global maintenance mode. Shut down.
On startup the Engine will come up, then I start my VMs.

* Upgrading engine and host - Do I have to go to maintenance mode then
run yum to install the new versions on the host and engine and then run
engine-setup or do I need to go into maintenance mode?  I assume the
4.0 production install will be much more involved but hopefully keeping
updated will make it a little less painful.

Thanks.

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] OpenStack Horizon provision to oVirt / front end

2016-06-02 Thread Bill Bill
Is it possible for OpenStack Horizon to be used as a front end for oVirt to 
view, manage and/or provision VMs?
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] How to automate the ovirt host deployment?

2016-06-02 Thread Arman Khalatyan
After a few days of playing with oVirt+Foreman I can now deploy bare metal
with Foreman and attach it to oVirt.
Overall it works, but there are some points worth mentioning for beginners
like me:

a) To make auto-discovery, oVirt and deployment work together, one
should enable the following modules in "foreman-installer -i":
1. [✓] Configure foreman
2. [✓] Configure foreman_cli
3. [✓] Configure foreman_proxy
4. [✓] Configure puppet
6. [✓] Configure foreman_plugin_bootdisk
10. [✓] Configure foreman_plugin_dhcp_browser
12. [✓] Configure foreman_plugin_discovery
21. [✓] Configure foreman_plugin_setup
23. [✓] Configure foreman_plugin_templates
28. [✓] Configure foreman_compute_ovirt
33. [✓] Configure foreman_proxy_plugin_discovery

b) Next, follow the documentation:
http://www.ovirt.org/develop/release-management/features/foreman/foremanintegration/

You should be able to see the hosts from the oVirt interface.
I was not able to add an auto-discovered host to oVirt; it always throws an
exception: Failed to add Host  (User: admin@internal). Probably it
is a bug.

In order to add the hosts, I first provisioned the auto-discovered hosts
with Foreman to CentOS 7 and then added them via the oVirt GUI. It is
important to add the path to the oVirt repository to the installation
template:

yum install http://resources.ovirt.org/pub/yum-repo/ovirt-release36.rpm

If your provisioned host does not know about the repository, you cannot add
the Foreman-provisioned host from the oVirt GUI.
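
As an illustration, in the %post section of the Foreman provisioning
template this can be a single line (a sketch; the repo URL is the 3.6
one from above, adjust it for your release):

  # make the oVirt repo available so the Engine can later deploy this host
  yum install -y http://resources.ovirt.org/pub/yum-repo/ovirt-release36.rpm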




***

 Dr. Arman Khalatyan  eScience -SuperComputing
 Leibniz-Institut für Astrophysik Potsdam (AIP)
 An der Sternwarte 16, 14482 Potsdam, Germany

***

On Tue, May 31, 2016 at 8:03 PM, Arman Khalatyan  wrote:

> Nice! Finally no sad face anymore:)
> I am testing centos 7.2, with foreman 1.11
>
> Testing  now with unattended installations on multiple vms. Works like a
> charm:)
> Later will try on baremetals.
> I need to learn how to write templates.
>
>
> On 30 May 2016 at 22:22, Arman Khalatyan wrote:
> >
> > Sorry for the previous empty email.
> >
> > I was testing the Foreman plugins for oVirt deployment. They are somehow
> broken. foreman-installer --enable-ovirt-provisioning-plugin breaks the
> Foreman installation. I need to dig deeper :(
>
> Don't know what distribution you're using, but setting it all up manually
> showed me that Foreman needs to be at least 1.11 for the plugin to work.
> Otherwise it behaved in the same way for me; all fine and well until the
> provisioning plugin was installed and then *sadface* :)
>
> Get Foreman up to version 1.11 and you'll be fine, is my guess.
>
> /K
> >
> > On 28.05.2016 at 4:07 PM, "Yaniv Kaul" wrote:
> >>
> >> >
> >
> > >
> > >
> > > On Sat, May 28, 2016 at 12:50 PM, Arman Khalatyan 
> wrote:
> >>
> >> >>
> >
> > >> Thank you for the hint. I will try next week.
> > >> Foreman looks quite complex:)
> > >
> > >
> > > I think this is an excellent suggestion - Foreman, while it may take a
> while to set up, will also be extremely useful to provision and manage not
> only hosts, but VMs later on!
> > >
> >>
> >> >> I would prefer a simple Python script with 4 lines: add, install,
> set up networks and activate.
> >
> >
> > >
> > >
> > > You can look at ovirt-system-tests, the testing suite for oVirt, for
> Python code that does the above.
> > > Y.
> > >
> >>
> >>
> > >>
> > >> On 27.05.2016 at 6:51 PM, "Karli Sjöberg" <
> karli.sjob...@slu.se> wrote:
> >>
> >> >>>
> >
> > >>>
> > >>> On 27 May 2016 at 18:41, Arman Khalatyan wrote:
> > >>> >
> > >>> > Hi, I am looking for some method to automate the host deployments in a
> cluster environment.
> > >>> > Assuming we have 20 nodes with CentOS 7 and eth0/eth1 configured, is
> it possible to automate the installation with ovirt-sdk?
> > >>> > Are there some examples?
> > >>>
> > >>> You could do that, or look into full life cycle management with The
> Foreman.
> > >>>
> > >>> /K
> > >>>
> > >>> >
> > >>> > Thanks,
> > >>> > Arman.
> > >>
> > >>
> > >> ___
> > >> Users mailing list
> > >> Users@ovirt.org
> > >> http://lists.ovirt.org/mailman/listinfo/users
> > >>
> > >
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Sanlock add Lockspace Errors

2016-06-02 Thread David Teigland
On Thu, Jun 02, 2016 at 06:47:37PM +0300, Nir Soffer wrote:
> > This is a mess that's been caused by improper use of storage, and various
> > sanity checks in sanlock have all reported errors for "impossible"
> > conditions indicating that something catastrophic has been done to the
> > storage it's using.  Some fundamental rules are not being followed.
> 
> Thanks David.
> 
> Do you need more output from sanlock to understand this issue?

I can think of nothing more to learn from sanlock.  I'd suggest tighter,
higher level checking or control of storage.  Low level sanity checks
detecting lease corruption are not a convenient place to work from.

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Sanlock add Lockspace Errors

2016-06-02 Thread Nir Soffer
On Thu, Jun 2, 2016 at 6:35 PM, David Teigland  wrote:
>> verify_leader 2 wrong space name
>> 4643f652-8014-4951-8a1a-02af41e67d08
>> f757b127-a951-4fa9-bf90-81180c0702e6
>> /dev/f757b127-a951-4fa9-bf90-81180c0702e6/ids
>
>> leader1 delta_acquire_begin error -226 lockspace
>> f757b127-a951-4fa9-bf90-81180c0702e6 host_id 2
>
> VDSM has tried to join VG/lockspace/storage-domain "f757b127" on LV
> /dev/f757b127-a951-4fa9-bf90-81180c0702e6/ids.  But sanlock finds that
> lockspace "4643f652" is initialized on that storage, i.e. inconsistency
> between the leases formatted on disk and what the leases are being used
> for.  That should never happen unless sanlock and/or storage are
> used/moved/copied wrongly.  The error is a sanlock sanity check to catch
> misuse.
>
>
>> s1527 check_other_lease invalid for host 0 0 ts 7566376 name  in
>> 4643f652-8014-4951-8a1a-02af41e67d08
>
>> s1527 check_other_lease leader 12212010 owner 1 11 ts 7566376
>> sn f757b127-a951-4fa9-bf90-81180c0702e6 rn 
>> f888524b-27aa-4724-8bae-051f9e950a21.vm1.intern
>
> Apparently sanlock is already managing a lockspace called "4643f652" when
> it finds another lease in that lockspace has the inconsistent/corrupt name
> "f757b127".  I can't say what steps might have been done to lead to this.
>
> This is a mess that's been caused by improper use of storage, and various
> sanity checks in sanlock have all reported errors for "impossible"
> conditions indicating that something catastrophic has been done to the
> storage it's using.  Some fundamental rules are not being followed.

Thanks David.

Do you need more output from sanlock to understand this issue?

Juergen, can you open an oVirt bug and include sanlock and vdsm logs from the time
this error started?

Thanks,
Nir
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Sanlock add Lockspace Errors

2016-06-02 Thread David Teigland
> verify_leader 2 wrong space name
> 4643f652-8014-4951-8a1a-02af41e67d08
> f757b127-a951-4fa9-bf90-81180c0702e6
> /dev/f757b127-a951-4fa9-bf90-81180c0702e6/ids

> leader1 delta_acquire_begin error -226 lockspace
> f757b127-a951-4fa9-bf90-81180c0702e6 host_id 2

VDSM has tried to join VG/lockspace/storage-domain "f757b127" on LV
/dev/f757b127-a951-4fa9-bf90-81180c0702e6/ids.  But sanlock finds that
lockspace "4643f652" is initialized on that storage, i.e. inconsistency
between the leases formatted on disk and what the leases are being used
for.  That should never happen unless sanlock and/or storage are
used/moved/copied wrongly.  The error is a sanlock sanity check to catch
misuse.


> s1527 check_other_lease invalid for host 0 0 ts 7566376 name  in
> 4643f652-8014-4951-8a1a-02af41e67d08

> s1527 check_other_lease leader 12212010 owner 1 11 ts 7566376
> sn f757b127-a951-4fa9-bf90-81180c0702e6 rn 
> f888524b-27aa-4724-8bae-051f9e950a21.vm1.intern

Apparently sanlock is already managing a lockspace called "4643f652" when
it finds another lease in that lockspace has the inconsistent/corrupt name
"f757b127".  I can't say what steps might have been done to lead to this.

This is a mess that's been caused by improper use of storage, and various
sanity checks in sanlock have all reported errors for "impossible"
conditions indicating that something catastrophic has been done to the
storage it's using.  Some fundamental rules are not being followed.
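
For anyone who wants to see what is actually written on the storage in a
case like this, sanlock can dump the on-disk leases directly (read-only);
a sketch, using the ids LV path from the log lines above:

  # dump the delta leases on the ids LV and compare the lockspace name
  # recorded on disk with the storage domain UUID vdsm expects
  sanlock direct dump /dev/f757b127-a951-4fa9-bf90-81180c0702e6/ids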

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Moving Hosted Engine NFS storage domain

2016-06-02 Thread Beard Lionel (BOSTON-STORAGE)
Hi,

I have tried these steps:

-  Stop Hosted VM

-  # vdsClient -s localhost forcedDetachStorageDomain 

-  Domain is now detached

-  # hosted-engine --clean-metadata

-  # hosted-engine --vm-start

But the hosted storage domain path is still the old one.
If I run:
# vdsClient -s localhost getStorageDomainsList 
the path is correct!

So I don’t know where the wrong path is stored.

I think the only way is to reinstall Hosted VM from scratch.

@ Staniforth Paul, your procedure is not working ☹

Regards,
Lionel BEARD

From: Beard Lionel (BOSTON-STORAGE)
Sent: Wednesday, 1 June 2016 22:26
To: 'Roy Golan'
Cc: Roman Mohr; users
Subject: RE: [ovirt-users] Moving Hosted Engine NFS storage domain

Hi,

The path is neither shared nor mounted anymore on the previous NFS server, but the
storage domain is still up and cannot be removed…

Is there a possibility to remove it from the command line?

Regards,
Lionel BEARD

From: Roy Golan [mailto:rgo...@redhat.com]
Sent: Wednesday, 1 June 2016 20:57
To: Beard Lionel (BOSTON-STORAGE) <lbe...@cls.fr>
Cc: Roman Mohr <rm...@redhat.com>; users <users@ovirt.org>
Subject: Re: [ovirt-users] Moving Hosted Engine NFS storage domain


On Jun 1, 2016 7:19 PM, "Beard Lionel (BOSTON-STORAGE)"
<lbe...@cls.fr> wrote:
>
> Hi,
>
> I am not able to do that, "Remove" button is greyed.
> And it is not possible to place it into maintenance mode because hosted VM is 
> running on it...
>
> Any clue?
>

You must create a situation where vdsm would fail to monitor that domain, i.e.
stop sharing that path or block it, and then the status will allow you to
force-remove it.

> Thanks.
>
> Regards,
> Lionel BEARD
>
> > -Original Message-
> > From: Roman Mohr [mailto:rm...@redhat.com]
> > Sent: Wednesday, 1 June 2016 14:43
> > To: Beard Lionel (BOSTON-STORAGE) <lbe...@cls.fr>
> > Cc: Staniforth, Paul
> > <p.stanifo...@leedsbeckett.ac.uk>;
> > users@ovirt.org
> > Subject: Re: [ovirt-users] Moving Hosted Engine NFS storage domain
> >
> > On Wed, Jun 1, 2016 at 2:40 PM, Beard Lionel (BOSTON-STORAGE)
> > <lbe...@cls.fr> wrote:
> > > Hi,
> > >
> > >
> > >
> > > I have followed these steps :
> > >
> > >
> > >
> > > -  Stop supervdsmd + vdsmd + ovirt-ha-agent + ovirt-ha-broker
> > >
> > > -  Modify config file
> > >
> > > -  Copy files (cp better handles sparse files than rsync)
> > >
> > > -  Umount old hosted-engine path
> > >
> > > -  Restart services
> > >
> > > -  Hosted VM doesn’t start => hosted-engine --clean-metadata. I get
> > > an error at the end, but now I am able to start Hosted VM :
> > >
> > > o ERROR:ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine:Metadata
> > > for current host missing.
> > >
> > >
> > >
> > > I can connect to oVirt interface, everything seems to be working fine,
> > > but the Hosted storage domain has an incorrect path, it is still
> > > pointing to old one… I think this information is not correctly
> > > reported by web interface, because this path doesn’t exist anymore, and
> > hosted VM is working !
> > >
> > > Does anyone knows how to fix that ?
> >
> > You have to do a "force remove" in the UI (without clicking the destroy
> > checkbox) of that storage. Then it should be reimported automatically.
> >
> > >
> > >
> > >
> > > Regards,
> > >
> > > Lionel BEARD
> > >
> > >
> > >
> > > From: Beard Lionel (BOSTON-STORAGE)
> > > Sent: Wednesday, 1 June 2016 10:37
> > > To: 'Staniforth, Paul'
> > > <p.stanifo...@leedsbeckett.ac.uk>;
> > > users@ovirt.org
> > > Subject: RE: Moving Hosted Engine NFS storage domain
> > >
> > >
> > >
> > > Hi,
> > >
> > >
> > >
> > > I’m trying to move Hosted storage from one NFS server to another.
> > >
> > > As this is not a production environment, I gave it a try, without
> > > success, with a plan similar to yours.
> > >
> > >
> > >
> > > But I don’t like to stay on a failure, so I will give a second chance
> > > by following your plan J
> > >
> > >
> > >
> > > Regards,
> > >
> > > Lionel BEARD
> > >
> > >
> > >
> > > From: users-boun...@ovirt.org
> > > [mailto:users-boun...@ovirt.org] On behalf of Staniforth, Paul
> > > Sent: Tuesday, 31 May 2016 13:33
> > > To: users@ovirt.org
> > > Subject: [ovirt-users] Moving Hosted Engine NFS storage domain
> > >
> > >
> > >
> > > Hello,
> > >
> > >  we would like to move our NFS storage used for the HostedEngine.
> > >
> > >
> > >
> > > Plan would be
> > >
> > > enable global maintenance
> > > shut-down HostedEngine VM
> > > edit  /etc/ovirt-hosted-engine/hosted-engine.conf on hosts
> > >
> > > storage=newnfs:/newnfsvolume
> > >
> > > copy storage domain from old to new NFS server
> > > start HostedEngine VM
> > > run engine-setup on HostedEngine VM
> > > disable global maintenance
> > >
> 
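
For reference, the plan quoted above corresponds roughly to the following
on each host (a sketch only; service names, paths and the NFS export are
illustrative, and taking a backup first is strongly advised):

  hosted-engine --set-maintenance --mode=global
  hosted-engine --vm-shutdown
  systemctl stop ovirt-ha-agent ovirt-ha-broker vdsmd
  # point the hosts at the new export
  sed -i 's|^storage=.*|storage=newnfs:/newnfsvolume|' /etc/ovirt-hosted-engine/hosted-engine.conf
  # copy the domain contents from the old export to the new one,
  # preserving ownership and sparseness
  cp -a --sparse=always /mnt/oldnfs/<sd_uuid> /mnt/newnfs/
  systemctl start vdsmd ovirt-ha-broker ovirt-ha-agent
  # per the plan above: start the Engine VM, run engine-setup inside it,
  # then disable global maintenance
  hosted-engine --vm-start
  hosted-engine --set-maintenance --mode=none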

Re: [ovirt-users] Gluster + ovirt + resize2fs

2016-06-02 Thread Matt Wells
Thanks Sahina; an item I should have added as well.

On Wed, Jun 1, 2016 at 10:58 PM Sahina Bose  wrote:

> [+gluster-users]
>
>
> On 06/01/2016 11:30 PM, Matt Wells wrote:
>
> Apologies, it's XFS so would be an xfs_growfs
>
> On Wed, Jun 1, 2016 at 10:58 AM, Matt Wells 
> wrote:
>
>> Hi everyone, I had a quick question that I really needed to bounce off
>> someone; one of those measure twice cut once moments.
>>
>> My primary datastore is on a gluster volume and the short story is I'm
>> going to grow it.  I've thought of two options:
>>
>> 1 - add a brick with the new space
>> ** I was wondering, from the Gluster point of view, if anyone had a best
>> practice for this.  I've looked around and found many people explaining
>> their approaches, but not a definitive best practice.
>>
>>
>> 2 - as I'm sitting atop LVM, grow the LVM.
>> ** This is the one that makes me a little nervous.  I've done many
>> resize2fs runs and never had issues, but I've never had Gluster running atop
>> that volume and my VMs atop that.  Has anyone had any experiences they
>> could share?
>>
>> Thanks all -
>> Wells
>>
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
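
For reference, the two options above map roughly onto the following
commands (a sketch; volume name, hostnames, brick paths, LV names and
sizes are illustrative):

  # option 1: add a brick and rebalance (for a replicated volume, bricks
  # must be added in multiples of the replica count)
  gluster volume add-brick datastore newhost:/gluster/datastore/brick2
  gluster volume rebalance datastore start

  # option 2: grow the LV under the existing brick, then grow XFS
  lvextend -L +500G /dev/vg_gluster/lv_brick
  xfs_growfs /gluster/datastore    # xfs_growfs takes the mount point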
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] how to come back to the list of available vms in serial console

2016-06-02 Thread Michael Heller


Sure, just exit with the SSH escape sequence:

~.



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] What network test validates a host?

2016-06-02 Thread Nicolas Ecarnot

Thank you Edward and Nir for your answers.

--
Nicolas ECARNOT
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] oVirt 3.6.7 Second Release Candidate is now available for testing

2016-06-02 Thread Sandro Bonazzola
The oVirt Project is pleased to announce the availability of the Second
Release Candidate of oVirt 3.6.7 for testing, as of June 2nd, 2016

This release is available now for:
* Fedora 22
* Red Hat Enterprise Linux 6.7 or later
* CentOS Linux (or similar) 6.7 or later

* Red Hat Enterprise Linux 7.2 or later
* CentOS Linux (or similar) 7.2 or later

This release supports Hypervisor Hosts running:
* Red Hat Enterprise Linux 7.2 or later
* CentOS Linux (or similar) 7.2 or later
* Fedora 22

This release candidate includes the following updated packages:

* ovirt-engine
* ovirt-engine-sdk-java
* ovirt-engine-sdk-python
* ovirt-hosted-engine-setup
* ovirt-hosted-engine-ha
* vdsm

See the release notes [1] for installation / upgrade instructions and a
list of new features and bugs fixed.

Notes:
* A new oVirt Live ISO will be available soon [2].
* A new oVirt Node Next Install ISO will be available soon[3]
* Mirrors[4] might need up to one day to synchronize.

Additional Resources:
* Read more about the oVirt 3.6.7 release highlights:
http://www.ovirt.org/release/3.6.7/
* Get more oVirt Project updates on Twitter: https://twitter.com/ovirt
* Check out the latest project news on the oVirt blog:
http://www.ovirt.org/blog/

[1] http://www.ovirt.org/release/3.6.7/
[2] http://resources.ovirt.org/pub/ovirt-3.6-pre/iso/ovirt-live/
[3]
http://resources.ovirt.org/pub/ovirt-3.6-pre/iso/ovirt-node-ng-installer/
[4] http://www.ovirt.org/Repository_mirrors#Current_mirrors

-- 
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] What network test validates a host?

2016-06-02 Thread Nir Soffer
On Wed, Jun 1, 2016 at 2:27 PM, Nicolas Ecarnot  wrote:
> Hello,
>
> Last week, one of our DC went through a network crash, and surprisingly,
> most of our hosts did resist.
> Some of them lost their connectivity, and were stonithed.
>
> I'd like to be sure to understand what tests are made to declare a host
> valid :
>
> - On the storage part, I guess EVERY[1] host is doing a read+write test
> (using "dd") towards the storage domain(s), every... say 5 seconds (?)

We do:

- every 10 seconds (irs:sd_health_check_delay)
  - read first block from the metadata volume
  - check if vg is partial (block storage)
  - perform statvfs call (file storage)
  - validate master domain mount

- every 5 minutes (irs:repo_stats_cache_refresh_timeout):
  - run vgck (block storage)

We do not check writes to the storage; I guess we should add this, or monitor
the sanlock status, since sanlock does write to all storage domains every 20 seconds.
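
If you want to probe a domain by hand, a rough equivalent of those read
checks is (a sketch; the domain UUID and mount path are illustrative):

  # block storage: read the start of the domain's metadata LV, bypassing the page cache
  dd if=/dev/<sd_uuid>/metadata of=/dev/null bs=4096 count=1 iflag=direct
  # file storage: filesystem status of the domain mount point
  stat -f /rhev/data-center/mnt/<server:_export>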

> In case of failure, I guess a countdown is triggered until this host is
> shot.

In case of failure, the domain status is reported as invalid with an error code.
On the engine side, we start a 5-minute timer (configurable). If the domain did
not recover from the invalid state before the timer expires, we consider the
domain as failing.

If the domain is failing only on one host, this host will become
non-operational.
If the domain is failing on all hosts it will be deactivated. I think we
also try to recover the domain, but I don't know the details.

>
> But the network failure we faced was not on the dedicated storage network,
> but purely on the "LAN" network (5 virtual networks).
>
> - What kind of test is done on each host to declare the connectivity is OK
> on every virtual network?
> I ask that because oVirt has no knowledge of any gateway it could ping, and
> in some cases, some virtual networks don't even have a gateway.
> Is it a ping towards the SPM?

The Engine checks the SPM host status regularly, and if it fails, the Engine
will try to stop it and start the SPM on another host.

> Towards the engine?
> Is it a ping?
>
> I ask that because I found out that some host restarted nicely, ran some
> VMs, which had their NICs OK, but inside those guests, we find evidences
> that they were not able to communicate with very simple networks usually
> provided by the host.
> So I'm trying to figure out if a host could come back to life, but partially
> sound.
>
> [1] Thus, I don't clearly see the benefit of the SPM concept...

The SPM is the only host that can do metadata operations on the shared storage.
Without it, your data would be corrupted, so there is a benefit. However,
there are many issues with the SPM, and we are working on removing it and
the master domain, replacing them with a more fault-tolerant, efficient and
easier-to-maintain solution.

Nir
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] What network test validates a host?

2016-06-02 Thread Edward Haas
On Wed, Jun 1, 2016 at 2:27 PM, Nicolas Ecarnot  wrote:

> Hello,
>
> Last week, one of our DC went through a network crash, and surprisingly,
> most of our hosts did resist.
> Some of them lost their connectivity, and were stonithed.
>
> I'd like to be sure to understand what tests are made to declare a host
> valid :
>
> - On the storage part, I guess EVERY[1] host is doing a read+write test
> (using "dd") towards the storage domain(s), every... say 5 seconds (?)
> In case of failure, I guess a countdown is triggered until this host is
> shot.
>
> But the network failure we faced was not on the dedicated storage network,
> but purely on the "LAN" network (5 virtual networks).
>
> - What kind of test is done on each host to declare the connectivity is OK
> on every virtual network?
> I ask that because oVirt has no knowledge of any gateway it could ping,
> and in some cases, some virtual networks don't even have a gateway.
> Is it a ping towards the SPM?
> Towards the engine?
> Is it a ping?
>
> I ask that because I found out that some host restarted nicely, ran some
> VMs, which had their NICs OK, but inside those guests, we find evidences
> that they were not able to communicate with very simple networks usually
> provided by the host.
> So I'm trying to figure out if a host could come back to life, but
> partially sound.
>
> [1] Thus, I don't clearly see the benefit of the SPM concept...
>
> --
> Nicolas ECARNOT
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>

Hello Nicolas,

In general, oVirt Engine checks the host state frequently by asking it to
send a stats report.
As part of that report, the NIC state is reported.
The Engine will move the host to non-operational if the NIC link of a
'required' network is down, or if it cannot reach the host through the
management network.

One can also use a VDSM hook to check connectivity against a reference IP
and fake the NIC state accordingly.

If storage domain connectivity fails (attempts to read fail), it will be
reported back to the Engine through the stats report and the Engine will
move the host to non-operational after a few minutes.

Thanks,
Edy.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users