Re: [ovirt-users] OpenStack Horizon provision to oVirt / front end

2016-06-02 Thread Barak Korren
On 3 June 2016 at 02:46, "Bill Bill"  wrote:
>
> Is it possible for OpenStack Horizon to be used as a front end for oVirt
to view, manage and/or provision VMs?

The short answer - no.
You could, however, use nested virtualization to run oVirt VMs as OpenStack
hosts.

You can also use ManageIQ as a front-end for both.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] oVirt Windows Guest Tools & Live Migration issues.

2016-06-02 Thread Anantha Raghava

Hi,

We have just installed oVirt 3.6 on 3 hosts with storage volumes on a 
Fibre Channel storage box. Everything is working fine except for the following.


1. We have created 15 virtual machines, all running Windows Server 2012 
R2. The VM properties do not report the operating system, nor do they show 
the IP and FQDN in the Admin Portal. There is always an exclamation mark 
reporting that the OS differs from the template and that there are timezone 
issues. We changed the timezone to Indian Standard Time on both the VM 
and the host; the result is the same. We installed the oVirt Windows Guest 
Tools; the result is the same. A screenshot is below.


[Screenshot: VMs]

2. When we manually try to migrate a VM from one host to another, the 
migration is initiated but eventually fails.


Is there a specific setting missing here, or is this a bug?

Note:

All hosts are installed with a CentOS 7.2 minimal installation; oVirt node 
is installed and activated.

We do not have DNS in our environment; we have to make do with IPs.
We are yet to apply the 3.6.6 patch on Engine and Nodes.
We are running a stand alone engine, not a Hosted Engine.
--

Thanks & Regards,


Anantha Raghava

eXza Technology Consulting & Services


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Could not associate gluster brick with correct network warning

2016-06-02 Thread Ramesh Nachimuthu



On 05/30/2016 02:00 PM, Roderick Mooi wrote:

Hi

Yes, I created the volume using "gluster volume create ..." prior to 
installing ovirt. Something I noticed is that there is no "gluster" 
bridge on top of the interface I selected for the "Gluster Management" 
network - could this be the problem?




oVirt is not able to associate the FQDN "glustermount.host1" with any of 
the network interfaces on the host. This is not a major problem: 
everything will work except brick management from oVirt. You won't be 
able to perform any brick-specific actions using oVirt.
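
For illustration, a minimal sketch of creating the volume so that the brick
addresses match what oVirt already knows about the hosts (host names and
paths below are made-up placeholders, not taken from your setup):

# probe the peers and create the volume using the same addresses the hosts
# are registered with in oVirt, so the bricks can be mapped to a host network
gluster peer probe host1.example.com
gluster peer probe host2.example.com
gluster volume create data replica 2 \
    host1.example.com:/gluster/data/brick \
    host2.example.com:/gluster/data/brick
gluster volume start data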


Note: we are planning to remove the repeated warning message seen in the 
engine.log.



Regards,
Ramesh


Thanks,

Roderick

Roderick Mooi

Senior Engineer: South African National Research Network (SANReN)
Meraka Institute, CSIR

roder...@sanren.ac.za  | +27 12 841 4111 
| www.sanren.ac.za 


On Fri, May 27, 2016 at 11:35 AM, Ramesh Nachimuthu 
> wrote:


How did you create the volume? It looks like the volume was created
using an FQDN in the Gluster CLI.


Regards,
Ramesh

- Original Message -
> From: "Roderick Mooi" >
> To: "users" >
> Sent: Friday, May 27, 2016 2:34:51 PM
> Subject: [ovirt-users] Could not associate gluster brick with
correct network warning
>
> Good day
>
> I've setup a "Gluster Management" network in DC, cluster and all
hosts. It is
> appearing as "operational" in the cluster and all host networks look
> correct. But I'm seeing this warning continually in the engine.log:
>
> 2016-05-27 08:56:58,988 WARN
>
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc]
> (DefaultQuartzScheduler_Worker-80) [] Could not associate brick
> 'glustermount.host1:/gluster/data/brick' of volume
> '7a25d2fb-1048-48d8-a26d-f288ff0e28cb' with correct network as
no gluster
> network found in cluster '0002-0002-0002-0002-02b8'
>
> This is on ovirt 3.6.5.
>
> Can anyone assist?
>
> Thanks,
>
> Roderick Mooi
>
> Senior Engineer: South African National Research Network (SANReN)
> Meraka Institute, CSIR
>
> roder...@sanren.ac.za  | +27 12
841 4111  | www.sanren.ac.za





___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Questions on oVirt

2016-06-02 Thread Brett I. Holcomb
After using oVirt for about three months I have some questions that
really haven't been answered in any of the documentation, posts, or
searches. Or, more correctly, I've found some answers but am trying to
put the pieces together.

My setup is one physical host that used to run VMware ESXi 6, with the
VMs on an iSCSI LUN on a Synology 3615xs unit.  I have one physical
Windows workstation, and all the servers (DNS, DHCP, file, etc.) are VMs,
which live on an iSCSI LUN on the Synology.

* Hosted-engine deployment - run the Engine as a VM.  This has the
advantage of using one machine as the host and running the Engine as a VM
on it, but what are the cons?

* Can I run the Engine on the host that will run the VMs, without
running it in a VM?  That is, I install the OS on my physical box,
install the Engine, then set up datastores (iSCSI LUN), networking, etc.

* How do I run more than one Engine?  With just one there is no
redundancy, so can I run another Engine that accesses the same data center,
etc., as the first?  Or does each Engine have to have its own
data center, with backup achieved by migrating between the Engines'
data centers as needed?

* Given that I have a hosted-engine setup, how can I "undo" it and get to
running the Engine directly on the host?  Do I have to undo everything, or
can I just install another instance of the Engine on the host (not in a
VM), move the VMs to it, and then remove the Engine VM?

* System shutdown - if I shut down the host, what is the proper
procedure?  Go to global maintenance mode and then shut down the host, or
do I have to do some other steps to make sure VMs don't get corrupted?
 On ESXi we'd put a host into maintenance mode after shutting down or
moving the VMs, so I assume it's the same here: shut down the VMs (since
there is nowhere to move them), go into global maintenance mode, then shut
down.  On startup the Engine will come up, then I start my VMs (rough
command sketch below).

* Upgrading engine and host - do I just run yum to install the new
versions on the host and engine and then run engine-setup, or do I need to
go into maintenance mode first?  I assume the 4.0 production upgrade will
be much more involved, but hopefully keeping updated will make it a little
less painful.
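
For reference, the rough command sequence I have in mind for the last two
questions (a sketch based on my reading of the docs, not a confirmed
procedure; package names assumed from the standard 3.6 repositories):

# shutdown (single-host hosted-engine):
hosted-engine --set-maintenance --mode=global   # keep the HA agents from restarting the engine
# shut down the guest VMs from the engine (or from inside the guests), then:
hosted-engine --vm-shutdown                     # stop the engine VM itself
shutdown -h now                                 # power off the host

# minor upgrade (engine first, then each host while it is in maintenance):
yum update "ovirt-engine-setup*"
engine-setup
yum update                                      # on each host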

Thanks.

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] OpenStack Horizon provision to oVirt / front end

2016-06-02 Thread Bill Bill
Is it possible for OpenStack Horizon to be used as a front end for oVirt to 
view, manage and/or provision VMs?
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] How to automate the ovirt host deployment?

2016-06-02 Thread Arman Khalatyan
After a few days of playing with oVirt + Foreman I can now deploy bare metal
with Foreman and attach it to oVirt.
Overall it works, but there are some points worth mentioning to beginners like
me:

a) To make auto-discovery, oVirt and deployment work together, one
should enable the following modules in "foreman-installer -i":
1. [✓] Configure foreman
2. [✓] Configure foreman_cli
3. [✓] Configure foreman_proxy
4. [✓] Configure puppet
6. [✓] Configure foreman_plugin_bootdisk
10. [✓] Configure foreman_plugin_dhcp_browser
12. [✓] Configure foreman_plugin_discovery
21. [✓] Configure foreman_plugin_setup
23. [✓] Configure foreman_plugin_templates
28. [✓] Configure foreman_compute_ovirt
33. [✓] Configure foreman_proxy_plugin_discovery

b) Next, follow the documentation:
http://www.ovirt.org/develop/release-management/features/foreman/foremanintegration/

You should be able to see the hosts from the oVirt interface.
I was not able to add an auto-discovered host to oVirt; it always throws an
exception: Failed to add Host  (User: admin@internal). It is probably
a bug.

In order to add the hosts, I first provisioned the auto-discovered hosts
with Foreman to CentOS 7 and then added them over the GUI in . It is
important here to add the path to the oVirt repository to the
installation template:

yum install http://resources.ovirt.org/pub/yum-repo/ovirt-release36.rpm

If your provisioned host does not know about the repository, you cannot add
the Foreman host from the oVirt GUI.
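
For example, a minimal sketch of what the relevant part of a provisioning
(kickstart) template could look like (only the repository URL is taken from
above; the %post wrapper is generic):

%post
# make the freshly provisioned host aware of the oVirt repository, otherwise
# adding it from the oVirt GUI will fail
yum install -y http://resources.ovirt.org/pub/yum-repo/ovirt-release36.rpm
%end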




***

 Dr. Arman Khalatyan  eScience -SuperComputing
 Leibniz-Institut für Astrophysik Potsdam (AIP)
 An der Sternwarte 16, 14482 Potsdam, Germany

***

On Tue, May 31, 2016 at 8:03 PM, Arman Khalatyan  wrote:

> Nice! Finally no sad face anymore:)
> I am testing centos 7.2, with foreman 1.11
>
> Testing  now with unattended installations on multiple vms. Works like a
> charm:)
> Later will try on baremetals.
> I need to learn how to write templates.
>
>
> On 30 May 2016 at 22:22, Arman Khalatyan  wrote:
> >
> > Sorry for the previous empty email.
> >
> > I was testing foreman plugins for ovirt deploy. They are somehow
> broken. The foreman-installer --enable-ovirt-provisioning-plugin breaks the
> foreman installation. I need to dig deeper:(
>
> Don't know what distribution you're using, but setting it all up manually
> showed me that Foreman needs to be at least at 1.11 for the plugin to work.
> Otherwise it behaved in the same way for me; all fine and well until the
> provision plugin was installed and then *sadface* :)
>
> Get Foreman up to version 1.11 and you'll be fine, is my guess.
>
> /K
> >
> > On 28 May 2016 at 4:07 PM, "Yaniv Kaul"  wrote:
> >>
> >> >
> >
> > >
> > >
> > > On Sat, May 28, 2016 at 12:50 PM, Arman Khalatyan 
> wrote:
> >>
> >> >>
> >
> > >> Thank you for the hint. I will try next week.
> > >> Foreman looks quite complex:)
> > >
> > >
> > > I think this is an excellent suggestion - Foreman, while it may take a
> while to set up, will also be extremely useful to provision and manage not
> only hosts, but VMs later on!
> > >
> >>
> >> >> I would prefer a simple Python script with 4 lines: add, install,
> set up networks and activate.
> >
> >
> > >
> > >
> > > You can look at ovirt-system-tests, the testing suite for oVirt, for
> Python code that does the above.
> > > Y.
> > >
> >>
> >>
> > >>
> > >> On 27 May 2016 at 6:51 PM, "Karli Sjöberg" <
> karli.sjob...@slu.se> wrote:
> >>
> >> >>>
> >
> > >>>
> > >>> On 27 May 2016 at 18:41, Arman Khalatyan  wrote:
> > >>> >
> > >>> > Hi, I am looking for a way to automate host deployments in a
> cluster environment.
> > >>> > Assuming we have 20 nodes with CentOS 7 and eth0/eth1 configured, is
> it possible to automate the installation with the ovirt-sdk?
> > >>> > Are there some examples?
> > >>>
> > >>> You could do that, or look into full life cycle management with The
> Foreman.
> > >>>
> > >>> /K
> > >>>
> > >>> >
> > >>> > Thanks,
> > >>> > Arman.
> > >>
> > >>
> > >
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Sanlock add Lockspace Errors

2016-06-02 Thread David Teigland
On Thu, Jun 02, 2016 at 06:47:37PM +0300, Nir Soffer wrote:
> > This is a mess that's been caused by improper use of storage, and various
> > sanity checks in sanlock have all reported errors for "impossible"
> > conditions indicating that something catastrophic has been done to the
> > storage it's using.  Some fundamental rules are not being followed.
> 
> Thanks David.
> 
> Do you need more output from sanlock to understand this issue?

I can think of nothing more to learn from sanlock.  I'd suggest tighter,
higher level checking or control of storage.  Low level sanity checks
detecting lease corruption are not a convenient place to work from.

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Sanlock add Lockspace Errors

2016-06-02 Thread Nir Soffer
On Thu, Jun 2, 2016 at 6:35 PM, David Teigland  wrote:
>> verify_leader 2 wrong space name
>> 4643f652-8014-4951-8a1a-02af41e67d08
>> f757b127-a951-4fa9-bf90-81180c0702e6
>> /dev/f757b127-a951-4fa9-bf90-81180c0702e6/ids
>
>> leader1 delta_acquire_begin error -226 lockspace
>> f757b127-a951-4fa9-bf90-81180c0702e6 host_id 2
>
> VDSM has tried to join VG/lockspace/storage-domain "f757b127" on LV
> /dev/f757b127-a951-4fa9-bf90-81180c0702e6/ids.  But sanlock finds that
> lockspace "4643f652" is initialized on that storage, i.e. inconsistency
> between the leases formatted on disk and what the leases are being used
> for.  That should never happen unless sanlock and/or storage are
> used/moved/copied wrongly.  The error is a sanlock sanity check to catch
> misuse.
>
>
>> s1527 check_other_lease invalid for host 0 0 ts 7566376 name  in
>> 4643f652-8014-4951-8a1a-02af41e67d08
>
>> s1527 check_other_lease leader 12212010 owner 1 11 ts 7566376
>> sn f757b127-a951-4fa9-bf90-81180c0702e6 rn 
>> f888524b-27aa-4724-8bae-051f9e950a21.vm1.intern
>
> Apparently sanlock is already managing a lockspace called "4643f652" when
> it finds another lease in that lockspace has the inconsistent/corrupt name
> "f757b127".  I can't say what steps might have been done to lead to this.
>
> This is a mess that's been caused by improper use of storage, and various
> sanity checks in sanlock have all reported errors for "impossible"
> conditions indicating that something catastrophic has been done to the
> storage it's using.  Some fundamental rules are not being followed.

Thanks David.

Do you need more output from sanlock to understand this issue?

Juergen, can you open an oVirt bug and include sanlock and vdsm logs from the time
this error started?

Thanks,
Nir
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Sanlock add Lockspace Errors

2016-06-02 Thread David Teigland
> verify_leader 2 wrong space name
> 4643f652-8014-4951-8a1a-02af41e67d08
> f757b127-a951-4fa9-bf90-81180c0702e6
> /dev/f757b127-a951-4fa9-bf90-81180c0702e6/ids

> leader1 delta_acquire_begin error -226 lockspace
> f757b127-a951-4fa9-bf90-81180c0702e6 host_id 2

VDSM has tried to join VG/lockspace/storage-domain "f757b127" on LV
/dev/f757b127-a951-4fa9-bf90-81180c0702e6/ids.  But sanlock finds that
lockspace "4643f652" is initialized on that storage, i.e. inconsistency
between the leases formatted on disk and what the leases are being used
for.  That should never happen unless sanlock and/or storage are
used/moved/copied wrongly.  The error is a sanlock sanity check to catch
misuse.


> s1527 check_other_lease invalid for host 0 0 ts 7566376 name  in
> 4643f652-8014-4951-8a1a-02af41e67d08

> s1527 check_other_lease leader 12212010 owner 1 11 ts 7566376
> sn f757b127-a951-4fa9-bf90-81180c0702e6 rn 
> f888524b-27aa-4724-8bae-051f9e950a21.vm1.intern

Apparently sanlock is already managing a lockspace called "4643f652" when
it finds another lease in that lockspace has the inconsistent/corrupt name
"f757b127".  I can't say what steps might have been done to lead to this.

This is a mess that's been caused by improper use of storage, and various
sanity checks in sanlock have all reported errors for "impossible"
conditions indicating that something catastrophic has been done to the
storage it's using.  Some fundamental rules are not being followed.
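
For what it's worth, one way to see what is actually written on that ids LV
is a direct dump (a diagnostic sketch only; the device path is taken from the
log above):

# print the lease records sanlock finds on disk, including the space name
sanlock direct dump /dev/f757b127-a951-4fa9-bf90-81180c0702e6/ids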

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Moving Hosted Engine NFS storage domain

2016-06-02 Thread Beard Lionel (BOSTON-STORAGE)
Hi,

I have tried these steps:

-  Stop the hosted engine VM

-  # vdsClient -s localhost forcedDetachStorageDomain 

-  Domain is now detached

-  # hosted-engine --clean-metadata

-  # hosted-engine --vm-start

But the hosted storage domain path is still the old one.
If I run:
# vdsClient -s localhost getStorageDomainsList 
the path is correct!

So I don’t know where the wrong path is stored.

I think the only way is to reinstall Hosted VM from scratch.
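
For anyone following along, the only places I could think of checking are the
hosted-engine configuration on the hosts and the engine database (a guess at
the usual locations, not verified):

# the HA services read the hosted-engine storage path from here:
grep -i storage /etc/ovirt-hosted-engine/hosted-engine.conf
# the web UI shows what is stored in the engine database, which may still
# hold the old connection path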

@ Staniforth Paul, your procedure is not working ☹

Regards,
Lionel BEARD

From: Beard Lionel (BOSTON-STORAGE)
Sent: Wednesday, 1 June 2016 22:26
To: 'Roy Golan' 
Cc: Roman Mohr ; users 
Subject: RE: [ovirt-users] Moving Hosted Engine NFS storage domain

Hi,

The path is neither shared nor mounted anymore on the previous NFS server, but the 
storage domain is still up and cannot be removed…

Is there a way to remove it from the command line?

Regards,
Lionel BEARD

From: Roy Golan [mailto:rgo...@redhat.com]
Sent: Wednesday, 1 June 2016 20:57
To: Beard Lionel (BOSTON-STORAGE) >
Cc: Roman Mohr >; users 
>
Subject: Re: [ovirt-users] Moving Hosted Engine NFS storage domain


On Jun 1, 2016 7:19 PM, "Beard Lionel (BOSTON-STORAGE)" 
> wrote:
>
> Hi,
>
> I am not able to do that, the "Remove" button is greyed out.
> And it is not possible to place it into maintenance mode because the hosted VM is 
> running on it...
>
> Any clue?
>

You must create a situation where VDSM fails to monitor that domain, i.e. 
stop sharing that path or block it; then the status will allow you to force 
remove it.
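
For example (a sketch only; 192.0.2.10 stands in for the old NFS server):

# block NFS traffic to the old server so monitoring of that domain fails
iptables -A OUTPUT -d 192.0.2.10 -p tcp --dport 2049 -j REJECT
# once the domain turns inactive, force-remove it from the UI, then undo the rule
iptables -D OUTPUT -d 192.0.2.10 -p tcp --dport 2049 -j REJECT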

> Thanks.
>
> Regards,
> Lionel BEARD
>
> > -Original Message-
> > From: Roman Mohr [mailto:rm...@redhat.com]
> > Sent: Wednesday, 1 June 2016 14:43
> > To: Beard Lionel (BOSTON-STORAGE) >
> > Cc: Staniforth, Paul 
> > >; 
> > users@ovirt.org
> > Subject: Re: [ovirt-users] Moving Hosted Engine NFS storage domain
> >
> > On Wed, Jun 1, 2016 at 2:40 PM, Beard Lionel (BOSTON-STORAGE)
> > > wrote:
> > > Hi,
> > >
> > >
> > >
> > > I have followed these steps :
> > >
> > >
> > >
> > > -  Stop supervdsmd + vdsmd + ovirt-ha-agent + ovirt-ha-broker
> > >
> > > -  Modify config file
> > >
> > > -  Copy files (cp better handles sparse files than rsync)
> > >
> > > -  Umount old hosted-engine path
> > >
> > > -  Restart services
> > >
> > > -  Hosted VM doesn't start => hosted-engine --clean-metadata. I get
> > > an error at the end, but now I am able to start the hosted VM:
> > >
> > > o ERROR:ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine:Metadata
> > > for current host missing.
> > >
> > >
> > >
> > > I can connect to the oVirt interface, everything seems to be working fine,
> > > but the hosted storage domain has an incorrect path; it is still
> > > pointing to the old one… I think this information is not correctly
> > > reported by the web interface, because this path doesn't exist anymore, and
> > the hosted VM is working!
> > >
> > > Does anyone know how to fix that?
> >
> > You have to do a "force remove" in the UI (without clicking the destroy
> > checkbox) of that storage. Then it should be reimported automatically.
> >
> > >
> > >
> > >
> > > Regards,
> > >
> > > Lionel BEARD
> > >
> > >
> > >
> > > From: Beard Lionel (BOSTON-STORAGE)
> > > Sent: Wednesday, 1 June 2016 10:37
> > > To: 'Staniforth, Paul' 
> > > >;
> > > users@ovirt.org Subject: RE: Moving Hosted Engine 
> > > NFS storage domain
> > >
> > >
> > >
> > > Hi,
> > >
> > >
> > >
> > > I’m trying to move Hosted storage from one NFS server to another.
> > >
> > > As this is not a production environment, I gave it a try, with no
> > > success, with a plan similar to yours.
> > >
> > >
> > >
> > > But I don't like to stop at a failure, so I will give it a second chance
> > > by following your plan :)
> > >
> > >
> > >
> > > Regards,
> > >
> > > Lionel BEARD
> > >
> > >
> > >
> > > From: users-boun...@ovirt.org 
> > > [mailto:users-boun...@ovirt.org] On
> > > behalf of Staniforth, Paul Sent: Tuesday, 31 May 2016 13:33 To:
> > > users@ovirt.org Subject: [ovirt-users] Moving 
> > > Hosted Engine NFS storage
> > > domain
> > >
> > >
> > >
> > > Hello,
> > >
> > >  we would like to move our NFS storage used for the HostedEngine.
> > >
> > >
> > >
> > > Plan would be
> > >
> > > enable global maintenance
> > > shut-down HostedEngine VM
> > > edit  

Re: [ovirt-users] Gluster + ovirt + resize2fs

2016-06-02 Thread Matt Wells
Thanks Sahina; an item I should have added as well.
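
For the archives, a rough sketch of the two options from the quoted question
below (volume, host and VG/LV names are placeholders, not my real ones):

# option 1: grow by adding bricks (add them in multiples of the replica count)
gluster volume add-brick datastore newhost1:/gluster/brick2 newhost2:/gluster/brick2
gluster volume rebalance datastore start

# option 2: grow the existing brick in place, then grow XFS online
lvextend -L +500G /dev/vg_gluster/lv_brick
xfs_growfs /gluster/brick1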

On Wed, Jun 1, 2016 at 10:58 PM Sahina Bose  wrote:

> [+gluster-users]
>
>
> On 06/01/2016 11:30 PM, Matt Wells wrote:
>
> Apologies, it's XFS so would be an xfs_growfs
>
> On Wed, Jun 1, 2016 at 10:58 AM, Matt Wells 
> wrote:
>
>> Hi everyone, I had a quick question that I really needed to bounce off
>> someone; one of those measure twice, cut once moments.
>>
>> My primary datastore is on a gluster volume and the short story is I'm
>> going to grow it.  I've thought of two options:
>>
>> 1 - add a brick with the new space
>> ** I was wondering, from the gluster point of view, if anyone had a best
>> practice for this.  I've looked around and found many people explaining
>> their stories, but no definitive best practice.
>>
>>
>> 2 - as I'm sitting atop LVM, grow the LV.
>> ** This is the one that makes me a little nervous.  I've done many
>> resize2fs runs and never had issues, but I've never had gluster running atop
>> that volume and my VMs atop that.  Has anyone had any experiences they
>> could share?
>>
>> Thanks all -
>> Wells
>>
>
>
>
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] how to come back to the list of available vms in serial console

2016-06-02 Thread Michael Heller


Sure, just exit with the SSH escape sequence:

~.



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] What network test validates a host?

2016-06-02 Thread Nicolas Ecarnot

Thank you Edward and Nir for your answers.

--
Nicolas ECARNOT
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] oVirt 3.6.7 Second Release Candidate is now available for testing

2016-06-02 Thread Sandro Bonazzola
The oVirt Project is pleased to announce the availability of the Second
Release Candidate of oVirt 3.6.7 for testing, as of June 2nd, 2016

This release is available now for:
* Fedora 22
* Red Hat Enterprise Linux 6.7 or later
* CentOS Linux (or similar) 6.7 or later

* Red Hat Enterprise Linux 7.2 or later
* CentOS Linux (or similar) 7.2 or later

This release supports Hypervisor Hosts running:
* Red Hat Enterprise Linux 7.2 or later
* CentOS Linux (or similar) 7.2 or later
* Fedora 22

This release candidate includes the following updated packages:

* ovirt-engine
* ovirt-engine-sdk-java
* ovirt-engine-sdk-python
* ovirt-hosted-engine-setup
* ovirt-hosted-engine-ha
* vdsm

See the release notes [1] for installation / upgrade instructions and a
list of new features and bugs fixed.

Notes:
* A new oVirt Live ISO will be available soon [2].
* A new oVirt Node Next Install ISO will be available soon [3].
* Mirrors [4] might need up to one day to synchronize.

Additional Resources:
* Read more about the oVirt 3.6.7 release highlights:
http://www.ovirt.org/release/3.6.7/
* Get more oVirt Project updates on Twitter: https://twitter.com/ovirt
* Check out the latest project news on the oVirt blog:
http://www.ovirt.org/blog/

[1] http://www.ovirt.org/release/3.6.7/
[2] http://resources.ovirt.org/pub/ovirt-3.6-pre/iso/ovirt-live/
[3]
http://resources.ovirt.org/pub/ovirt-3.6-pre/iso/ovirt-node-ng-installer/
[4] http://www.ovirt.org/Repository_mirrors#Current_mirrors

-- 
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] What network test validates a host?

2016-06-02 Thread Edward Haas
On Wed, Jun 1, 2016 at 2:27 PM, Nicolas Ecarnot  wrote:

> Hello,
>
> Last week, one of our DCs went through a network crash and, surprisingly,
> most of our hosts did resist.
> Some of them lost their connectivity and were stonithed.
>
> I'd like to be sure to understand what tests are made to declare a host
> valid :
>
> - On the storage part, I guess EVERY[1] host is doing a read+write test
> (using "dd") towards the storage domain(s), every... say 5 seconds (?)
> In case of failure, I guess a countdown is triggered until this host is
> shot.
>
> But the network failure we faced was not on the dedicated storage network,
> but purely on the "LAN" network (5 virtual networks).
>
> - What kind of test is done on each host to declare the connectivity is OK
> on every virtual network?
> I ask that because oVirt has no knowledge of any gateway it could ping,
> and in some cases, some virtual networks don't even have a gateway.
> Is it a ping towards the SPM?
> Towards the engine?
> Is it a ping?
>
> I ask that because I found out that some hosts restarted nicely and ran some
> VMs, which had their NICs OK, but inside those guests we found evidence
> that they were not able to communicate with very simple networks usually
> provided by the host.
> So I'm trying to figure out if a host could come back to life but be only
> partially sound.
>
> [1] Thus, I don't clearly see the benefit of the SPM concept...
>
> --
> Nicolas ECARNOT
>

Hello Nicolas,

In general, oVirt Engine frequently checks the host state by asking it to
send a stats report.
As part of that report, the NIC state is included.
The Engine will move the host to non-operational if the NIC carrying a
'required' network has its link down, or if it cannot reach the host through
the management network.

One can also use a VDSM hook to check connectivity against a reference IP
and fake the NIC state accordingly.

If storage domain connectivity fails (read attempts fail), this is reported
back to the Engine through the stats report, and the Engine will move the
host to non-operational after a few minutes.

Thanks,
Edy.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Sanlock add Lockspace Errors

2016-06-02 Thread Nir Soffer
 On Mon, May 30, 2016 at 11:06 AM, InterNetX - Juergen Gotteswinter
 wrote:
> Hi,
>
> for some time we have been getting error messages from sanlock, and so far I was not
> able to figure out what exactly they are trying to tell us and, more importantly, if
> it is something which can be ignored or needs to be fixed (and how).

Sanlock error messages are somewhat cryptic; hopefully David can explain them.

>
> Here are the Versions we are using currently:
>
> Engine
>
> ovirt-engine-3.5.6.2-1.el6.noarch
>
> Nodes
>
> vdsm-4.16.34-0.el7.centos.x86_64
> sanlock-3.2.4-1.el7.x86_64
> libvirt-lock-sanlock-1.2.17-13.el7_2.3.x86_64
> libvirt-daemon-1.2.17-13.el7_2.3.x86_64
> libvirt-lock-sanlock-1.2.17-13.el7_2.3.x86_64
> libvirt-1.2.17-13.el7_2.3.x86_64
>
> -- snip --
> May 30 09:55:27 vm2 sanlock[1094]: 2016-05-30 09:55:27+0200 294109
> [60137]: verify_leader 2 wrong space name
> 4643f652-8014-4951-8a1a-02af41e67d08
> f757b127-a951-4fa9-bf90-81180c0702e6
> /dev/f757b127-a951-4fa9-bf90-81180c0702e6/ids
> May 30 09:55:27 vm2 sanlock[1094]: 2016-05-30 09:55:27+0200 294109
> [60137]: leader1 delta_acquire_begin error -226 lockspace
> f757b127-a951-4fa9-bf90-81180c0702e6 host_id 2
> May 30 09:55:27 vm2 sanlock[1094]: 2016-05-30 09:55:27+0200 294109
> [60137]: leader2 path /dev/f757b127-a951-4fa9-bf90-81180c0702e6/ids offset 0
> May 30 09:55:27 vm2 sanlock[1094]: 2016-05-30 09:55:27+0200 294109
> [60137]: leader3 m 12212010 v 30003 ss 512 nh 0 mh 1 oi 2 og 8 lv 0
> May 30 09:55:27 vm2 sanlock[1094]: 2016-05-30 09:55:27+0200 294109
> [60137]: leader4 sn 4643f652-8014-4951-8a1a-02af41e67d08 rn
> 1eed8aa9-8fb5-4d27-8d1c-03ebce2c36d4.vm2.intern ts 3786679 cs 1474f033
> May 30 09:55:28 vm2 sanlock[1094]: 2016-05-30 09:55:28+0200 294110
> [1099]: s9703 add_lockspace fail result -226
> May 30 09:55:58 vm2 sanlock[1094]: 2016-05-30 09:55:58+0200 294140
> [60331]: verify_leader 2 wrong space name
> 4643f652-8014-4951-8a1a-02af41e67d08
> f757b127-a951-4fa9-bf90-81180c0702e6
> /dev/f757b127-a951-4fa9-bf90-81180c0702e6/ids
> May 30 09:55:58 vm2 sanlock[1094]: 2016-05-30 09:55:58+0200 294140
> [60331]: leader1 delta_acquire_begin error -226 lockspace
> f757b127-a951-4fa9-bf90-81180c0702e6 host_id 2
> May 30 09:55:58 vm2 sanlock[1094]: 2016-05-30 09:55:58+0200 294140
> [60331]: leader2 path /dev/f757b127-a951-4fa9-bf90-81180c0702e6/ids offset 0
> May 30 09:55:58 vm2 sanlock[1094]: 2016-05-30 09:55:58+0200 294140
> [60331]: leader3 m 12212010 v 30003 ss 512 nh 0 mh 1 oi 2 og 8 lv 0
> May 30 09:55:58 vm2 sanlock[1094]: 2016-05-30 09:55:58+0200 294140
> [60331]: leader4 sn 4643f652-8014-4951-8a1a-02af41e67d08 rn
> 1eed8aa9-8fb5-4d27-8d1c-03ebce2c36d4.vm2.intern ts 3786679 cs 1474f033
> May 30 09:55:59 vm2 sanlock[1094]: 2016-05-30 09:55:59+0200 294141
> [1098]: s9704 add_lockspace fail result -226
> May 30 09:56:05 vm2 sanlock[1094]: 2016-05-30 09:56:05+0200 294148
> [1094]: s1527 check_other_lease invalid for host 0 0 ts 7566376 name  in
> 4643f652-8014-4951-8a1a-02af41e67d08
> May 30 09:56:05 vm2 sanlock[1094]: 2016-05-30 09:56:05+0200 294148
> [1094]: s1527 check_other_lease leader 12212010 owner 1 11 ts 7566376 sn
> f757b127-a951-4fa9-bf90-81180c0702e6 rn
> f888524b-27aa-4724-8bae-051f9e950a21.vm1.intern
> May 30 09:56:28 vm2 sanlock[1094]: 2016-05-30 09:56:28+0200 294170
> [60496]: verify_leader 2 wrong space name
> 4643f652-8014-4951-8a1a-02af41e67d08
> f757b127-a951-4fa9-bf90-81180c0702e6
> /dev/f757b127-a951-4fa9-bf90-81180c0702e6/ids
> May 30 09:56:28 vm2 sanlock[1094]: 2016-05-30 09:56:28+0200 294170
> [60496]: leader1 delta_acquire_begin error -226 lockspace
> f757b127-a951-4fa9-bf90-81180c0702e6 host_id 2
> May 30 09:56:28 vm2 sanlock[1094]: 2016-05-30 09:56:28+0200 294170
> [60496]: leader2 path /dev/f757b127-a951-4fa9-bf90-81180c0702e6/ids offset 0
> May 30 09:56:28 vm2 sanlock[1094]: 2016-05-30 09:56:28+0200 294170
> [60496]: leader3 m 12212010 v 30003 ss 512 nh 0 mh 1 oi 2 og 8 lv 0
> May 30 09:56:28 vm2 sanlock[1094]: 2016-05-30 09:56:28+0200 294170
> [60496]: leader4 sn 4643f652-8014-4951-8a1a-02af41e67d08 rn
> 1eed8aa9-8fb5-4d27-8d1c-03ebce2c36d4.vm2.intern ts 3786679 cs 1474f033
> May 30 09:56:29 vm2 sanlock[1094]: 2016-05-30 09:56:29+0200 294171
> [6415]: s9705 add_lockspace fail result -226
> May 30 09:56:58 vm2 sanlock[1094]: 2016-05-30 09:56:58+0200 294200
> [60645]: verify_leader 2 wrong space name
> 4643f652-8014-4951-8a1a-02af41e67d08
> f757b127-a951-4fa9-bf90-81180c0702e6
> /dev/f757b127-a951-4fa9-bf90-81180c0702e6/ids
> May 30 09:56:58 vm2 sanlock[1094]: 2016-05-30 09:56:58+0200 294200
> [60645]: leader1 delta_acquire_begin error -226 lockspace
> f757b127-a951-4fa9-bf90-81180c0702e6 host_id 2
> May 30 09:56:58 vm2 sanlock[1094]: 2016-05-30 09:56:58+0200 294200
> [60645]: leader2 path /dev/f757b127-a951-4fa9-bf90-81180c0702e6/ids offset 0
> May 30 09:56:58 vm2 sanlock[1094]: 2016-05-30 09:56:58+0200 294200
> [60645]: leader3 m 12212010 v 30003 ss 512 nh 0 mh 1 oi 2 og 8 lv 0
> May 30 09:56:58 vm2 sanlock[1094]: