[ovirt-users] Re: Oracle Virtualization Manager 4.5 anyone?

2024-04-17 Thread Alex Crow via Users

Hi all,

The one big question I have is: is Oracle contributing back to oVirt, or do 
they plan to do so?


It seems there are still some talented people working on both, so why 
not pool that talent?


Best

Alex
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/VRNRPWOCGQFXPMXPGFLU4PO3U6SOWRTC/


[ovirt-users] Re: What is the status of the whole Ovirt Project?

2023-10-12 Thread Alex Crow via Users

Thanks Neal, your insight is very much appreciated.

A lot of people forget about SuSE, but it's got the same ethos that RH 
had a few years ago, and it's worth looking into.


I can imagine Fedora being a choice in CI/CD deployments, but I'd be 
surprised if more old-fashioned companies would accept the risk. We're going 
through a whole risk-vs-cost exercise right now.


It does seem that OpenShift is a potent platform, and pulling out of OKD 
would be a horrible mess, so here's hoping that if oVirt dies, OKD remains 
a viable platform for both containers and virtualisation (ah, legacy 
workloads!). It's just a shame that oVirt no longer has the sponsorship. 
I've been relentlessly ragged on on Reddit for admitting to using it 
(mostly by Hyper-V fanbois), but it's far, far better than Hyper-V and 
much less restrictive than VMWare. Hyper-V had me pulling my remaining 
hair out almost every single day. Five clicks to get the MAC address of 
a VM? Dynamic MAC addresses not being cluster-wide and pinned to a VM, 
so they change after a migration? Yuck.


I will read that link now!

Best regards

Alex
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/KB7ABPHIO63TDWQLLPEEJD6YB7QBYQ35/


[ovirt-users] Re: What is the status of the whole Ovirt Project?

2023-09-07 Thread Alex Crow via Users

All,

I'd rather base against either Rocky or Alma in the shorter term, or 
Ubuntu/Debian for a longer view. oVirt IMHO is a superior product to 
anything else if you know your way around it. Fantastic and informative 
GUI, a great set of APIs, and pretty solid in terms of storage support. 
I'm running it hyperconverged over ZFS+DRBD with Corosync and Pacemaker 
in a two-node cluster.
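
(For anyone curious, the DRBD-on-ZFS part of that looks roughly like the 
following; the pool, zvol and resource names are made up, and Pacemaker 
then just decides which node is DRBD primary:)

# on each node: a zvol as the backing device for DRBD resource r0
zfs create -V 500G tank/drbd_r0
drbdadm create-md r0
drbdadm up r0
# on the first node only, to kick off the initial sync
drbdadm primary --force r0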


I'm currently running a cluster on Rocky 8 and it's working perfectly at 
the moment. I'm not a fan of Gluster because of its small-I/O performance, 
but I'm sure it's still a useful option for running VMs with heavy storage 
requirements and lower small-I/O performance needs.


Fedora would get you fired from a lot of SME or enterprise environments. 
And with the mess around IBM/RH who knows if they'll drop OpenShift next 
and pull the devs out of OKD?


___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/FVSMQSPYXWJC3EO4IRM72HLPO3545V3O/


Re: [ovirt-users] CD drive not showing

2018-03-19 Thread Alex Crow

Maybe try removing the ISO domain and then importing it.

Alex


On 19/03/18 08:17, Junaid Jadoon wrote:

Hi,
Cd drive is not showing in windows 7 VM.

Please help me out???


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] change CD not working

2018-03-16 Thread Alex Crow

On 15/03/18 18:55, Junaid Jadoon wrote:



Ovirt engine and node version are 4.2.

"Error while executing action Change CD: Failed to perform "Change CD" 
operation, CD might be still in use by the VM.
Please try to manually detach the CD from withing the VM:
1. Log in to the VM
2 For Linux VMs, un-mount the CD using umount command;
For Windows VMs, right click on the CD drive and click 'Eject';"

Initially it was working fine; suddenly it started giving the above error.

Logs are attached.

please help me out

Regards,

Junaid

Detach and re-attach of the ISO domain should resolve this. It worked 
for me.
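
If it still complains the CD is in use, the manual in-guest step the error 
message refers to is just something like this inside a Linux guest (the 
device name may differ):

# run inside the guest, not on the host
umount /dev/sr0
eject /dev/sr0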


Alex

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] VM has been paused due to NO STORAGE SPACE ERROR ?!?!?!?!

2018-03-16 Thread Alex Crow

On 16/03/18 13:46, Nicolas Ecarnot wrote:

On 16/03/2018 at 13:28, Karli Sjöberg wrote:



On 16 March 2018 12:26, Enrico Becchetti wrote:


   Dear All,
    Has anyone seen that error?


Yes, I experienced it dozens of times on 3.6 (my 4.2 setup has 
insufficient workload to trigger such an event).

And in every case, there was no actual lack of space.


    Enrico Becchetti Servizio di Calcolo e Reti
I think I remember something to do with thin provisioning and the disk not 
being able to grow fast enough, so it runs out of space. Are the VM's disks 
thick or thin?


All our storage domains are thin-prov. and served by iSCSI (Equallogic 
PS6xxx and 4xxx).


Enrico, do you know if a bug has been filed about this?

Did the VM remain paused? In my experience the VM just gets temporarily 
paused while the storage is expanded. RH confirmed to me in a ticket 
that this is expected behaviour.


If you need high write performance your VM disks should always be 
preallocated. We only use Thin Provision for VMs where we know that disk 
writes are low (eg network services, CPU-bound apps, etc).
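
(Purely as an illustration - the engine URL, credentials, names and size 
below are made up - a preallocated disk corresponds to format "raw" plus 
sparse "false" when you create it through the 4.x REST API:)

curl -k -u 'admin@internal:PASSWORD' -H 'Content-Type: application/xml' \
  -d '<disk>
        <name>db01_disk1</name>
        <provisioned_size>107374182400</provisioned_size>
        <format>raw</format>
        <sparse>false</sparse>
        <storage_domains><storage_domain><name>data1</name></storage_domain></storage_domains>
      </disk>' \
  https://engine.example.com/ovirt-engine/api/disks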


Alex
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Seamless SAN HA failovers with oVirt?

2017-06-06 Thread Alex Crow
I use Open-E in production on standard Intel (Supermicro) hardware. It 
can work in A/A (only in respect of oVirt, ie one LUN normally active on 
one server, the other LUN normally stays on the other node) or A/P 
mode with multipath. Even in A/P mode it fails over quickly enough to 
avoid VM pauses, using virtual IPs that float between the nodes. These 
modes are supported for both iSCSI and NFS.


I've also successfully implemented the same kind of rapid failover using 
standard linux HA tools (pacemaker and corosync). I've had migration 
times under 2s.
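
The floating-IP part is nothing exotic; a rough sketch (addresses and 
resource names are made up, and the iSCSI target resource itself is not 
shown) is:

pcs resource create iscsi_vip ocf:heartbeat:IPaddr2 ip=10.0.10.50 cidr_netmask=24 op monitor interval=5s
pcs constraint colocation add iscsi_vip with iscsi_target INFINITY
pcs constraint order iscsi_target then iscsi_vip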


NFS has the added complication of filesystem locking. Maybe some of the 
docs on the CTDB site will help, as they ensure that NFS will be running 
on the same ports on each host and the locking DBs will be shared between 
the two hosts. I have no idea if TrueNAS supports CTDB or similar 
distributed locking mechanisms.


Caveat: this is with iSCSI resources. I've not really run VMs in oVirt 
in anger against any kind of NFS storage yet. My boss wants to try 
Tintri, so I'll see how that works.


Cheers

Alex

On 06/06/17 18:45, Matthew Trent wrote:

Thanks for the replies, all!

Yep, Chris is right. TrueNAS HA is active/passive and there isn't a way around 
that when failing between heads.

Sven: In my experience with iX support, they have directed me to reboot the active node to initiate 
failover. There are "hactl takeover" and "hactl giveback" commands, but reboot 
seems to be their preferred method.

VMs going into a paused state and resuming when storage is back online sounds 
great. As long as oVirt's pause/resume isn't significantly slower than the 
30-or-so seconds the TrueNAS takes to complete its failover, that's a pretty 
tolerable interruption for my needs. So my next questions are:

1) Assuming the SAN failover DOES work correctly, can anyone comment on their 
experience with oVirt pausing/thawing VMs in an NFS-based active/passive SAN 
failover scenario? Does it work reliably without intervention? Is it reasonably 
fast?

2) Is there anything else in the oVirt stack that might cause it to "freak out" 
rather than gracefully pause/unpause VMs?

2a) Particularly: I'm running hosted engine on the same TrueNAS storage. Does 
that change anything WRT to timeouts and oVirt's HA and fencing and sanlock and 
such?

2b) Is there a limit to how long oVirt will wait for storage before doing 
something more drastic than just pausing VMs?

--
Matthew Trent
Network Engineer
Lewis County IT Services
360.740.1247 - Helpdesk
360.740.3343 - Direct line


From: users-boun...@ovirt.org  on behalf of Chris Adams 

Sent: Tuesday, June 6, 2017 7:21 AM
To: users@ovirt.org
Subject: Re: [ovirt-users] Seamless SAN HA failovers with oVirt?

Once upon a time, Juan Pablo  said:

Chris, if you have active-active with multipath: you upgrade one system,
reboot it, check it came active again, then upgrade the other.

Yes, but that's still not how a TrueNAS (and most other low- to
mid-range SANs) works, so is not relevant.  The TrueNAS only has a
single active node talking to the hard drives at a time, because having
two nodes talking to the same storage at the same time is a hard problem
to solve (typically requires custom hardware with active cache coherency
and such).

You can (and should) use multipath between servers and a TrueNAS, and
that protects against NIC, cable, and switch failures, but does not help
with a controller failure/reboot/upgrade.  Multipath is also used to
provide better bandwidth sharing between links than ethernet LAGs.

--
Chris Adams 
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] How to change DATA to ISO domain

2017-05-02 Thread Alex Crow
Create a new DATA domain with what you really need. Then make that the 
master and put the old one into maintenance and remove it.


That should do what you need.

Cheers

Alex

On 02/05/17 15:13, Langley, Robert wrote:
I went to add my ISO domain as the first storage domain. It got 
created as a DATA domain instead. I thought I selected "ISO". Now, 
Manager won't allow me to change or remove. What can I do?



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Fwd: Multipath iSCSI with several IPs

2016-08-04 Thread Alex Crow

On 04/08/16 15:37, James Michels wrote:


Hi Dan,

The way you describe it, I should have 2 storage backends with 2 IPs 
for the same SAN backend, right? The problem I see is that when you 
create a disk for a VM, you assign it to only one storage domain... so 
if the first fails, how will oVirt know which one it should use as failover?



No, multipath is the *same* storage (ie LUN) exposed via two different 
network paths.
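
Outside of what oVirt does for you, a quick way to see this by hand is to 
discover and log in to the same target through both portal addresses and 
let dm-multipath coalesce the paths (the portal IPs here are made up):

iscsiadm -m discovery -t sendtargets -p 10.0.1.10:3260
iscsiadm -m discovery -t sendtargets -p 10.0.2.10:3260
iscsiadm -m node -l       # log in to all discovered portals
multipath -ll             # the LUN should appear once, with two paths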


Alex

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Host non operationnal due to an iSCSI problem

2016-07-19 Thread Alex Crow



On 19/07/16 15:22, Alexis HAUSER wrote:

I'm still finding this hard to understand. If you are using iSCSI, you
/are/ using a server (called the "Target" in SCSI speak). Is the iSCSI
storage actually on the first host?

It's a Dell bay (or "storage array", I think that's the correct name in 
english...)


If it offers iSCSI, it is indeed a "Target". It's effectively a server 
as it's offering up a service on the network! That does explain my 
confusion though...





How did you actually do the
discovery and assign the LUNs? In the storage domain properties you
should be able to see the IP and port of the Targets, something like
"iqn.2012-02:foo-target1,192.168.10.10,3260", and you need to ensure the
second host can reach that IP and port to be able to see the storage.

Actually I just made a test: authorizing access only to the second host (on the 
Dell bay) works, but only after setting it to maintenance mode and 
reactivating it.
Then authorizing both of the hosts (as initially) makes them both work 
now... It doesn't really make sense...
It is a very strange behavior. Maybe the second host needed to be set in 
maintenance mode and then reactivated?


Normally you should not have to do that. It could be that it was not 
allowed access, and you'd have to leave it a while for the host to retry.


At least you have it working now!






If you only have one physical interface on each host, there's not much
point doing multipath, as you don't stand to gain any performance or
resilience.

I didn't choose if it was multipath or not, someone only gave me access to this 
storage, but I understand what you mean. However, I'll certainly add bonding 
later.


Do you have any idea what setting maintenance mode and reactivating does on a 
host ? Does it restart some services ? I don't really understand what just 
happened actually...
All I know is that it is used for backup, reinstall and update.


Maintenance mode will migrate any running VMs off that host and enable 
you to do some tasks (including the ones you mention) that you can't do 
when it's running VMs. I believe it stops certain services as well, not 
sure which ones. It's a perfectly safe and routine thing to do in RHEV/oVirt.


Cheers

Alex


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Host non operationnal due to an iSCSI problem

2016-07-19 Thread Alex Crow



On 19/07/16 11:13, Alexis HAUSER wrote:

I don't understand. iSCSI is a network storage protocol. What do you
mean by "I access it directly"? When you set up the first host with an
iSCSI storage domain, you would have had to point it to an IP address,
"discover" the LUNs and then attach to them. This sets up the domain.

As I explained, I don't use an iSCSI server; that's what I call accessing it 
"directly".
Yes, my iSCSI storage is working on my first host: it has been discovered 
successfully, some VMs are working on it, etc...
The second host can discover it, so I don't think it's a network issue.

 From the vdsm logs from second host ("the non working one") it looks like it 
can even see the LVM on it, right ?

Thread-32::DEBUG::2016-07-19 08:41:37,935::lvm::290::Storage.Misc.excCmd::(cmd) FAILED:  = 
'  Volume group "091e0526-1ff3-4ca3-863c-b911cf69277b" not found\n  Cannot process volume 
group 091e0526-1ff3-4ca3-863c-b911cf69277b\n';  = 5



It knows there are volume groups from the database. You are 
correct, in that it cannot access the VGs/LVs.



On the second host, to access iSCSI storage you will have to have an
interface (defined in "Networks" in oVirt) that can connect to the same
IP and port the first host used.

Yes, I have a network interface working on the second host, which is ovirtmgmt. 
I can access all other storage correctly from that host without errors. I can 
discover the iSCSI.

As it is multipath iSCSI, does it need to access a different path for each 
host? I didn't set anything about iSCSI bonding; I use only one single 
interface on each host.


I'm still finding this hard to understand. If you are using iSCSI, you 
/are/ using a server (called the "Target" in SCSI speak). Is the iSCSI 
storage actually on the first host? How did you actually do the 
discovery and assign the LUNs? In the storage domain properties you 
should be able to see the IP and port of the Targets, something like 
"iqn.2012-02:foo-target1,192.168.10.10,3260", and you need to ensure the 
second host can reach that IP and port to be able to see the storage.
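
A quick way to check that reachability from the second host (using the 
example target above) is:

ping -c3 192.168.10.10
nc -zv 192.168.10.10 3260
iscsiadm -m discovery -t sendtargets -p 192.168.10.10:3260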


Multipath should not make any difference right now, but in order to use 
it effectively you should probably set up an iSCSI bond. The requirement 
for multipath to work properly is that the two physical interfaces on 
the host (the initiator) and on the target are in different IP subnets 
(and should ideally travel via different switches, but that is not a hard 
requirement).


If you only have one physical interface on each host, there's not much 
point doing multipath, as you don't stand to gain any performance or 
resilience.


Cheers

Alex
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Host non operationnal due to an iSCSI problem

2016-07-19 Thread Alex Crow



On 19/07/16 10:38, Alexis HAUSER wrote:



Sounds like a possible networking problem. Have you assigned IP
addresses to the storage interfaces on this new host?

hum, What do you mean by storage interfaces ? The other host on the same 
network can access it.


If you're using
VLANs, are they set up correctly on your switch ports for the SAN network?

Yes

I don't use a server to share the iSCSI storage to the hosts, (I access it 
directly). Do I need it ? I saw that in the RHEV doc, on first part of the 
iSCSI section...


I don't understand. iSCSI is a network storage protocol. What do you 
mean by "I access it directly"? When you set up the first host with an 
iSCSI storage domain, you would have had to point it to an IP address, 
"discover" the LUNs and then attach to them. This sets up the domain.


On the second host, to access iSCSI storage you will have to have an 
interface (defined in "Networks" in oVirt) that can connect to the same 
IP and port the first host used.


Alex
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Host non operationnal due to an iSCSI problem

2016-07-19 Thread Alex Crow



On 19/07/16 09:52, Alexis HAUSER wrote:

Hi,


I just added a second host but it can't become operational, because it can't 
access the iSCSI storage domain. My first question: is this normal or not? Is 
RHEV really able to manage the fact that an iSCSI LUN can be accessed from 
multiple hosts?

Yes, each VM disk is a Logical Volume on that LUN.

Sounds like a possible networking problem. Have you assigned IP 
addresses to the storage interfaces on this new host? If you're using 
VLANs, are they set up correctly on your switch ports for the SAN network?


Alex


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Slowness in the deploying a vm from template

2016-07-14 Thread Alex Crow
Hi,

FYI the "thin" option is in the "resource allocation" tab when
"advanced" is selected in the New VM dialog.

Cheers

Alex

On 14/07/16 19:46, Karli Sjöberg wrote:
>
>
> On 14 Jul 2016 20:31, Budur Nagaraju wrote:
> >
> > HI
> >
> > When I deploy a VM from a template it takes 10-15 minutes to get
> deployed. Is it a known issue, or are there any settings I need to change?
>
> Yeah, that's 'cause you're _cloning_ the new VM. What you want is to
> thin provision it, goes faster than a speeding bullet:)
>
> /K
>
> > I am using oVirt 3.5.
> >
> > Thanks,
> > Nagaraju
> >
>
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Network problem with bonding and Windows guests

2016-05-25 Thread Alex Crow

On 25/05/16 10:28, Maxence Sartiaux wrote:

Hello,

I have a problem: all my oVirt hosts are linked with a bonding mode 4 
(802.3ad LACP), 2x10Gbps.
Everything is okay with Unix guests, but with Windows guests I can ping, 
yet internet browsing is impossible (sometimes I get part of the 
page, but that's a very rare case).


If I remove the bonding and bridge one interface, Windows works fine.

I've tried with Windows 7 and Windows 10; guest additions are installed.
VirtIO / rtl8139 tested, same problem.

My bonding opts : mode=4 lacp_rate=1 miimon=100
Interface MTU 9000

Bonding mode 2 also tested


On the hosts, try setting LRO off on the members of your bond and see if it 
makes a difference.


eg,

ethtool -K ens3f0 lro off
ethtool -K ens3f1 lro off
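
If that helps and you want it to persist across reboots, recent initscripts 
let you put the same thing in the ifcfg files (check that your version 
supports options starting with "-"):

# /etc/sysconfig/network-scripts/ifcfg-ens3f0 (and likewise for ens3f1)
ETHTOOL_OPTS="-K ens3f0 lro off"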

Alex


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Educational use case question

2016-04-13 Thread Alex Crow
This certainly works. Console can be reached via a browser plugin or
Virt-Viewer (available for Windows). Self-hosted engine is the way to
go, and is production-ready, especially if you want to add more nodes later.

On 14/04/16 03:33, Michael Hall wrote:
> Yes but what about the student sitting on the Windows machine in the
> lab who wants to install and interact with her VM via its GUI ...
> as is possible in Virtual Machine Manager on RHEL/CentOS 7 ...
> except she'd be doing it remotely via an in-browser console ... like
> Digital Ocean do for example.
>

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] VM has been paused due to no Storage space error.

2016-04-13 Thread Alex Crow

Hi,

If you have set up VM disks as Thin Provisioned, the VM has to pause 
when the disk image needs to expand. You won't see this on VMs with 
preallocated storage.


It's not the SAN that's running out of space, it's the VM image needing 
to be expanded incrementally each time.


Cheers

Alex

On 13/04/16 12:04, nico...@devels.es wrote:

Hi Fred,

This is an iSCSI storage. I'm attaching the VDSM logs from the host 
where this machine has been running. Should you need any further info, 
don't hesitate to ask.


Thanks.

Regards.

El 2016-04-13 11:54, Fred Rolland escribió:

Hi,

What kind of storage do you have ? (ISCSI,FC,NFS...)
Can you provide the vdsm logs from the host where this VM runs ?

Thanks,

Freddy

On Wed, Apr 13, 2016 at 1:02 PM,  wrote:


Hi,

We're running oVirt 3.6.4.1-1. Lately we're seeing a bunch of
events like these:

2016-04-13 10:52:30,735 INFO
[org.ovirt.engine.core.vdsbroker.VmAnalyzer]
(DefaultQuartzScheduler_Worker-86) [60dea18f] VM
'f9cd282e-110a-4896-98d3-6d320662744d'(vm.domain.com [1]) moved from
'Up' --> 'Paused'
2016-04-13 10:52:30,815 INFO


[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]

(DefaultQuartzScheduler_Worker-86) [60dea18f] Correlation ID: null,
Call Stack: null, Custom Event ID: -1, Message: VM vm.domain.com [1]
has been paused.
2016-04-13 10:52:30,898 ERROR


[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]

(DefaultQuartzScheduler_Worker-86) [60dea18f] Correlation ID: null,
Call Stack: null, Custom Event ID: -1, Message: VM vm.domain.com [1]
has been paused due to no Storage space error.
2016-04-13 10:52:52,320 WARN
[org.ovirt.engine.core.vdsbroker.irsbroker.IrsProxyData]
(org.ovirt.thread.pool-8-thread-38) [] domain
'5de4a000-a9c4-489c-8eee-10368647c413:iscsi01' in problem. vds:
'host6.domain.com [2]'
2016-04-13 10:52:55,183 INFO
[org.ovirt.engine.core.vdsbroker.VmAnalyzer]
(DefaultQuartzScheduler_Worker-70) [3da0f3d4] VM
'f9cd282e-110a-4896-98d3-6d320662744d'(vm.domain.com [1]) moved from
'Paused' --> 'Up'
2016-04-13 10:52:55,318 INFO


[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]

(DefaultQuartzScheduler_Worker-70) [3da0f3d4] Correlation ID: null,
Call Stack: null, Custom Event ID: -1, Message: VM vm.domain.com [1]
has recovered from paused back to up.

The storage domain is far from being full, though (400+ G available
right now). Could this be related to this other issue [1]? If not,
how could I debug what's going on?

Thanks.

 [1]: https://www.mail-archive.com/users@ovirt.org/msg32079.html
[3]
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users [4]




Links:
--
[1] http://vm.domain.com
[2] http://host6.domain.com
[3] https://www.mail-archive.com/users@ovirt.org/msg32079.html
[4] http://lists.ovirt.org/mailman/listinfo/users



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Network tagging

2015-12-18 Thread Alex Crow
Just curious, why would you want to bond two different VLAN interfaces? 
It does not make much sense.


You can add a network with a VLAN and then use it on a bonded connection 
or on a single interface, if that's what you mean.


I just wish there was a way to make the VLAN ID optional, so you could 
use it tagged on a host with only a couple of ethernet interfaces, and 
then untagged on an interface on a machine with lots of eth interfaces (eg 
to spread the VLANs over multiple interfaces: imagine a host with 2x10G, 
where one would be storage and the other would carry all VLANs tagged. 
Another host might have 8 or more interfaces, and on that one each VLAN 
could be untagged per interface.)


Alex

On 18/12/15 13:17, Kevin COUSIN wrote:

Hi list,

Is it possible to add the same VLAN as tagged for different purposes:

* VLAN XX as VM network and tagged to attach VM
* VLAN XX tagged for bonding with another VLAN on an interface


Regards

Kevin C
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Wishlist - Mix gluster and local storage in same data center

2015-11-08 Thread Alex Crow

Messed up the quote; removed it for clarity:
One thing that would render this whole issue moot is being able to use 
local fast storage on the hypervisor hosts, ie SSD or 3D-Xpoint drives 
in LVM-Cache to accelerate IOPS on shared storage.


The underlying stuff is already there, and it works. I've been using 
LVM cache for a while in CentOS 7.


Getting this to work in oVirt would be a killer feature.

Alex
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Wishlist - Mix gluster and local storage in same data center

2015-11-08 Thread Alex Crow
There are many things that tie a VM to a host, like USB device 
passthrough, but that's not a reason to remove all such support from 
oVirt, is it? In my case, I'd like to mix iSCSI and local storage, 
because I have a couple of systems that need higher disk I/O that I'd 
like to put on my shared storage. The two systems are redundant to each 
other, so that is taken care of at a different layer. The two systems 
don't however consume all the resources of the host machines (lots of 
CPU and RAM available). I'd like to make them nodes in my oVirt cluster, 
so those resources can be used for other VMs (that are on shared storage 
for that level of HA), but I can't do that (at least as far as I know, 
with oVirt 3.5). I thought that had been mentioned as a feature for 3.6, 
but I don't see it anywhere in the features or release notes, so I 
assume that functionality is still not available.


One thing that would render this whole issue moot is being able to use 
local fast storage on the hypervisor hosts, ie SSD or 3D-Xpoint drives 
in LVM-Cache to accelerate IOPS on shared storage.


The underlying stuff is already there, and it works. I've been using LVM 
cache for a while in CentOS 7.
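
A minimal sketch of what that looks like with plain LVM on CentOS 7, 
assuming a hypothetical VG vg0 with a slow LV vg0/data and a fast SSD 
/dev/nvme0n1 already added to the VG:

# carve a cache data LV and a small metadata LV out of the SSD
lvcreate -n data_cache -L 100G vg0 /dev/nvme0n1
lvcreate -n data_cache_meta -L 1G vg0 /dev/nvme0n1
# combine them into a cache pool and attach it to the slow LV
lvconvert --type cache-pool --poolmetadata vg0/data_cache_meta vg0/data_cache
lvconvert --type cache --cachepool vg0/data_cache vg0/data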


Getting this to work in oVirt would be a killer feature.

Alex
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] oVirt 3.5 -> 3.6 engine upgrade failure

2015-11-05 Thread Alex Crow

Encountered "  "numberOsCpus "" at line 7, column 33.

Was expecting one of: "ram" ... "disksize" ... "numberOfCpus"

Typo bug maybe? See "...Os..." vs "...Of..."
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Problem migration VM

2015-09-23 Thread Alex Crow

On 23/09/15 13:54, Luca Bertoncello wrote:

Hi Alex


Did the host you are running the script on shut down before the
migration completed?

Apparently yes...


Thought so. The migration then cannot continue, obviously.




If you put the host in maintenance from the GUI, does it successfully
migrate off all VMs?

Yes, this happens!

Now, I think, the problem is that the system first unmounts /run and then calls 
my script, so that libvirt no longer has any possibility to successfully migrate the 
VMs...

Can someone suggest a way to call my script as the FIRST script on 
shutdown/reboot and to block the shutdown/reboot until my script completes?
This would solve the problem...


Add your systemd script as a "requires" entry in the systemd script 
responsible for shutting down the system?
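
I've not tested this, but the sort of thing I mean is a oneshot unit whose 
ExecStop blocks shutdown until your script finishes (unit and script names 
are made up):

# /etc/systemd/system/ovirt-maintenance.service
[Unit]
Description=Put this host into oVirt maintenance before shutdown
After=network-online.target

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/bin/true
ExecStop=/usr/local/sbin/ovirt-maintenance.sh
TimeoutStopSec=30min

[Install]
WantedBy=multi-user.target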




Of course, I can try with a wrapper for /sbin/shutdown and /sbin/reboot, but 
this is not a nice solution...


Why don't you manage this from another machine, not from the hosts? Just 
have a script call the API to initiate maintenance, wait for the 
migration to complete, then call the API to shut down the host?
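
As a rough sketch (engine URL, credentials and host ID are made up, and the 
exact API path depends on your version), putting a host into maintenance is 
a single call, after which you poll the host until its status reports 
"maintenance" and then power it off:

curl -k -u 'admin@internal:PASSWORD' -H 'Content-Type: application/xml' \
     -X POST -d '<action/>' \
     'https://engine.example.com/ovirt-engine/api/hosts/HOST_ID/deactivate'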


Or is it really too hard to do this from the GUI? I don't understand why 
you have such a hard requirement to be able to do this from the hosts - 
the whole point of Ovirt is that you don't have to manage your hosts on 
an individual basis!


Alex



Thanks

Kind regards

Luca Bertoncello

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Problem migration VM

2015-09-23 Thread Alex Crow


On 23/09/15 10:30, Luca Bertoncello wrote:

Hi list!

Sorry for the previous e-mail... problem with my Outlook... :(
Here it is again...

After a "war-week" I finally got a systemd script to put the host into
"maintenance" when a shutdown is started.
Now the problem is that the automatic migration of the VMs does NOT
work...

I see in the web console that the host goes to "Preparing for maintenance" and the
VMs start migrating; then the host is in "maintenance" and a couple of
seconds later the VM is killed on the other host...
Did the host you are running the script on shut down before the 
migration completed?


If you put the host in maintenance from the GUI, does it successfully 
migrate off all VMs?


Alex


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Problem migration VM

2015-09-23 Thread Alex Crow



On 23/09/15 14:11, Luca Bertoncello wrote:

Hi Alex


Thought so. The migration then cannot continue, obviously.

Obviously... :(


Can someone suggest me a way to call my script as FIRST script on

shutdown/reboot and to block the shutdown/reboot until my script
complete?

This will solve the problem...

Add your systemd script as a "requires" entry in the systemd script
responsible for shutting down the system?

What do you mean? Could you please explain, maybe with an example?


I've not done anything with systemd myself but it should be entirely 
possible. Ask systemd people.





Of course, I can try with a wrapper for /sbin/shutdown and /sbin/reboot,

but this is not a nice solution...

Why don't you manage this from another machine, not from the hosts? Just
have a script call the API to initiate maintenance, wait for the migration to
complete, then call the API to shut down the host?


OK, change the shutdown script on the host that NUT calls (ie *not* the 
systemd stuff) to do what I said above. That would work. You will have 
to be careful with your timing to ensure that migration can finish 
before your UPS runs out of juice though. NUT normally only starts the 
shutdown process on LOWBATT from the UPS.





Or is it really too hard to do this from the GUI? I don't understand why you
have such a hard requirement to be able to do this from the hosts - the
whole point of Ovirt is that you don't have to manage your hosts on an
individual basis!

Well, the problem is just this: if some other admin has to perform work 
on the host that requires a reboot, and he MUST log into a GUI to put the host 
into maintenance before the shutdown, we have a higher possibility of failure if he 
forgets to do that.
And, of course, an automatic shutdown cannot log into the GUI...


No, but it can have access to the API. You'd have to mess around with 
the systemd scripts to do this bit. As above, I think you'd be better 
off asking the systemd people than this list - as this is now getting 
quite offtopic for here.


Alex


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Automatically migrate VM between hosts in the same cluster

2015-09-18 Thread Alex Crow



On 18/09/15 07:30, Luca Bertoncello wrote:


Hi all,

thank you very much for your answers.

So:

1) Of course, we have UPSes. More than one, in our server room, and of 
course they will send a notification to the hosts if they are on battery.




Good.

2) My question was: “what can I do, so that in case of a kernel panic or 
similar, the VMs will be migrated (live or not) to another host?”




You would make the VMs HA and acquire a fencing solution.

3)I’d like to have a shutdown-script on the host that put the host in 
Maintenance and wait until it’s done, so that I can just shutdown or 
reboot it without any other action. Is it possible? It would help to 
manage the power failure, too, assuming that other hosts have better 
UPS (it can be possible…)




You could probably use the REST API on the oVirt Engine for that. But it 
might be better to have a highly available machine (VM or not) running 
something like Nagios or Icinga which would perform the monitoring of 
your hosts and connect to the REST API to perform maintenance and 
shutdown. You might also consider a UPS service like NUT (unless you're 
already doing it).


Cheers

Alex


Thanks a lot

Kind regards

Luca Bertoncello


From: users-boun...@ovirt.org [mailto:users-boun...@ovirt.org] On Behalf Of matthew lagoe
Sent: Thursday, September 17, 2015 9:56 PM
To: 'Alex Crow'; 'Yaniv Kaul'
Cc: users@ovirt.org
Subject: Re: [ovirt-users] Automatically migrate VM between hosts in the same cluster


There are PDUs on which you can monitor power draw per port, and that 
would kind of tell you if a PSU failed, as the load would be 0.


From: users-boun...@ovirt.org [mailto:users-boun...@ovirt.org] On Behalf Of Alex Crow
Sent: Thursday, September 17, 2015 12:31 PM
To: Yaniv Kaul <yk...@redhat.com>
Cc: users@ovirt.org
Subject: Re: [ovirt-users] Automatically migrate VM between hosts in the same cluster


I don't really think this is practical:

- If the PSU failed, your UPS could alert you. If you have one...


If you have only one PSU in a host, a UPS is not going to stop you 
losing all the VMs on that host. OK, if you had N+1 PSUs, you may be 
able to monitor for this (IPMI/LOM/DRAC etc) and use the API to put a 
host into maintenance. Also a lot of people rely on low-cost white-box 
servers and decide that it's OK if a single PSU in a host dies, as, 
well, we have HA to start on other hosts. If they have N+1 PSUs in the 
hosts do they really have to migrate everything off? Swings and 
roundabouts really.


I'm also not sure I've seen any practical DC setups where a UPS can 
monitor the load for every single attached physical machine and figure 
out that one of the redundant PSUs in it has failed - I'd love to know 
if there are as that would be really cool.


- If the machine is going down in an ordinary flow, surely it can
be done.


Isn't that what "Maintenance mode" is for?



Even if it was a network failure and the host was still up,
how would you live migrate a VM from a host you can't even
talk to?

It could be suspended to disk (local) - if the disk is available.

Then the decision if it is to be resumed from local disk or not
(as it might be HA'ed and is running elsewhere) need to be taken
later, of course.


Yes, but that's not even remotely possible with Ovirt right now. I was 
trying to be practical as the OP has only just started using Ovirt and 
I think it might be a bit much to ask him to start coding up what he'd 
like.




The only way you could do it was if you somehow magically knew
far enough in advance that the host was about to fail (!) and
that gave enough time to migrate the machines off. But how
would you ever know that "machine quux.bar.net
<http://quux.bar.net> is going to fail in 7 minutes"?

I completely agree there are situations in which you can't foresee
the failure.

But in many, you can. In those cases, it makes sense for the host
to self-initiate 'move to maintenance' mode. The policy of what to
do when 'self-moving-to-maintenance-mode' could be pre-fetched
from the engine.

Y.


Hmm, I would 

Re: [ovirt-users] Automatically migrate VM between hosts in the same cluster

2015-09-18 Thread Alex Crow



On 18/09/15 07:59, Luca Bertoncello wrote:

Hi Alex


2) My question was: "what can I do, so that in case of Kernel Panic or similar, the 
VM will be migrated (live or not) to another host?"

You would make the VMs HA and acquire a fencing solution.

What do you mean now? Have two VMs and build a cluster? This is not what we 
want...
If possible, I'd like to have several hosts AS A CLUSTER, with migration of the 
VMs between the nodes...
I think oVirt already does that, since I have to create clusters. For me a cluster is not 
just "more nodes with the same CPU", but also something with load balancing or 
high availability...


No, use the built-in HA in oVirt, which requires a fencing solution. 
This means that if a host goes down, the VMs on it will autostart on 
another host in the cluster. Yes, it's a cluster of *hosts*; in fact, in 
the oVirt interface you can see "clusters" in the tree view!


We use the built-in oVirt HA for >200 VMs over 6 hosts and it works just 
fine. We lost a host fairly recently and only a couple of people (out of 
over 300) who use the services of a single VM noticed.


oVirt also supports balancing the VM load across hosts; that is part of 
the cluster scheduling policy. You can also set affinity and anti-affinity for VMs. 
With the power-saving policy you can also have an underutilised host migrate 
its VMs away and power itself down to save on your electricity bill; when the 
load on the remaining hosts reaches a limit, the host is powered back on and 
VMs start migrating back to it (see the sketch below).
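
For illustration, a rough way to see which scheduling policy a cluster is running 
is a plain GET against the REST API (engine FQDN and credentials here are 
placeholders, and the /api base path is the 3.5-era one; the power-saving 
behaviour described above corresponds to the power_saving policy):

# list clusters and look at the scheduling policy information for each
curl -k -u admin@internal:secret https://engine.example.com/api/clusters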



3) I'd like to have a shutdown script on the host that puts the host into 
Maintenance and waits until it's done, so that I can just shut down or reboot it 
without any other action. Is that possible? It would help to manage a power 
failure, too, assuming that other hosts have a better UPS (it can be 
possible.)

You could probably use the REST API on the oVirt Engine for that. But it might 
be better to have a highly available machine (VM or not) running something like 
Nagios or Icinga which would perform the monitoring of your hosts and connect 
to the REST API to perform maintenance and shutdown. You 
might also consider a UPS service like NUT (unless you're already doing it).

Well, I already use NUT and we have Icinga monitoring the hosts.
But I can't understand what you mean and (more importantly!) how I can do it...

I checked the REST API and the CLI. I can write a little script to put a host 
into Maintenance or activate it again, that's not the problem.
The problem is having something start this script automatically...

Any suggestions? Icinga will not be the solution, since it checks the hosts every 
5 minutes or so, but we need to have this script started within seconds...

Thanks

Kind regards

Luca Bertoncello


The rest is out of scope of Ovirt. Icinga was just a suggestion. You 
don't have to use it - but you can change the check interval for a check 
to whatever you want. I was just stating that you should be wary of 
running your checks/scripts from a host if what you are trying to 
trigger on is that host going down or having issues.
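
For what it's worth, the maintenance call itself is little more than one REST 
request; a minimal sketch (engine FQDN, credentials and host ID are placeholders, 
and the /api base path is the 3.5-era one):

# look up the host IDs first
curl -k -u admin@internal:secret https://engine.example.com/api/hosts

# then ask the engine to move one host to maintenance
curl -k -u admin@internal:secret -X POST \
     -H "Content-Type: application/xml" -d "<action/>" \
     https://engine.example.com/api/hosts/HOST-UUID/deactivate

Whatever triggers it (NUT's upssched, an Icinga event handler, a UPS shutdown 
hook) is then just plumbing around that request.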


And if 5 minutes is too long maybe you need a bigger UPS? We have 1 hour 
battery runtime on ours at the moment (think it's a 160kVA central UPS 
offhand).


Alex




--
This message is intended only for the addressee and may contain
confidential information. Unless you are that person, you may not
disclose its contents or use it in any way and are requested to delete
the message along with any attachments and notify us immediately.
"Transact" is operated by Integrated Financial Arrangements plc. 29
Clement's Lane, London EC4N 7AE. Tel: (020) 7608 4900 Fax: (020) 7608
5300. (Registered office: as above; Registered in England and Wales
under number: 3727592). Authorised and regulated by the Financial
Conduct Authority (entered on the Financial Services Register; no. 190856).


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Automatically migrate VM between hosts in the same cluster

2015-09-18 Thread Alex Crow



Well, right now my problem is to understand how I can "simulate" this power 
management. Maybe in the future, if we decide that oVirt is the right solution for us, 
we will buy some power management system (APC or similar).
But for now, for the experiments, I can't ask my boss to pay >500€ for a device 
that we may never use...


Given the outlay you will pay for your server kit regardless of whether you use 
Ovirt or not, EUR 500 is nothing. As I said, what about eBay? I've just 
looked and there's a 24-outlet one for GBP 100, and an 8-port one for 
$44, both Buy It Now! And that's just a UK search.




By the way: I find it really funny that I can't define a cluster with 
automatic migration of the VMs without power management. OK, I know what 
can happen if the host is not really dead, but at least allowing that, with 
warnings and so on, would be nice...


Live migration works fine without power management. If a host gets too 
busy, VMs will migrate away from it.


Alex



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] P2V shrink disk

2015-09-17 Thread Alex Crow



On 17/09/15 13:32, wodel youchi wrote:

Hi,

I am not sure if it's the right place to post this question.

I tried to do a P2V operation of a Win2k3 server to ovirt 3.5


the physical server is configured as RAID5 of 3x146 GB, with about 290 GB 
of usable space, but the server only uses about 20 GB.


I am wondering how to shrink the disk of the migrated physical 
machine, do I have to do it before using P2V tool?


The same question for Linux servers.

Thanks.



I just use a parted Live CD. You can do it before or after, whichever is 
more convenient. I normally do it after a P2V or V2V as it's so easy to 
use an ISO image in Ovirt.


Cheers

Alex



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] P2V shrink disk

2015-09-17 Thread Alex Crow

On 17/09/15 13:40, Alex Crow wrote:



On 17/09/15 13:32, wodel youchi wrote:

Hi,

I am not sure if it's the right place to post this question.

I tried to do a P2V operation of a Win2k3 server to ovirt 3.5


the physical server is configured as RAID5 of 3x146 GB, with about 290 GB 
of usable space, but the server only uses about 20 GB.


I am wondering how to shrink the disk of the migrated physical 
machine, do I have to do it before using P2V tool?


The same question for Linux servers.

Thanks.



I just use a parted Live CD. You can do it before or after, whichever 
is more convenient. I normally do it after a P2V or V2V as it's so 
easy to use an ISO image in Ovirt.


Cheers

Alex



Actually that wouldn't work as such if you're P2V'ing straight to Ovirt. 
I'd suggest P2V'ing into Libvirt/Qemu-KVM, then running Parted Live on 
that VM. Shut down the VM and use qemu-img to shrink the image file, 
then do a V2V into Ovirt.
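
Roughly, the qemu-img step looks like this (filenames and sizes are made up; 
shrink the partitions inside the guest with Parted first, or you will lose data):

# raw images can simply be resized down to the new, smaller virtual size
qemu-img resize win2k3.img 30G

# re-packing as qcow2 also drops the unused blocks; note that recent qemu-img
# versions refuse to reduce a qcow2's virtual size without an explicit --shrink
qemu-img convert -O qcow2 win2k3.img win2k3-small.qcow2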


Cheers

Alex




___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Automatically migrate VM between hosts in the same cluster

2015-09-17 Thread Alex Crow

On 17/09/15 14:25, Luca Bertoncello wrote:

Hi list!

I'm new to oVirt. Right now I have configured a cluster with two hosts and a VM.
I can migrate the VM between the two hosts without any problem, but what I need 
is that the VM automatically migrates if a host is down.

The migration only occurs if I set a host to "Maintenance", but this is not 
(only!) what I need...

Can someone help me configure oVirt (3.5) to automatically check the hosts 
and migrate the VMs on host failure?

Thanks a lot!

Kind regards

Luca Bertoncello



You can't live migrate on a host failure - as the host has gone down and 
all the running VMs on it have as well! It would require clairvoyance to 
enable live migration in that situation.


However, you can enable HA for VMs. If the host they are running on fails, 
they will be restarted automatically on another host. NB: this 
*requires* power management so the failed host can be fenced.
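
As an aside, it's worth checking that the fence device actually answers before 
relying on it; something along these lines (iLO/IPMI address and credentials are 
placeholders):

ipmitool -I lanplus -H 10.0.0.50 -U fenceuser -P secret chassis power status

The same details then go into the host's Power Management settings in the engine.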


Alex


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Automatically migrate VM between hosts in the same cluster

2015-09-17 Thread Alex Crow

I don't really think this is practical:


- If the PSU failed, your UPS could alert you. If you have one...


If you have only one PSU in a host, a UPS is not going to stop you 
losing all the VMs on that host. OK, if you had N+1 PSUs, you may be 
able to monitor for this (IPMI/LOM/DRAC etc.) and use the API to put a 
host into maintenance. Also a lot of people rely on low-cost white-box 
servers and decide that it's OK if a single PSU in a host dies, as, 
well, we have HA to start on other hosts. If they have N+1 PSUs in the 
hosts do they really have to migrate everything off? Swings and 
roundabouts really.


I'm also not sure I've seen any practical DC setups where a UPS can 
monitor the load for every single attached physical machine and figure 
out that one of the redundant PSUs in it has failed - I'd love to know 
if there are as that would be really cool.


- If the machine is going down in an ordinary flow, surely it can be 
done.


Isn't that what "Maintenance mode" is for?



Even if it was a network failure and the host was still up, how
would you live migrate a VM from a host you can't even talk to?


It could be suspended to disk (local) - if the disk is available.
Then the decision if it is to be resumed from local disk or not (as it 
might be HA'ed and is running elsewhere) need to be taken later, of 
course.


Yes, but that's not even remotely possible with Ovirt right now. I was 
trying to be practical as the OP has only just started using Ovirt and I 
think it might be a bit much to ask him to start coding up what he'd like.





The only way you could do it was if you somehow magically knew far
enough in advance that the host was about to fail (!) and that
gave enough time to migrate the machines off. But how would you
ever know that "machine quux.bar.net  is
going to fail in 7 minutes"?


I completely agree there are situations in which you can't foresee the 
failure.
But in many, you can. In those cases, it makes sense for the host to 
self-initiate 'move to maintenance' mode. The policy of what to do 
when 'self-moving-to-maintenance-mode' could be pre-fetched from the 
engine.

Y.


Hmm, I would love that to be true. But I've seen so many so-called 
"corner cases" that I now think the failure area in a datacenter is a 
fractal with infinite corners. Yes, you could monitor SMART on local 
drives, pick up uncorrected ECC errors, use "sensors" to check for 
sagging voltages or high temps, but I don't think you can ever hope to 
catch everything, and you could end up triggering a migration "storm". 
I've had more than enough of "Enterprise Spec" switches suddenly going 
nuts and spamming corrupt MACs all over the LAN to know you can't ever 
account for everything.


I think it's better to adopt the model of redundancy in software and 
services, so no-one even notices if a VM host goes away, there's always 
something else to take up the slack. Just like the origins of the 
Internet - the network should be dumb and the applications should cope 
with it! Any infrastructure that can't cope with the loss of a few VMs 
for a few minutes probably needs a refresh.


Cheers

Alex





___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] 3.6.0 nighty hosted-engine deploy quits silently on second host

2015-09-12 Thread Alex Crow

Hi.

I've managed to get a 3.6.0-pre self-hosted engine running on one CentOS 
7.1 host (glusterfs-backed), but on the second host the deploy process 
just quits with no error before I even get to configure the storage:


[root@olympia ~]# hosted-engine --deploy
[ INFO  ] Stage: Initializing
[ INFO  ] Generating a temporary VNC password.
[ INFO  ] Stage: Environment setup
  Continuing will configure this host for serving as hypervisor 
and create a VM where you have to install oVirt Engine afterwards.

  Are you sure you want to continue? (Yes, No)[Yes]: yes
  Configuration files: []
  Log file: 
/var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20150912174714-4jr1ru.log
  Version: otopi-1.4.0_master 
(otopi-1.4.0-0.0.master.20150910103318.gitdd73099.el7)

[ INFO  ] Hardware supports virtualization
[ INFO  ] Stage: Environment packages setup
[ INFO  ] Stage: Programs detection
[ INFO  ] Stage: Environment setup
[root@olympia ~]#

My setup log ends like this:

2015-09-12 17:15:12 DEBUG otopi.context context.dumpEnvironment:510 ENV 
COMMAND/reboot=str:'/sbin/reboot'
2015-09-12 17:15:12 DEBUG otopi.context context.dumpEnvironment:510 ENV 
COMMAND/remote-viewer=str:'/bin/remote-viewer'
2015-09-12 17:15:12 DEBUG otopi.context context.dumpEnvironment:510 ENV 
COMMAND/service=str:'/sbin/service'
2015-09-12 17:15:12 DEBUG otopi.context context.dumpEnvironment:510 ENV 
COMMAND/sshd=str:'/sbin/sshd'
2015-09-12 17:15:12 DEBUG otopi.context context.dumpEnvironment:510 ENV 
COMMAND/sudo=str:'/bin/sudo'
2015-09-12 17:15:12 DEBUG otopi.context context.dumpEnvironment:510 ENV 
COMMAND/systemctl=str:'/bin/systemctl'
2015-09-12 17:15:12 DEBUG otopi.context context.dumpEnvironment:510 ENV 
COMMAND/truncate=str:'/bin/truncate'
2015-09-12 17:15:12 DEBUG otopi.context context.dumpEnvironment:510 ENV 
COMMAND/umount=str:'/bin/umount'
2015-09-12 17:15:12 DEBUG otopi.context context.dumpEnvironment:510 ENV 
COMMAND/vdsm-tool=str:'/bin/vdsm-tool'
2015-09-12 17:15:12 DEBUG otopi.context context.dumpEnvironment:514 
ENVIRONMENT DUMP - END
2015-09-12 17:15:12 DEBUG otopi.context context._executeMethod:142 Stage 
programs METHOD otopi.plugins.otopi.services.systemd.Plugin._programs
2015-09-12 17:15:12 DEBUG otopi.plugins.otopi.services.systemd 
plugin.executeRaw:828 execute: ('/bin/systemctl', 'show-environment'), 
executable='None', cwd='None', env=None
2015-09-12 17:15:12 DEBUG otopi.plugins.otopi.services.systemd 
plugin.executeRaw:878 execute-result: ('/bin/systemctl', 'show-environment'), rc=0
2015-09-12 17:15:12 DEBUG otopi.plugins.otopi.services.systemd 
plugin.execute:936 execute-output: ('/bin/systemctl', 
'show-environment') stdout:

PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin
LANG=en_GB.UTF-8

2015-09-12 17:15:12 DEBUG otopi.plugins.otopi.services.systemd 
plugin.execute:941 execute-output: ('/bin/systemctl', 
'show-environment') stderr:



2015-09-12 17:15:12 DEBUG otopi.plugins.otopi.services.systemd 
systemd._programs:64 registering systemd provider
2015-09-12 17:15:12 DEBUG otopi.context context._executeMethod:142 Stage 
programs METHOD otopi.plugins.otopi.services.rhel.Plugin._programs
2015-09-12 17:15:12 DEBUG otopi.plugins.otopi.services.rhel 
plugin.executeRaw:828 execute: ('/bin/systemctl', 'show-environment'), 
executable='None', cwd='None', env=None
2015-09-12 17:15:12 DEBUG otopi.plugins.otopi.services.rhel 
plugin.executeRaw:878 execute-result: ('/bin/systemctl', 
'show-environment'), rc=0
2015-09-12 17:15:12 DEBUG otopi.plugins.otopi.services.rhel 
plugin.execute:936 execute-output: ('/bin/systemctl', 
'show-environment') stdout:

PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin
LANG=en_GB.UTF-8

2015-09-12 17:15:12 DEBUG otopi.plugins.otopi.services.rhel 
plugin.execute:941 execute-output: ('/bin/systemctl', 
'show-environment') stderr:



2015-09-12 17:15:12 DEBUG otopi.context context._executeMethod:142 Stage 
programs METHOD otopi.plugins.otopi.services.openrc.Plugin._programs
2015-09-12 17:15:12 DEBUG otopi.context context._executeMethod:142 Stage 
programs METHOD 
otopi.plugins.ovirt_hosted_engine_setup.ha.ha_services.Plugin._programs
2015-09-12 17:15:12 DEBUG otopi.plugins.otopi.services.systemd 
systemd.status:105 check service ovirt-ha-agent status
2015-09-12 17:15:12 DEBUG otopi.plugins.otopi.services.systemd 
plugin.executeRaw:828 execute: ('/bin/systemctl', 'status', 
'ovirt-ha-agent.service'), executable='None', cwd='None', env=None
2015-09-12 17:15:12 DEBUG otopi.plugins.otopi.services.systemd 
plugin.executeRaw:878 execute-result: ('/bin/systemctl', 'status', 
'ovirt-ha-agent.service'), rc=3
2015-09-12 17:15:12 DEBUG otopi.plugins.otopi.services.systemd 
plugin.execute:936 execute-output: ('/bin/systemctl', 'status', 
'ovirt-ha-agent.service') stdout:
ovirt-ha-agent.service - oVirt Hosted Engine High Availability 
Monitoring Agent
   Loaded: loaded (/usr/lib/systemd/system/ovirt-ha-agent.service; 
disabled)

   Active: 

Re: [ovirt-users] Ovirt and Gluster

2015-07-28 Thread Alex Crow

Hi,

That does not provide any answers to the OP's questions. And it also 
still fails to *explicitly* mention that gfapi acceleration is not 
available unless you use oVirt 3.6.0 (which AFAIK is still in beta). I 
raised a documentation bug about that many months ago and I still feel 
it has not been adequately addressed (i.e. all the Feature Complete 
entries on the page you link suggest it's ready in stable releases).


He asked about a production scenario. For all that are interested, I've 
had very poor results including VM image corruption on my home setup 
using plain old KVM on top of GlusterFS. If I can't even get a home 
infrastructure working to my satisfaction, what hope is there for a 
corporate production environment? I appreciate that this is not really 
the oVirt team's fault, but a lot of us are really frustrated in trying 
to find a reliable setup for oVirt/GlusterFS and keep banging our heads 
against the same wall.


For now for production loads on oVirt/RHEV, we'll be firmly sticking to 
iSCSI for our VM storage - I'd rather even roll my own failover using 
DRBD/corosync/pacemaker for the lowest capex. We'd love it if we could 
use Ceph RBD but for some reason RedHat don't seem to want to drive 
that, rather relying on OpenStack Cinder as an intermediary, which seems 
silly for a pure Virtualisation setup (ie not a multi-tenanted private 
cloud).


Perhaps, being from RedHat, you could suggest the closest community 
versions of oVirt and Gluster to your commercial equivalents of RHEV/RHSS?


Thanks

Alex

On 28/07/15 08:26, Raz Tamir wrote:
I think you can start from here: 
http://www.ovirt.org/Features/GlusterFS_Storage_Domain

If you have more questions, please ask




Thanks,
Raz Tamir
Red Hat Israel


*From: *John Gardeniers jgardeni...@objectmastery.com
*To: *users users@ovirt.org
*Sent: *Tuesday, July 28, 2015 2:15:12 AM
*Subject: *[ovirt-users] Ovirt and Gluster

Hi All,

What information is available regarding the compatibility of Ovirt and
Gluster? Is there a combination known to be stable and if so, what are
the relevant versions? I am asking with respect to a production system,
not an experimental lab environment.

regards,
John

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] GlusterFS Hyperconvergence

2015-07-06 Thread Alex Crow



On 06/07/15 16:01, Tiemen Ruiten wrote:
Also, while I can imagine why fencing might be a problem, what would 
be the issue with HA?



Hi,

Fencing is required for HA. If a box hosting HA VMs seems to have gone 
away, it *has* to be guaranteed those VMs are not running before they 
are restarted elsewhere. Otherwise there could be more than 1 VM 
accessing the same storage, which will corrupt the VM's disk and leave 
you in a far worse situation.


Alex


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Kernel panic - not syncing: An NMI occurred on HP Dl360g6

2015-05-27 Thread Alex Crow

Hi,

If you've had a crash caused by an NMI it can often be a hardware issue.

Maybe worth running some H/W diagnostics.
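
On a ProLiant like this, the logs the panic message points at can usually be 
read from the OS, assuming the HP health tools are installed; for example:

hpasmcli -s "show iml"     # Integrated Management Log
ipmitool sel list          # generic IPMI system event log as a fallback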

Cheers

Alex

On 27/05/15 15:40, Daniel Helgenberger wrote:

Hello,

can somebody help me get to the bottom of this panic? vdsm.log was
empty (it stopped 10 min prior to the panic) and has already been rotated away
(forgot to save it...)

Might this be [1]? However, I do not have an active subscription, so I
cannot access the help doc.

Thanks!

[1] https://access.redhat.com/solutions/442193


[64457.990217] kvm: zapping shadow pages for mmio generation wraparound
[64469.916912] kvm [21434]: vcpu0 unhandled rdmsr: 0x1ad
[71035.222120] ixgbe :04:00.0 ens1f0: NIC Link is Down
[71035.223699] storageA: port 1(ens1f0.2) entered disabled state
[71035.223932] storageB: port 1(ens1f0.3) entered disabled state
[71035.224173] server: port 1(ens1f0.10) entered disabled state
[71035.224447] workstations: port 1(ens1f0.11) entered disabled state
[71035.572619] bnx2 :02:00.0 enp2s0f0: NIC Copper Link is Down
[71036.225789] ovirtmgmt: port 1(enp2s0f0) entered disabled state
[71091.146266] hpwdt: Unexpected close, not stopping watchdog!
[71138.579788] ovirtmgmt: port 2(vnet0) entered disabled state
[71138.580067] device vnet0 left promiscuous mode
[71138.580079] ovirtmgmt: port 2(vnet0) entered disabled state
[71146.413292] Kernel panic - not syncing: An NMI occurred. Depending on
your system the reason for the NMI is logged in any one of the following
resources:
1. Integrated Management Log (IML)
2. OA Syslog
3. OA Forward Progress Log
4. iLO Event Log
[71146.739719] CPU: 0 PID: 0 Comm: swapper/0 Tainted: G  I
--   3.10.0-229.4.2.el7.x86_64 #1
[71146.858234] Hardware name: HP ProLiant DL360 G6, BIOS P64 07/02/2013
[71146.934129]  a026d2d8 f7d5e6c540f56fc7 8807ffc05de0
81604eaa
[71147.022535]  8807ffc05e60 815fe71e 0008
8807ffc05e70
[71147.111065]  8807ffc05e10 f7d5e6c540f56fc7 8101bad9
c9000a806072
[71147.199595] Call Trace:
[71147.228779]  NMI  [81604eaa] dump_stack+0x19/0x1b
[71147.297491]  [815fe71e] panic+0xd8/0x1e7
[71147.354746]  [8101bad9] ? sched_clock+0x9/0x10
[71147.418233]  [a026c8ed] hpwdt_pretimeout+0xdd/0xe0 [hpwdt]
[71147.494192]  [8160d6d9] nmi_handle.isra.0+0x69/0xb0
[71147.562873]  [8160d8dd] do_nmi+0x1bd/0x340
[71147.622201]  [8160cb31] end_repeat_nmi+0x1e/0x2e
[71147.687768]  [8133d4d4] ? intel_idle+0xe4/0x170
[71147.752291]  [8133d4d4] ? intel_idle+0xe4/0x170
[71147.816818]  [8133d4d4] ? intel_idle+0xe4/0x170
[71147.881343]  EOE  [814aaa20] cpuidle_enter_state+0x40/0xc0
[71147.961483]  [814aab65] cpuidle_idle_call+0xc5/0x200
[71148.031208]  [8101d21e] arch_cpu_idle+0xe/0x30
[71148.094694]  [810c6985] cpu_startup_entry+0xf5/0x290
[71148.164420]  [815f33e7] rest_init+0x77/0x80
[71148.224791]  [81a45057] start_kernel+0x429/0x44a
[71148.290354]  [81a44a37] ? repair_env_string+0x5c/0x5c
[71148.361120]  [81a44120] ? early_idt_handlers+0x120/0x120
[71148.434999]  [81a445ee] x86_64_start_reservations+0x2a/0x2c
[71148.511997]  [81a44742] x86_64_start_kernel+0x152/0x175






___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Maximum VM memory

2015-05-10 Thread Alex Crow

On 10/05/15 13:17, Christophe TREFOIS wrote:

Dear ovirt users,

We have a special machine which has 1 TB RAM.

The idea is to have 3-4 VMs on there which should be split like 
85/5/5/5 % of total RAM.


I have three questions:

1. Is there any reason not to put up a VM with so much RAM


One reason I can think of - migration of a VM with 1 TB of RAM will most 
likely never complete and will time out. I have problems migrating machines 
with 64 GB or more of RAM on RHEV.


If you don't need live migration then you should be OK.

Alex
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] used engine-iso-uploader to upload ISO , but cant find the images

2015-05-06 Thread Alex Crow



On 06/05/15 06:36, John Joseph wrote:

Hi All
I finished installing oVirt 3.5.2-1 on CentOS 6.6 64-bit and it is 
working, and now I am slowly exploring other features.
I have uploaded an ISO image, and the message says that it has been uploaded as 
follows:

# engine-iso-uploader -i ISO_DOMAIN  upload /root/CentOS-6.6-x86_64-Kazoo-0.iso
Please provide the REST API password for the admin@internal oVirt Engine user 
(CTRL+D to abort):
Uploading, please wait...
INFO: Start uploading /root/CentOS-6.6-x86_64-Kazoo-0.iso
WARNING: failed to refresh the list of files available in the ISO_DOMAIN ISO 
storage domain. Please refresh the list manually using the 'Refresh' button in 
the oVirt Webadmin console.
INFO: /root/CentOS-6.6-x86_64-Kazoo-0.iso uploaded successfully

When I try to refresh, I still cannot see the ISO image; I did wait for 10 to 
15 minutes and then refreshed again.
I have attached the screen shot for reference
thanks
Joseph John



You need to attach and activate your ISO domain.
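
Once it's attached and active, the uploader should see it too; a quick check:

engine-iso-uploader list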

Cheers

Alex


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] bonding 802.3ad mode

2015-03-18 Thread Alex Crow
The balancing on 802.3ad only occurs for different network flows based 
on a hash of source and destination MAC (or can be made to add IP 
addresses into the calculation). A single flow will only use a single 
NIC in ad mode.
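
If the traffic consists of several flows, hashing on layer 3+4 usually spreads 
them better than the default layer 2 hash; a sketch of the bond options (the 
file path is the usual EL one, and the switch side must be configured for LACP; 
strictly speaking layer3+4 isn't fully 802.3ad-compliant, but it generally works):

# /etc/sysconfig/network-scripts/ifcfg-bond0
BONDING_OPTS="mode=802.3ad miimon=100 xmit_hash_policy=layer3+4"

oVirt also lets you enter custom bonding options when you set the bond up in 
Setup Host Networks. A single NFS copy is still one flow, though, so it will 
stay on one NIC either way.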


Alex



On 18/03/15 16:17, Nathanaël Blanchet wrote:

Hi all,

I usually create a mode 4 bond0 interface with two 1 Gb/s interfaces 
on all my hosts, and ethtool bond0 reports a functional 2000 Mb/s. 
However, when importing a VM from the export domain (NFS with a speed 
of 4 GB/s), I always get this alert:
Host siple has network interface which exceeded the defined threshold 
[95%] (em3: transmit rate [0%], receive rate [100%])
It seems that the second NIC never works while the first one is 
overloaded.
Is this expected behaviour? I believed that the flow was balanced 
between the two interfaces in 802.3ad mode.






___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Change gluster primary

2015-01-24 Thread Alex Crow


On 20/01/15 20:46, John Gardeniers wrote:

Hi Alex,

I understand what you're saying and certainly there is no primary from 
the Gluster perspective. However, things are quite different as far as 
Ovirt/RHEV is concerned.


We had an incident last week where we had to take nix off-line. A 
network glitch then caused our RHEV to briefly lose connection to 
jupiter. This resulted in all VMs crashing because the system was 
trying to reconnect to nix. It did not try to reconnect to jupiter, 
despite it being configured as the fail-over server. 


Hi,

As for the above, if you had quorum configured on the gluster side 
(either by applying the relevant recommended options on gluster, or by 
having created the volume on ovirt), loss of storage functionality is to 
be expected. In a two-node cluster, if one node goes down you lose quorum and 
the volume becomes read-only. In this case oVirt should really 
pause the VMs.
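
For reference, the quorum options usually involved are along these lines (the 
volume name is a placeholder; with only two nodes, client quorum makes the 
volume read-only whenever the first brick is unreachable, which is why a 
third/arbiter node is normally recommended):

gluster volume set rhev-data cluster.quorum-type auto
gluster volume set rhev-data cluster.server-quorum-type server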


In the end I had to bring nix back on line. RHEV still wouldn't 
connect. Finally, I had to reboot each hypervisor. Even then, two of 
them still failed to reconnect and could only be brought back by 
performing a full reinstall (we're using the cut-down dedicated RH 
hypervisors, not the RHEL+hypervisor that you use). All in all, quite 
a disastrous situation that lost us a couple of hours. So yes, there 
is a primary from the Ovirt/RHEV perspective and I'm really 
disappointed in how the system completely failed to handle the 
situation.


Looks like there are some bugs there. When we have had storage issues on 
RHEV we see all our VMs pausing, not crashing. BTW, we do use the dedicated 
hypervisor (like oVirt Node).


Cheers

Alex




regards,
John


On 21/01/15 00:20, Alex Crow wrote:

Hi John,

There isn't really a primary in gluster. If you're using a glusterfs 
storage domain, you could turn off nix and the VMs would continue 
to run (although you'd have to disable quorum if you currently have 
it enabled on the volume, and you'd have to repoint the domain at 
some later point). If you're using NFS access you would have to 
repoint your storage to the remaining machine immediately.


The only snag I can see is that you can't detach the master storage 
domain in Ovirt if any VMs are running. I think you'd have to shut 
the VMs down, put the storage domain into maintenance, and then edit it.


Cheers

Alex

On 19/01/15 23:44, John Gardeniers wrote:
We are using Gluster as our storage backend. Gluster is configured 
as a 2-node replica. The two nodes are named nix and jupiter. At the 
Ovirt (RHEV really) end we have the gluster path configured as 
nix:/gluster-rhev, with a mount option of 
backupvolfile-server=jupiter.om.net. We now need to replace nix 
with a new server, which cannot have the same name. That new server 
will be the primary, with jupiter remaining the secondary.


We will have all VMs and hypervisors shut down when we make this 
change.


What is the best and/or easiest way to do this? Should we just 
disconnect the storage and re-attach it using the new gluster 
primary? If we do that will our VMs just work or do we need to take 
other steps?


An alternative, which I suspect will be somewhat controversial, 
would be to make a direct edit of the engine database. Would that 
work any better or does that add more dangers (assuming the edit is 
done correctly)?


regards,
John

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users





___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Change gluster primary

2015-01-20 Thread Alex Crow

Hi John,

There isn't really a primary in gluster. If you're using a glusterfs 
storage domain, you could turn off nix and the VMs would continue to 
run (although you'd have to disable quorum if you currently have it 
enabled on the volume, and you'd have to repoint the domain at some 
later point). If you're using NFS access you would have to repoint your 
storage to the remaining machine immediately.
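
The hosts end up doing the equivalent of a fuse mount like this (mount point is 
a placeholder), so the backupvolfile-server option is only consulted when the 
volfile is fetched at mount time, not for failover of an already-mounted volume:

mount -t glusterfs -o backupvolfile-server=jupiter.om.net nix:/gluster-rhev /mnt/test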


The only snag I can see is that you can't detach the master storage 
domain in Ovirt if any VMs are running. I think you'd have to shut the 
VMs down, put the storage domain into maintenance, and then edit it.


Cheers

Alex

On 19/01/15 23:44, John Gardeniers wrote:
We are using Gluster as our storage backend. Gluster is configured as 
a 2-node replica. The two nodes are named nix and jupiter. At the Ovirt 
(RHEV really) end we have the gluster path configured as 
nix:/gluster-rhev, with a mount option of 
backupvolfile-server=jupiter.om.net. We now need to replace nix with 
a new server, which cannot have the same name. That new server will be 
the primary, with jupiter remaining the secondary.


We will have all VMs and hypervisors shut down when we make this change.

What is the best and/or easiest way to do this? Should we just 
disconnect the storage and re-attach it using the new gluster primary? 
If we do that will our VMs just work or do we need to take other steps?


An alternative, which I suspect will be somewhat controversial, would 
be to make a direct edit of the engine database. Would that work any 
better or does that add more dangers (assuming the edit is done 
correctly)?


regards,
John

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users




___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Migration from Xenserver 6.2 to KVM.

2015-01-20 Thread Alex Crow
Is there a way to export the VM disk images from Xenserver? If so you 
should be able to spin them up on a KVM host and then use v2v to import 
these into oVirt.
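
If it helps, XenServer can dump a VM to an .xva from its own CLI (VM name and 
path are placeholders); the .xva then still has to be unpacked/converted to a 
raw or qcow2 image before KVM or v2v can use it:

xe vm-export vm=myvm filename=/mnt/backup/myvm.xva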


Cheers

Alex

On 20/01/15 11:57, mots wrote:

-Original Message-

From: Yaniv Dary yd...@redhat.com
Sent: Tue, 20 January 2015 08:47
To: Kalil de A. Carvalho kali...@gmail.com
CC: users@ovirt.org
Subject: Re: [ovirt-users] Migration from Xenserver 6.2 to KVM.



---
From: Kalil de A. Carvalho kali...@gmail.com
To: users@ovirt.org
Sent: Monday, January 19, 2015 7:43:31 PM
Subject: [ovirt-users] Migration from Xenserver 6.2 to KVM.

Hello all.

I work in a company that want test KVM/oVirt.

The problem is that the current environment is running Xenserver 6.2 and it is 
mandatory that we can migrate from Xenserver to KVM.

Researching it, I saw that this is not supported.

Is this true?

You can use a v2v tool to migrate VMs from Xen to KVM, I believe.

Should not be a problem.

That only works if it's a GNU/Linux installation running Xen, not with 
Xenserver. From what I've seen, the reason for this is that Xenserver doesn't 
include libvirt, so v2v can't connect to it.


Best regards.

--

Atenciosamente,

Kalil de A. Carvalho

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users

___

Users mailing list

Users@ovirt.org

http://lists.ovirt.org/mailman/listinfo/users




___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Put the engine inside hosts after installation

2015-01-17 Thread Alex Crow


On 16/01/15 19:26, Mario Giammarco wrote:



2015-01-16 12:37 GMT+01:00 Simone Tiraboschi stira...@redhat.com 
mailto:stira...@redhat.com:



    HA capability is provided for other VMs by the oVirt engine. But who
    provides it if the engine itself is on a VM on the host that it's
    managing?
    HA for the Engine VM needs to be managed by the hosts and not the
    Engine itself: so we have ovirt-hosted-engine-ha, which ensures HA
    for the engine VM; the engine can then provide HA for other VMs.

I am surprised. I supposed that HA was provided by the cluster itself, 
like in XenServer. So you're telling me that it is the engine that checks 
whether servers and VMs are up, like in CloudStack?




This is just how any VM self-hosted setup would work. The 
'engine/management' VM has to have HA managed by something other than 
the engine itself - otherwise if the engine is down how would it know or 
be able to restart itself? In VMWare or Xenserver there would have to be 
a separate system other than that in the engine VM to make sure that the 
management engine VM is a) running on at least one host on the cluster 
and b) *cannot* be running on more than one host to avoid screwing its 
own storage volume (ie heartbeat/fencing).


Then this managed engine only has to take care of keeping its own VMs 
up. Logically I cannot see any other way this could possibly work - see 
chicken and egg!
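
On a hosted-engine setup the hosts' view of that HA state can be checked 
directly, e.g.:

hosted-engine --vm-status

Each HA host reports its score and whether it believes the engine VM is up.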


Cheers

Alex


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] gfapi, 3.5.1

2014-12-16 Thread Alex Crow

Hi,

Anyone know if this is due to work correctly in the next iteration of 3.5?

Thanks

Alex

On 09/12/14 10:33, Alex Crow wrote:

Hi,

Will the vdsm patches to properly enable libgfapi storage for VMs (and 
the matching refactored code in the hosted-engine setup scripts) 
make it into 3.5.1? It's not in the snapshots yet, it seems.


I notice it's in master/3.6 snapshot but something stops the HA stuff 
in self-hosted setups from connecting storage:


from Master test setup:
/var/log/ovirt-hosted-engine-ha/broker.log

MainThread::INFO::2014-12-08 
19:22:56,287::hosted_engine::222::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_get_hostname) 
Found certificate common name: 172.17.10.50
MainThread::WARNING::2014-12-08 
19:22:56,395::hosted_engine::497::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_vdsm) 
Failed to connect storage, waiting '15' seconds before the next attempt
MainThread::WARNING::2014-12-08 
19:23:11,501::hosted_engine::497::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_vdsm) 
Failed to connect storage, waiting '15' seconds before the next attempt
MainThread::WARNING::2014-12-08 
19:23:26,610::hosted_engine::497::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_vdsm) 
Failed to connect storage, waiting '15' seconds before the next attempt
MainThread::WARNING::2014-12-08 
19:23:41,717::hosted_engine::497::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_vdsm) 
Failed to connect storage, waiting '15' seconds before the next attempt
MainThread::WARNING::2014-12-08 
19:23:56,824::hosted_engine::497::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_vdsm) 
Failed to connect storage, waiting '15' seconds before the next attempt
MainThread::ERROR::2014-12-08 
19:24:11,840::hosted_engine::500::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_vdsm) 
Failed trying to connect storage:
MainThread::ERROR::2014-12-08 
19:24:11,840::agent::173::ovirt_hosted_engine_ha.agent.agent.Agent::(_run_agent) 
Error: 'Failed trying to connect storage' - trying to restart agent
MainThread::WARNING::2014-12-08 
19:24:16,845::agent::176::ovirt_hosted_engine_ha.agent.agent.Agent::(_run_agent) 
Restarting agent, attempt '8'
MainThread::INFO::2014-12-08 
19:24:16,855::hosted_engine::222::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_get_hostname) 
Found certificate common name: 172.17.10.50
MainThread::WARNING::2014-12-08 
19:24:16,962::hosted_engine::497::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_vdsm) 
Failed to connect storage, waiting '15' seconds before the next attempt
MainThread::WARNING::2014-12-08 
19:24:32,069::hosted_engine::497::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_vdsm) 
Failed to connect storage, waiting '15' seconds before the next attempt
MainThread::WARNING::2014-12-08 
19:24:47,181::hosted_engine::497::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_vdsm) 
Failed to connect storage, waiting '15' seconds before the next attempt
MainThread::WARNING::2014-12-08 
19:25:02,288::hosted_engine::497::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_vdsm) 
Failed to connect storage, waiting '15' seconds before the next attempt
MainThread::WARNING::2014-12-08 
19:25:17,389::hosted_engine::497::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_vdsm) 
Failed to connect storage, waiting '15' seconds before the next attempt
MainThread::ERROR::2014-12-08 
19:25:32,404::hosted_engine::500::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_vdsm) 
Failed trying to connect storage:
MainThread::ERROR::2014-12-08 
19:25:32,404::agent::173::ovirt_hosted_engine_ha.agent.agent.Agent::(_run_agent) 
Error: 'Failed trying to connect storage' - trying to restart agent
MainThread::WARNING::2014-12-08 
19:25:37,409::agent::176::ovirt_hosted_engine_ha.agent.agent.Agent::(_run_agent) 
Restarting agent, attempt '9'
MainThread::ERROR::2014-12-08 
19:25:37,409::agent::178::ovirt_hosted_engine_ha.agent.agent.Agent::(_run_agent) 
Too many errors occurred, giving up. Please review the log and 
consider filing a bug.
MainThread::INFO::2014-12-08 
19:25:37,409::agent::118::ovirt_hosted_engine_ha.agent.agent.Agent::(run) 
Agent shutting down

(END) - Next: /var/log/ovirt-hosted-engine-ha/broker.log

vdsm.log:

Detector thread::DEBUG::2014-12-08 
19:20:45,458::protocoldetector::214::vds.MultiProtocolAcceptor::(_remove_connection) 
Removing connection 127.0.0.1:53083
Detector thread::DEBUG::2014-12-08 
19:20:45,458::BindingXMLRPC::1193::XmlDetector::(handleSocket) xml 
over http detected from ('127.0.0.1', 53083)
Thread-44::DEBUG::2014-12-08 
19:20:45,459::BindingXMLRPC::318::vds::(wrapper) client [127.0.0.1]
Thread-44::DEBUG::2014-12-08 
19:20:45,460::task::592::Storage.TaskManager.Task::(_updateState) 
Task=`b5accf8f-014a-412d-9fb8-9e9447d49b72

[ovirt-users] gfapi, 3.5.1

2014-12-09 Thread Alex Crow

Hi,

Will the vdsm patches to properly enable libgfapi storage for VMs (and 
the matching refactored code in the hosted-engine setup scripts) 
make it into 3.5.1? It's not in the snapshots yet, it seems.


I notice it's in master/3.6 snapshot but something stops the HA stuff in 
self-hosted setups from connecting storage:


from Master test setup:
/var/log/ovirt-hosted-engine-ha/broker.log

MainThread::INFO::2014-12-08 
19:22:56,287::hosted_engine::222::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_get_hostname) 
Found certificate common name: 172.17.10.50
MainThread::WARNING::2014-12-08 
19:22:56,395::hosted_engine::497::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_vdsm) 
Failed to connect storage, waiting '15' seconds before the next attempt
MainThread::WARNING::2014-12-08 
19:23:11,501::hosted_engine::497::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_vdsm) 
Failed to connect storage, waiting '15' seconds before the next attempt
MainThread::WARNING::2014-12-08 
19:23:26,610::hosted_engine::497::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_vdsm) 
Failed to connect storage, waiting '15' seconds before the next attempt
MainThread::WARNING::2014-12-08 
19:23:41,717::hosted_engine::497::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_vdsm) 
Failed to connect storage, waiting '15' seconds before the next attempt
MainThread::WARNING::2014-12-08 
19:23:56,824::hosted_engine::497::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_vdsm) 
Failed to connect storage, waiting '15' seconds before the next attempt
MainThread::ERROR::2014-12-08 
19:24:11,840::hosted_engine::500::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_vdsm) 
Failed trying to connect storage:
MainThread::ERROR::2014-12-08 
19:24:11,840::agent::173::ovirt_hosted_engine_ha.agent.agent.Agent::(_run_agent) 
Error: 'Failed trying to connect storage' - trying to restart agent
MainThread::WARNING::2014-12-08 
19:24:16,845::agent::176::ovirt_hosted_engine_ha.agent.agent.Agent::(_run_agent) 
Restarting agent, attempt '8'
MainThread::INFO::2014-12-08 
19:24:16,855::hosted_engine::222::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_get_hostname) 
Found certificate common name: 172.17.10.50
MainThread::WARNING::2014-12-08 
19:24:16,962::hosted_engine::497::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_vdsm) 
Failed to connect storage, waiting '15' seconds before the next attempt
MainThread::WARNING::2014-12-08 
19:24:32,069::hosted_engine::497::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_vdsm) 
Failed to connect storage, waiting '15' seconds before the next attempt
MainThread::WARNING::2014-12-08 
19:24:47,181::hosted_engine::497::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_vdsm) 
Failed to connect storage, waiting '15' seconds before the next attempt
MainThread::WARNING::2014-12-08 
19:25:02,288::hosted_engine::497::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_vdsm) 
Failed to connect storage, waiting '15' seconds before the next attempt
MainThread::WARNING::2014-12-08 
19:25:17,389::hosted_engine::497::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_vdsm) 
Failed to connect storage, waiting '15' seconds before the next attempt
MainThread::ERROR::2014-12-08 
19:25:32,404::hosted_engine::500::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_vdsm) 
Failed trying to connect storage:
MainThread::ERROR::2014-12-08 
19:25:32,404::agent::173::ovirt_hosted_engine_ha.agent.agent.Agent::(_run_agent) 
Error: 'Failed trying to connect storage' - trying to restart agent
MainThread::WARNING::2014-12-08 
19:25:37,409::agent::176::ovirt_hosted_engine_ha.agent.agent.Agent::(_run_agent) 
Restarting agent, attempt '9'
MainThread::ERROR::2014-12-08 
19:25:37,409::agent::178::ovirt_hosted_engine_ha.agent.agent.Agent::(_run_agent) 
Too many errors occurred, giving up. Please review the log and consider 
filing a bug.
MainThread::INFO::2014-12-08 
19:25:37,409::agent::118::ovirt_hosted_engine_ha.agent.agent.Agent::(run) Agent 
shutting down

(END) - Next: /var/log/ovirt-hosted-engine-ha/broker.log

vdsm.log:

Detector thread::DEBUG::2014-12-08 
19:20:45,458::protocoldetector::214::vds.MultiProtocolAcceptor::(_remove_connection) 
Removing connection 127.0.0.1:53083
Detector thread::DEBUG::2014-12-08 
19:20:45,458::BindingXMLRPC::1193::XmlDetector::(handleSocket) xml over 
http detected from ('127.0.0.1', 53083)
Thread-44::DEBUG::2014-12-08 
19:20:45,459::BindingXMLRPC::318::vds::(wrapper) client [127.0.0.1]
Thread-44::DEBUG::2014-12-08 
19:20:45,460::task::592::Storage.TaskManager.Task::(_updateState) 
Task=`b5accf8f-014a-412d-9fb8-9e9447d49b72`::moving from state init - 
state preparing
Thread-44::INFO::2014-12-08 
19:20:45,460::logUtils::48::dispatcher::(wrapper) Run and