Re: [ovirt-users] vm pauses with "vm has paused due to unknown storage error"

2016-06-23 Thread Sahina Bose
Can you post the gluster mount logs from the node where the paused VM was 
running (under /var/log/glusterfs/rhev-data-center-mnt-glusterSD.log)?

Which version of glusterfs are you running?
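
(For reference, a rough way to collect what is being asked for here, assuming
the default log locations and that the gluster CLI is installed on the node;
the exact mount log file name depends on the mount path, so the wildcard below
is only illustrative:)

   glusterfs --version
   gluster volume info
   ls -l /var/log/glusterfs/rhev-data-center-mnt-glusterSD*.log
   tar czf gluster-mount-logs.tar.gz /var/log/glusterfs/rhev-data-center-mnt-glusterSD*.log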

On 06/24/2016 07:49 AM, Bill Bill wrote:


Hello,

I have 3 nodes running both oVirt and Gluster on 4 SSDs each. At the 
moment there are two physical NICs: one has public internet access 
and the other is a non-routable network used for ovirtmgmt & gluster.


In the logical networks I have selected gluster for the non-routable 
network running ovirtmgmt and gluster; however, two VMs randomly pause 
for what seems like no reason. They can both be resumed without issue.


One test VM has 4GB of memory and a small disk – no problems with this 
one. Two others have 800GB disks and 32GB of RAM – both VMs exhibit 
the same issue.


I also see these in the oVirt dashboard:

Failed to update OVF disks 9e60328d-29af-4533-84f9-633d87f548a7, OVF 
data isn't updated on those OVF stores (Data Center x, Storage 
Domain sr-volume01).


Jun 23, 2016 9:54:03 PM

VDSM command failed: Could not acquire resource. Probably resource 
factory threw an exception.: ()


///

VM x has been paused due to unknown storage error.

///

In the error log on the engine, I see these:

ERROR 
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
(ForkJoinPool-1-worker-7) [10caf93e] Correlation ID: null, Call Stack: 
null, Custom Event ID: -1, Message: VM xx has been paused due to 
unknown storage error.


INFO 
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
(ForkJoinPool-1-worker-11) [10caf93e] Correlation ID: null, Call 
Stack: null, Custom Event ID: -1, Message: VM xx has recovered 
from paused back to up.


///

Hostnames are all local to /etc/hosts on all servers – they also 
resolve without issue from each host.


//

2016-06-23 22:08:59,611 WARN 
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc] 
(DefaultQuartzScheduler_Worker-76) [1c1cf4f] Could not associate brick 
'ovirt3435:/mnt/data/sr-volume01' of volume 
'93e36cdc-ab1b-41ec-ac7f-966cf3856b59' with correct network as no 
gluster network found in cluster '75bd64de-04b2-4a99-9cd0-b63e919b9aca'


2016-06-23 22:08:59,614 WARN 
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc] 
(DefaultQuartzScheduler_Worker-76) [1c1cf4f] Could not associate brick 
'ovirt3637:/mnt/data/sr-volume01' of volume 
'93e36cdc-ab1b-41ec-ac7f-966cf3856b59' with correct network as no 
gluster network found in cluster '75bd64de-04b2-4a99-9cd0-b63e919b9aca'


2016-06-23 22:08:59,616 WARN 
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc] 
(DefaultQuartzScheduler_Worker-76) [1c1cf4f] Could not associate brick 
'ovirt3839:/mnt/data/sr-volume01' of volume 
'93e36cdc-ab1b-41ec-ac7f-966cf3856b59' with correct network as no 
gluster network found in cluster '75bd64de-04b2-4a99-9cd0-b63e919b9aca'


2016-06-23 22:08:59,618 WARN 
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc] 
(DefaultQuartzScheduler_Worker-76) [1c1cf4f] Could not associate brick 
'ovirt3435:/mnt/data/distributed' of volume 
'b887b05e-2ea6-496e-9552-155d658eeaa6' with correct network as no 
gluster network found in cluster '75bd64de-04b2-4a99-9cd0-b63e919b9aca'


2016-06-23 22:08:59,620 WARN 
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc] 
(DefaultQuartzScheduler_Worker-76) [1c1cf4f] Could not associate brick 
'ovirt3637:/mnt/data/distributed' of volume 
'b887b05e-2ea6-496e-9552-155d658eeaa6' with correct network as no 
gluster network found in cluster '75bd64de-04b2-4a99-9cd0-b63e919b9aca'


2016-06-23 22:08:59,622 WARN 
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc] 
(DefaultQuartzScheduler_Worker-76) [1c1cf4f] Could not associate brick 
'ovirt3839:/mnt/data/distributed' of volume 
'b887b05e-2ea6-496e-9552-155d658eeaa6' with correct network as no 
gluster network found in cluster '75bd64de-04b2-4a99-9cd0-b63e919b9aca'


2016-06-23 22:08:59,624 WARN 
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc] 
(DefaultQuartzScheduler_Worker-76) [1c1cf4f] Could not associate brick 
'ovirt3435:/mnt/data/iso' of volume 
'89f32457-c8c3-490e-b491-16dd27de0073' with correct network as no 
gluster network found in cluster '75bd64de-04b2-4a99-9cd0-b63e919b9aca'


2016-06-23 22:08:59,626 WARN 
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc] 
(DefaultQuartzScheduler_Worker-76) [1c1cf4f] Could not associate brick 
'ovirt3637:/mnt/data/iso' of volume 
'89f32457-c8c3-490e-b491-16dd27de0073' with correct network as no 
gluster network found in cluster '75bd64de-04b2-4a99-9cd0-b63e919b9aca'


2016-06-23 22:08:59,628 WARN 
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc] 
(DefaultQuartzScheduler_Worker-76) [1c1cf4f] Could not 

Re: [ovirt-users] Hosted Engine Spamming Transition E-Mails

2016-06-23 Thread Charles Tassell

Hi Didi,

  Unfortunately the box got reinstalled before I could grab the logs.  
I think I have an idea as to what caused the issue though. When I was 
originally registering the oVirt node, one of the fiber channel volumes 
wasn't detected, so it just kept waiting to register with the hosted 
engine, saying it couldn't connect to the storage pool.  I rebooted the 
host, which caused the FC volume to show up, and it looked like everything 
was fine.  We were able to host VMs on the box, but then it started 
spitting out the error E-Mails.  I migrated the VMs off to other hosts 
(2 went fine, 2 went into a paused state citing storage error) and then 
rebooted the host.  When the host rebooted I shut down the VDSM service 
and started it manually from the shell prompt.  I'm not sure if it was 
right when the box came up or when I started VDSM manually, but that 
messed up our storage pool and crashed all the VMs in the cluster, 
putting them into a paused state.


  Sorry I couldn't grab the logs; one of the other techs had some spare 
time, so he reinstalled the OS the same afternoon.


On 2016-06-21 04:34 PM, Yedidyah Bar David wrote:

On Tue, Jun 21, 2016 at 7:53 PM, Charles Tassell  wrote:

Hi Didi,

   Okay, I looked at the logs as you suggested and found one of the hosts was
showing that it couldn't connect to the local VDSM.  I restarted the host
and then tried running VDSM through the console so that I could see the
debugging output, and it crashed my storage domain taking down all the VMs
in the cluster. So that wasn't good... ;-)

   I'm going to leave that host down and reinstall it.  I think the issue was
that when I originally installed it there was a problem with attaching to
the FC storage.  I rebooted it and it seemed to be working fine, joined the
cluster and all, but when I ran VDSM manually it looked like it wanted to
reinitialize the storage or something.

Any chance you can open a bug about this and attach all the logs you can
get from this host before reinstalling it? Thanks!



On 2016-06-19 04:54 AM, Yedidyah Bar David wrote:

On Sat, Jun 18, 2016 at 6:04 PM, Charles Tassell 
wrote:

Hi Folks,

I'm having a strange issue with my 3.6 setup.  For the past few days
the
system has been spamming me with "The state machine changed state"
E-Mails.
First I get the "EngineUnexpectedlyDown-EngineDown", then
"EngineDown-EngineStart" then "EngineStart-EngineStarting"  I checked the
hosted engine VM and it's been up for 16 days and the system looks to be
running fine, so I'm wondering what's going on.

   And when I say spamming, I mean I'm getting 200+ E-Mails a day.  It
seems
to trigger every 20 minutes or so with about 2 minutes between each of
the 3
messages in the set.

Please check/share /var/log/ovirt-hosted-engine-ha/agent.log on your
hosts.
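
For reference, a rough one-liner to pull just the state transitions out of that
log (default path as above; the state names are the ones from the notification
mails):

   grep -E "EngineUnexpectedlyDown|EngineDown|EngineStart|EngineStarting" \
       /var/log/ovirt-hosted-engine-ha/agent.log | tail -50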

Thanks,







___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] vm pauses with "vm has paused due to unknown storage error"

2016-06-23 Thread Bill Bill
Hello,

I have 3 nodes running both oVirt and Gluster on 4 SSDs each. At the moment 
there are two physical NICs: one has public internet access and the other is a 
non-routable network used for ovirtmgmt & gluster.

In the logical networks I have selected gluster for the non-routable network 
running ovirtmgmt and gluster; however, two VMs randomly pause for what seems 
like no reason. They can both be resumed without issue.

One test VM has 4GB of memory and a small disk – no problems with this one. Two 
others have 800GB disks and 32GB of RAM – both VMs exhibit the same issue.

I also see these in the oVirt dashboard:


Failed to update OVF disks 9e60328d-29af-4533-84f9-633d87f548a7, OVF data isn't 
updated on those OVF stores (Data Center x, Storage Domain sr-volume01).

Jun 23, 2016 9:54:03 PM

VDSM command failed: Could not acquire resource. Probably resource factory 
threw an exception.: ()

///

VM x has been paused due to unknown storage error.

///

In the error log on the engine, I see these:

ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
(ForkJoinPool-1-worker-7) [10caf93e] Correlation ID: null, Call Stack: null, 
Custom Event ID: -1, Message: VM xx has been paused due to unknown storage 
error.

INFO  [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
(ForkJoinPool-1-worker-11) [10caf93e] Correlation ID: null, Call Stack: null, 
Custom Event ID: -1, Message: VM xx has recovered from paused back to up.

///

Hostnames are all local to /etc/hosts on all servers – they also resolve 
without issue from each host.
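
For illustration only, the entries look roughly like this (the host names are
the ones that show up in the engine log below; the IP addresses here are just
placeholders, not the real ones):

   10.0.0.34   ovirt3435
   10.0.0.36   ovirt3637
   10.0.0.38   ovirt3839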

//

2016-06-23 22:08:59,611 WARN  
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc] 
(DefaultQuartzScheduler_Worker-76) [1c1cf4f] Could not associate brick 
'ovirt3435:/mnt/data/sr-volume01' of volume 
'93e36cdc-ab1b-41ec-ac7f-966cf3856b59' with correct network as no gluster 
network found in cluster '75bd64de-04b2-4a99-9cd0-b63e919b9aca'
2016-06-23 22:08:59,614 WARN  
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc] 
(DefaultQuartzScheduler_Worker-76) [1c1cf4f] Could not associate brick 
'ovirt3637:/mnt/data/sr-volume01' of volume 
'93e36cdc-ab1b-41ec-ac7f-966cf3856b59' with correct network as no gluster 
network found in cluster '75bd64de-04b2-4a99-9cd0-b63e919b9aca'
2016-06-23 22:08:59,616 WARN  
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc] 
(DefaultQuartzScheduler_Worker-76) [1c1cf4f] Could not associate brick 
'ovirt3839:/mnt/data/sr-volume01' of volume 
'93e36cdc-ab1b-41ec-ac7f-966cf3856b59' with correct network as no gluster 
network found in cluster '75bd64de-04b2-4a99-9cd0-b63e919b9aca'
2016-06-23 22:08:59,618 WARN  
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc] 
(DefaultQuartzScheduler_Worker-76) [1c1cf4f] Could not associate brick 
'ovirt3435:/mnt/data/distributed' of volume 
'b887b05e-2ea6-496e-9552-155d658eeaa6' with correct network as no gluster 
network found in cluster '75bd64de-04b2-4a99-9cd0-b63e919b9aca'
2016-06-23 22:08:59,620 WARN  
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc] 
(DefaultQuartzScheduler_Worker-76) [1c1cf4f] Could not associate brick 
'ovirt3637:/mnt/data/distributed' of volume 
'b887b05e-2ea6-496e-9552-155d658eeaa6' with correct network as no gluster 
network found in cluster '75bd64de-04b2-4a99-9cd0-b63e919b9aca'
2016-06-23 22:08:59,622 WARN  
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc] 
(DefaultQuartzScheduler_Worker-76) [1c1cf4f] Could not associate brick 
'ovirt3839:/mnt/data/distributed' of volume 
'b887b05e-2ea6-496e-9552-155d658eeaa6' with correct network as no gluster 
network found in cluster '75bd64de-04b2-4a99-9cd0-b63e919b9aca'
2016-06-23 22:08:59,624 WARN  
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc] 
(DefaultQuartzScheduler_Worker-76) [1c1cf4f] Could not associate brick 
'ovirt3435:/mnt/data/iso' of volume '89f32457-c8c3-490e-b491-16dd27de0073' with 
correct network as no gluster network found in cluster 
'75bd64de-04b2-4a99-9cd0-b63e919b9aca'
2016-06-23 22:08:59,626 WARN  
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc] 
(DefaultQuartzScheduler_Worker-76) [1c1cf4f] Could not associate brick 
'ovirt3637:/mnt/data/iso' of volume '89f32457-c8c3-490e-b491-16dd27de0073' with 
correct network as no gluster network found in cluster 
'75bd64de-04b2-4a99-9cd0-b63e919b9aca'
2016-06-23 22:08:59,628 WARN  
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc] 
(DefaultQuartzScheduler_Worker-76) [1c1cf4f] Could not associate brick 
'ovirt3839:/mnt/data/iso' of volume '89f32457-c8c3-490e-b491-16dd27de0073' with 
correct network as no gluster network found in cluster 
'75bd64de-04b2-4a99-9cd0-b63e919b9aca'
2016-06-23 22:08:59,629 INFO  

[ovirt-users] oVirt 4.0.0 - Cluster Compatibility Version

2016-06-23 Thread Scott
Hello list,

I've successfully upgraded oVirt to 4.0 from 3.6 on my engine and three
hosts.  However, it doesn't look like I can change the Cluster
Compatibility Version to 4.0.  It tells me I need to shut down all the VMs
in the cluster.  Except I use Hosted Engine.  How do I change the cluster
compatibility version when the engine is down?

Thanks in advance,
Scott
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Failed to start self hosted engine after upgrading oVirt to 4.0

2016-06-23 Thread Stefano Danzi

Hi!
After cleaning the metadata, yum did update vdsm:

[root@ovirt01 ~]# rpm -qva | grep vdsm
vdsm-yajsonrpc-4.18.4.1-0.el7.centos.noarch
vdsm-infra-4.18.4.1-0.el7.centos.noarch
vdsm-cli-4.18.4.1-0.el7.centos.noarch
vdsm-python-4.18.4.1-0.el7.centos.noarch
vdsm-hook-vmfex-dev-4.18.4.1-0.el7.centos.noarch
vdsm-xmlrpc-4.18.4.1-0.el7.centos.noarch
vdsm-4.18.4.1-0.el7.centos.x86_64
vdsm-api-4.18.4.1-0.el7.centos.noarch
vdsm-gluster-4.18.4.1-0.el7.centos.noarch
vdsm-jsonrpc-4.18.4.1-0.el7.centos.noarch

But this did not solve the issue:

- The host has no default gateway after a reboot
- The self hosted engine doesn't start.

vdsm.log:
https://drive.google.com/file/d/0ByMG4sDqvlZcVEJ5YVI1UWxrdE0/view?usp=sharing

Il 2016-06-23 21:41 Sandro Bonazzola ha scritto:

On Thu, Jun 23, 2016 at 6:36 PM, Stefano Danzi 
wrote:


Hi!
I've just upgraded oVirt from 3.6 to 4.0 and I'm not able to start
the self hosted engine.


Hi Stefano, can you please try "yum clean metadata" and "yum update"
again?
You should get vdsm 4.18.4.1; please let us know if this solves your
issue.

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] InClusterUpgrade Scheduling Policy

2016-06-23 Thread Scott
Hi Roman,

Thanks for the detailed steps.  I follow the idea you have outlined and I
think it's easier than what I thought of (moving my self hosted engine back
to physical hardware, upgrading and moving it back to self hosted).  I will
give it a spin in my build RHEV cluster tomorrow and let you know how I get
on.

Thanks again,
Scott

On Thu, Jun 23, 2016 at 2:41 PM Roman Mohr  wrote:

> Hi Scott,
>
> On Thu, Jun 23, 2016 at 8:54 PM, Scott  wrote:
> > Hello list,
> >
> > I'm trying to upgrade a self-hosted engine RHEV environment running
> 3.5/el6
> > to 3.6/el7.  I'm following the process outlined in these two documents:
> >
> >
> https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Virtualization/3.6/html/Self-Hosted_Engine_Guide/Upgrading_the_Self-Hosted_Engine_from_6_to_7.html
> > https://access.redhat.com/solutions/2300331
> >
> > The problem I'm having is I don't seem to be able to apply the
> > "InClusterUpgrade" policy (procedure 5.5, step 4).  I get the following
> > error:
> >
> > Can not start cluster upgrade mode, see below for details:
> > VM HostedEngine with id 5ca9cb38-82e5-4eea-8ff6-e2bc33598211 is
> configured
> > to be not migratable.
> >
> That is correct; only the he-agents on each host decide where the
> hosted engine VM can start.
>
> > But the HostedEngine VM is not one I can edit due to being mid-upgrade.
> And
> > even if I could, the setting it's complaining about can't be managed by
> the
> > engine (I tried in another RHEV instance).
> >
> Also true, it is very limited what you can currently do with the
> hosted engine VM.
>
>
> > Is this a bug?  What am I missing to be able to move on?  As it seems
> now,
> > the InClusterUpgrade scheduling policy is useless and can't actually be
> > used.
>
> That is indeed something the InClusterUpgrade does not take into
> consideration. I will file a bug report.
>
>  But what you can do is the following:
>
> You can create a temporary cluster, move one host and the hosted
> engine VM there, upgrade all hosts and then start the hosted-engine VM
> in the original cluster again.
>
> The detailed steps are:
>
> 1) Enter the global maintenance mode
> 2) Create a temporary cluster
> 3) Put one of the hosted engine hosts which does not currently host
> the engine into maintenance
> 4) Move this host to the temporary cluster
> 5) Stop the hosted-engine-vm with `hosted-engine --destroy-vm` (it
> should not come up again since you are in maintenance mode)
> 6) Start the hosted-engine-vm with `hosted-engine --start-vm` on the
> host in the temporary cluster
> 7) Now you can enable the InClusterUpgrade policy on your main cluster
> 8) Proceed with your main cluster as described in
>
> https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Virtualization/3.6/html/Self-Hosted_Engine_Guide/Upgrading_the_Self-Hosted_Engine_from_6_to_7.html
> 9) When all hosts are upgraded and InClusterUpgrade policy is disabled
> again, move the hosted-engine-vm back to the original cluster
> 10) Upgrade the last host
> 11) Migrate the last host back
> 12) Delete the temporary cluster
> 13) Deactivate maintenance mode
>
> Adding Sandro and Roy to keep me honest.
>
> Roman
>
> >
> > Thanks for any suggestions/help,
> > Scott
> >
> > ___
> > Users mailing list
> > Users@ovirt.org
> > http://lists.ovirt.org/mailman/listinfo/users
> >
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] InClusterUpgrade Scheduling Policy

2016-06-23 Thread Roman Mohr
Hi Scott,

On Thu, Jun 23, 2016 at 8:54 PM, Scott  wrote:
> Hello list,
>
> I'm trying to upgrade a self-hosted engine RHEV environment running 3.5/el6
> to 3.6/el7.  I'm following the process outlined in these two documents:
>
> https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Virtualization/3.6/html/Self-Hosted_Engine_Guide/Upgrading_the_Self-Hosted_Engine_from_6_to_7.html
> https://access.redhat.com/solutions/2300331
>
> The problem I'm having is I don't seem to be able to apply the
> "InClusterUpgrade" policy (procedure 5.5, step 4).  I get the following
> error:
>
> Can not start cluster upgrade mode, see below for details:
> VM HostedEngine with id 5ca9cb38-82e5-4eea-8ff6-e2bc33598211 is configured
> to be not migratable.
>
That is correct; only the he-agents on each host decide where the
hosted engine VM can start.

> But the HostedEngine VM is not one I can edit due to being mid-upgrade.  And
> even if I could, the setting it's complaining about can't be managed by the
> engine (I tried in another RHEV instance).
>
Also true, it is very limited what you can currently do with the
hosted engine VM.


> Is this a bug?  What am I missing to be able to move on?  As it seems now,
> the InClusterUpgrade scheduling policy is useless and can't actually be
> used.

That is indeed something the InClusterUpgrade does not take into
consideration. I will file a bug report.

 But what you can do is the following:

You can create a temporary cluster, move one host and the hosted
engine VM there, upgrade all hosts and then start the hosted-engine VM
in the original cluster again.

The detailed steps are:

1) Enter the global maintenance mode
2) Create a temporary cluster
3) Put one of the hosted engine hosts which does not currently host
the engine into maintenance
4) Move this host to the temporary cluster
5) Stop the hosted-engine-vm with `hosted-engine --destroy-vm` (it
should not come up again since you are in maintenance mode)
6) Start the hosted-engine-vm with `hosted-engine --start-vm` on the
host in the temporary cluster
7) Now you can enable the InClusterUpgrade policy on your main cluster
8) Proceed with your main cluster as described in
https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Virtualization/3.6/html/Self-Hosted_Engine_Guide/Upgrading_the_Self-Hosted_Engine_from_6_to_7.html
9) When all hosts are upgraded and InClusterUpgrade policy is disabled
again, move the hosted-engine-vm back to the original cluster
10) Upgrade the last host
11) Migrate the last host back
12) Delete the temporary cluster
13) Deactivate maintenance mode
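
A rough host-side sketch of the hosted-engine parts of this (steps 1, 5/6 and
13); flag spellings vary a bit between hosted-engine versions, so check
`hosted-engine --help` on your hosts first:

   hosted-engine --set-maintenance --mode=global   # step 1, run on any hosted-engine host
   hosted-engine --vm-status                       # see which host currently runs the engine VM
   # steps 5 and 6 use the --destroy-vm / --start-vm variants listed above,
   # run on the old host and on the temporary-cluster host respectively
   hosted-engine --set-maintenance --mode=none     # step 13, once everything is back in place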

Adding Sandro and Roy to keep me honest.

Roman

>
> Thanks for any suggestions/help,
> Scott
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Failed to start self hosted engine after upgrading oVirt to 4.0

2016-06-23 Thread Sandro Bonazzola
On Thu, Jun 23, 2016 at 6:36 PM, Stefano Danzi  wrote:

>
> Hi!
> I've just upgraded oVirt from 3.6 to 4.0 and I'm not able to start the self
> hosted engine.
>

Hi Stefano, can you please try "yum clean metadata" and "yum update" again?
You should get vdsm 4.18.4.1; please let us know if this solves your issue.
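
For reference, the full sequence on the host would look roughly like this
(the version string to expect is the one mentioned above):

   yum clean metadata
   yum update
   rpm -q vdsm    # should now report vdsm-4.18.4.1 or later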



>
> first thing is that the host network loses the default gateway
> configuration. But this is not the problem.
>
> Logs:
>
> ==> /var/log/ovirt-hosted-engine-ha/agent.log <==
> MainThread::INFO::2016-06-23
> 18:28:40,833::hosted_engine::669::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_storage_images)
> Reloading vm.conf from the shared storage domain
> MainThread::INFO::2016-06-23
> 18:28:40,833::config::206::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config::(refresh_local_conf_file)
> Trying to get a fresher copy of vm configuration from the OVF_STORE
> MainThread::INFO::2016-06-23
> 18:28:44,535::ovf_store::102::ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore::(scan)
> Found OVF_STORE: imgUUID:8d07965c-a5c4-4057-912d-901f80cf246c,
> volUUID:ce3aa63e-e1c4-498e-bdca-9d2e9f47f0f9
> MainThread::INFO::2016-06-23
> 18:28:44,582::ovf_store::102::ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore::(scan)
> Found OVF_STORE: imgUUID:bd9aaf0b-8435-4d78-9871-8c7a7f7fa02a,
> volUUID:3c477b06-063e-4f01-bd05-84c7d467742b
> MainThread::INFO::2016-06-23
> 18:28:44,674::ovf_store::111::ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore::(getEngineVMOVF)
> Extracting Engine VM OVF from the OVF_STORE
> MainThread::INFO::2016-06-23
> 18:28:44,675::ovf_store::118::ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore::(getEngineVMOVF)
> OVF_STORE volume path:
> /rhev/data-center/mnt/ovirt01.hawai.lan:_engine/46f55a31-f35f-465c-b3e2-df45c05e06a7/images/bd9aaf0b-8435-4d78-9871-8c7a7f7fa02a/3c477b06-063e-4f01-bd05-84c7d467742b
> MainThread::INFO::2016-06-23
> 18:28:44,682::config::226::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config::(refresh_local_conf_file)
> Found an OVF for HE VM, trying to convert
> MainThread::INFO::2016-06-23
> 18:28:44,684::config::231::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config::(refresh_local_conf_file)
> Got vm.conf from OVF_STORE
> MainThread::INFO::2016-06-23
> 18:28:44,684::hosted_engine::517::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_broker)
> Initializing ha-broker connection
> MainThread::INFO::2016-06-23
> 18:28:44,685::brokerlink::129::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(start_monitor)
> Starting monitor ping, options {'addr': '192.168.1.254'}
>
> ==> /var/log/ovirt-hosted-engine-ha/broker.log <==
> Thread-25::ERROR::2016-06-23
> 18:28:44,697::listener::182::ovirt_hosted_engine_ha.broker.listener.ConnectionHandler::(handle)
> Error while serving connection
> Traceback (most recent call last):
>   File
> "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/broker/listener.py",
> line 166, in handle
> data)
>   File
> "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/broker/listener.py",
> line 299, in _dispatch
> .set_storage_domain(client, sd_type, **options)
>   File
> "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/broker/storage_broker.py",
> line 66, in set_storage_domain
> self._backends[client].connect()
>   File
> "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/storage_backends.py",
> line 400, in connect
> volUUID=volume.volume_uuid
>   File
> "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/storage_backends.py",
> line 245, in _get_volume_path
> volUUID
>   File "/usr/lib64/python2.7/xmlrpclib.py", line 1233, in __call__
> return self.__send(self.__name, args)
>   File "/usr/lib64/python2.7/xmlrpclib.py", line 1587, in __request
> verbose=self.__verbose
>   File "/usr/lib64/python2.7/xmlrpclib.py", line 1273, in request
> return self.single_request(host, handler, request_body, verbose)
>   File "/usr/lib64/python2.7/xmlrpclib.py", line 1301, in single_request
> self.send_content(h, request_body)
>   File "/usr/lib64/python2.7/xmlrpclib.py", line 1448, in send_content
> connection.endheaders(request_body)
>   File "/usr/lib64/python2.7/httplib.py", line 975, in endheaders
> self._send_output(message_body)
>   File "/usr/lib64/python2.7/httplib.py", line 835, in _send_output
> self.send(msg)
>   File "/usr/lib64/python2.7/httplib.py", line 797, in send
> self.connect()
>   File "/usr/lib/python2.7/site-packages/vdsm/m2cutils.py", line 203, in
> connect
> sock = socket.create_connection((self.host, self.port), self.timeout)
>   File "/usr/lib64/python2.7/socket.py", line 571, in create_connection
> raise err
> error: [Errno 101] Network is unreachable
>
> ==> /var/log/ovirt-hosted-engine-ha/agent.log <==
> MainThread::INFO::2016-06-23
> 18:28:44,697::hosted_engine::602::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_broker)
> 

[ovirt-users] InClusterUpgrade Scheduling Policy

2016-06-23 Thread Scott
Hello list,

I'm trying to upgrade a self-hosted engine RHEV environment running 3.5/el6
to 3.6/el7.  I'm following the process outlined in these two documents:

https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Virtualization/3.6/html/Self-Hosted_Engine_Guide/Upgrading_the_Self-Hosted_Engine_from_6_to_7.html
https://access.redhat.com/solutions/2300331

The problem I'm having is I don't seem to be able to apply the
"InClusterUpgrade" policy (procedure 5.5, step 4).  I get the following
error:

Can not start cluster upgrade mode, see below for details:
VM HostedEngine with id 5ca9cb38-82e5-4eea-8ff6-e2bc33598211 is configured
to be not migratable.

But the HostedEngine VM is not one I can edit due to being mid-upgrade.
And even if I could, the setting it's complaining about can't be managed by
the engine (I tried in another RHEV instance).

Is this a bug?  What am I missing to be able to move on?  As it seems now,
the InClusterUpgrade scheduling policy is useless and can't actually be
used.

Thanks for any suggestions/help,
Scott
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] vds.dispatcher ERROR SSL in ovirt 4.0

2016-06-23 Thread Piotr Kliczewski
Please share the engine log.

On Thu, Jun 23, 2016 at 8:07 PM, Claude Durocher
 wrote:
> I did a complete reinstall of ovirt 4.0 (with hosted engine appliance) and
> the error is there with a single host after minimum configuration (add a
> single nfs storage domain).
>
> The engine.log file doesn't contain any irregularities.
>
>
>
> On Wednesday, June 22, 2016 17:18 EDT, "Claude Durocher"
>  wrote:
>
>
>
>
> Here's a more complete log of vdsm with the error :
>
> https://drive.google.com/file/d/0B1CFwOEG9nMtcTR1Y3VWYjdJMnM/view?usp=sharing
>
> I inserted a few blank lines to highlight the errors.
>
>
>
>
> On Friday, June 10, 2016 04:57 EDT, Piotr Kliczewski
>  wrote:
>
>
> Claude,
>
> Please look for "ProtocolDetector.AcceptorImpl::(handle_accept)
> Accepting connection from ". The last part of this log message
> contains peername.
> It should help to understand which client is connecting.
>
> From the message I see that the client is disconnecting and as a
> result we get: 'Connection reset by peer'
>
> Please let us know about your findings.
>
> Thanks,
> Piotr
>
> On Wed, Jun 8, 2016 at 11:49 PM, Claude Durocher
>  wrote:
>> I'm testing ovirt 4.0 rc1 on centos 7 (hosted engine on nfs). Every 15
>> seconds or so, I receive the following error:
>>
>> journal: vdsm vds.dispatcher ERROR SSL error during reading data: (104,
>> 'Connection reset by peer')
>>
>> Any ideas on how to debug this?
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>
>
>
>
>
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Failed to start self hosted engine after upgrading oVirt to 4.0

2016-06-23 Thread Michal Skrivanek

> On 23 Jun 2016, at 18:36, Stefano Danzi  wrote:
> 
> 
> Hi!
> I've just upgraded oVirt from 3.6 to 4.0 and I'm not able to start the self 
> hosted engine.
> 
> first thing is that the host network loses the default gateway configuration. 
> But this is not the problem.
> 
> Logs:
> 
> ==> /var/log/ovirt-hosted-engine-ha/agent.log <==
> MainThread::INFO::2016-06-23 
> 18:28:40,833::hosted_engine::669::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_storage_images)
>  Reloading vm.conf from the shared storage domain
> MainThread::INFO::2016-06-23 
> 18:28:40,833::config::206::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config::(refresh_local_conf_file)
>  Trying to get a fresher copy of vm configuration from the OVF_STORE
> MainThread::INFO::2016-06-23 
> 18:28:44,535::ovf_store::102::ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore::(scan)
>  Found OVF_STORE: imgUUID:8d07965c-a5c4-4057-912d-901f80cf246c, 
> volUUID:ce3aa63e-e1c4-498e-bdca-9d2e9f47f0f9
> MainThread::INFO::2016-06-23 
> 18:28:44,582::ovf_store::102::ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore::(scan)
>  Found OVF_STORE: imgUUID:bd9aaf0b-8435-4d78-9871-8c7a7f7fa02a, 
> volUUID:3c477b06-063e-4f01-bd05-84c7d467742b
> MainThread::INFO::2016-06-23 
> 18:28:44,674::ovf_store::111::ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore::(getEngineVMOVF)
>  Extracting Engine VM OVF from the OVF_STORE
> MainThread::INFO::2016-06-23 
> 18:28:44,675::ovf_store::118::ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore::(getEngineVMOVF)
>  OVF_STORE volume path: 
> /rhev/data-center/mnt/ovirt01.hawai.lan:_engine/46f55a31-f35f-465c-b3e2-df45c05e06a7/images/bd9aaf0b-8435-4d78-9871-8c7a7f7fa02a/3c477b06-063e-4f01-bd05-84c7d467742b
> MainThread::INFO::2016-06-23 
> 18:28:44,682::config::226::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config::(refresh_local_conf_file)
>  Found an OVF for HE VM, trying to convert
> MainThread::INFO::2016-06-23 
> 18:28:44,684::config::231::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config::(refresh_local_conf_file)
>  Got vm.conf from OVF_STORE
> MainThread::INFO::2016-06-23 
> 18:28:44,684::hosted_engine::517::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_broker)
>  Initializing ha-broker connection
> MainThread::INFO::2016-06-23 
> 18:28:44,685::brokerlink::129::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(start_monitor)
>  Starting monitor ping, options {'addr': '192.168.1.254'}
> 
> ==> /var/log/ovirt-hosted-engine-ha/broker.log <==
> Thread-25::ERROR::2016-06-23 
> 18:28:44,697::listener::182::ovirt_hosted_engine_ha.broker.listener.ConnectionHandler::(handle)
>  Error while serving connection
> Traceback (most recent call last):
>  File 
> "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/broker/listener.py", 
> line 166, in handle
>data)
>  File 
> "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/broker/listener.py", 
> line 299, in _dispatch
>.set_storage_domain(client, sd_type, **options)
>  File 
> "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/broker/storage_broker.py",
>  line 66, in set_storage_domain
>self._backends[client].connect()
>  File 
> "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/storage_backends.py",
>  line 400, in connect
>volUUID=volume.volume_uuid
>  File 
> "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/storage_backends.py",
>  line 245, in _get_volume_path
>volUUID
>  File "/usr/lib64/python2.7/xmlrpclib.py", line 1233, in __call__
>return self.__send(self.__name, args)
>  File "/usr/lib64/python2.7/xmlrpclib.py", line 1587, in __request
>verbose=self.__verbose
>  File "/usr/lib64/python2.7/xmlrpclib.py", line 1273, in request
>return self.single_request(host, handler, request_body, verbose)
>  File "/usr/lib64/python2.7/xmlrpclib.py", line 1301, in single_request
>self.send_content(h, request_body)
>  File "/usr/lib64/python2.7/xmlrpclib.py", line 1448, in send_content
>connection.endheaders(request_body)
>  File "/usr/lib64/python2.7/httplib.py", line 975, in endheaders
>self._send_output(message_body)
>  File "/usr/lib64/python2.7/httplib.py", line 835, in _send_output
>self.send(msg)
>  File "/usr/lib64/python2.7/httplib.py", line 797, in send
>self.connect()
>  File "/usr/lib/python2.7/site-packages/vdsm/m2cutils.py", line 203, in 
> connect
>sock = socket.create_connection((self.host, self.port), self.timeout)
>  File "/usr/lib64/python2.7/socket.py", line 571, in create_connection
>raise err
> error: [Errno 101] Network is unreachable
> 
> ==> /var/log/ovirt-hosted-engine-ha/agent.log <==
> MainThread::INFO::2016-06-23 
> 18:28:44,697::hosted_engine::602::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_broker)
>  Failed set the storage domain: 'Failed to set storage domain VdsmBackend, 
> options {'hosted-engine.lockspace': 
> 

Re: [ovirt-users] vds.dispatcher ERROR SSL in ovirt 4.0

2016-06-23 Thread Claude Durocher

I did a complete reinstall of ovirt 4.0 (with hosted engine appliance) and the 
error is there with a single host after minimum configuration (add a single nfs 
storage domain).

The engine.log file doesn't contain any irregularities.


On Wednesday, June 22, 2016 17:18 EDT, "Claude Durocher" 
 wrote:
  Here's a more complete log of vdsm with the error :

https://drive.google.com/file/d/0B1CFwOEG9nMtcTR1Y3VWYjdJMnM/view?usp=sharing

I inserted a few blank lines to highlight the errors.




On Friday, June 10, 2016 04:57 EDT, Piotr Kliczewski 
 wrote:
 Claude,

Please look for "ProtocolDetector.AcceptorImpl::(handle_accept)
Accepting connection from ". The last part of this log message
contains peername.
It should help to understand which client is connecting.

From the message I see that the client is disconnecting and as a
result we get: 'Connection reset by peer'

Please let us know about your findings.

Thanks,
Piotr

On Wed, Jun 8, 2016 at 11:49 PM, Claude Durocher
 wrote:
> I'm testing ovirt 4.0 rc1 on centos 7 (hosted engine on nfs). Every 15
> seconds or so, I receive the following error:
>
> journal: vdsm vds.dispatcher ERROR SSL error during reading data: (104,
> 'Connection reset by peer')
>
> Any ideas on how to debug this?
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>

 

 
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Failed to start self hosted engine after upgrading oVirt to 4.0

2016-06-23 Thread Stefano Danzi


Hi!
I've just upgraded oVirt from 3.6 to 4.0 and I'm not able to start the self 
hosted engine.

first thing is that the host network loses the default gateway configuration. But 
this is not the problem.

Logs:

==> /var/log/ovirt-hosted-engine-ha/agent.log <==
MainThread::INFO::2016-06-23 
18:28:40,833::hosted_engine::669::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_storage_images)
 Reloading vm.conf from the shared storage domain
MainThread::INFO::2016-06-23 
18:28:40,833::config::206::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config::(refresh_local_conf_file)
 Trying to get a fresher copy of vm configuration from the OVF_STORE
MainThread::INFO::2016-06-23 
18:28:44,535::ovf_store::102::ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore::(scan)
 Found OVF_STORE: imgUUID:8d07965c-a5c4-4057-912d-901f80cf246c, 
volUUID:ce3aa63e-e1c4-498e-bdca-9d2e9f47f0f9
MainThread::INFO::2016-06-23 
18:28:44,582::ovf_store::102::ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore::(scan)
 Found OVF_STORE: imgUUID:bd9aaf0b-8435-4d78-9871-8c7a7f7fa02a, 
volUUID:3c477b06-063e-4f01-bd05-84c7d467742b
MainThread::INFO::2016-06-23 
18:28:44,674::ovf_store::111::ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore::(getEngineVMOVF)
 Extracting Engine VM OVF from the OVF_STORE
MainThread::INFO::2016-06-23 
18:28:44,675::ovf_store::118::ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore::(getEngineVMOVF)
 OVF_STORE volume path: 
/rhev/data-center/mnt/ovirt01.hawai.lan:_engine/46f55a31-f35f-465c-b3e2-df45c05e06a7/images/bd9aaf0b-8435-4d78-9871-8c7a7f7fa02a/3c477b06-063e-4f01-bd05-84c7d467742b
MainThread::INFO::2016-06-23 
18:28:44,682::config::226::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config::(refresh_local_conf_file)
 Found an OVF for HE VM, trying to convert
MainThread::INFO::2016-06-23 
18:28:44,684::config::231::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config::(refresh_local_conf_file)
 Got vm.conf from OVF_STORE
MainThread::INFO::2016-06-23 
18:28:44,684::hosted_engine::517::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_broker)
 Initializing ha-broker connection
MainThread::INFO::2016-06-23 
18:28:44,685::brokerlink::129::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(start_monitor)
 Starting monitor ping, options {'addr': '192.168.1.254'}

==> /var/log/ovirt-hosted-engine-ha/broker.log <==
Thread-25::ERROR::2016-06-23 
18:28:44,697::listener::182::ovirt_hosted_engine_ha.broker.listener.ConnectionHandler::(handle)
 Error while serving connection
Traceback (most recent call last):
  File 
"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/broker/listener.py", 
line 166, in handle
data)
  File 
"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/broker/listener.py", 
line 299, in _dispatch
.set_storage_domain(client, sd_type, **options)
  File 
"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/broker/storage_broker.py",
 line 66, in set_storage_domain
self._backends[client].connect()
  File 
"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/storage_backends.py",
 line 400, in connect
volUUID=volume.volume_uuid
  File 
"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/storage_backends.py",
 line 245, in _get_volume_path
volUUID
  File "/usr/lib64/python2.7/xmlrpclib.py", line 1233, in __call__
return self.__send(self.__name, args)
  File "/usr/lib64/python2.7/xmlrpclib.py", line 1587, in __request
verbose=self.__verbose
  File "/usr/lib64/python2.7/xmlrpclib.py", line 1273, in request
return self.single_request(host, handler, request_body, verbose)
  File "/usr/lib64/python2.7/xmlrpclib.py", line 1301, in single_request
self.send_content(h, request_body)
  File "/usr/lib64/python2.7/xmlrpclib.py", line 1448, in send_content
connection.endheaders(request_body)
  File "/usr/lib64/python2.7/httplib.py", line 975, in endheaders
self._send_output(message_body)
  File "/usr/lib64/python2.7/httplib.py", line 835, in _send_output
self.send(msg)
  File "/usr/lib64/python2.7/httplib.py", line 797, in send
self.connect()
  File "/usr/lib/python2.7/site-packages/vdsm/m2cutils.py", line 203, in connect
sock = socket.create_connection((self.host, self.port), self.timeout)
  File "/usr/lib64/python2.7/socket.py", line 571, in create_connection
raise err
error: [Errno 101] Network is unreachable

==> /var/log/ovirt-hosted-engine-ha/agent.log <==
MainThread::INFO::2016-06-23 
18:28:44,697::hosted_engine::602::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_broker)
 Failed set the storage domain: 'Failed to set storage domain VdsmBackend, options 
{'hosted-engine.lockspace': 
'7B22696D6167655F75756964223A202265663131373139322D623564662D346534362D383939622D6262663862663135222C202270617468223A206E756C6C2C2022766F6C756D655F75756964223A202230613363393433652D633032392D343134372D623364342D396366353364663161356262227D',
 'sp_uuid': 

[ovirt-users] New oVirt-Live (4.0.0) is available for download

2016-06-23 Thread Lev Veyde
Hi,

The new oVirt-Live 4.0.0 is available for download.

You can download it from:
http://plain.resources.ovirt.org/pub/ovirt-4.0/iso/ovirt-live/ovirt-live-4.0.0.iso

Thanks in advance,
Lev Veyde.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] [ANN] oVirt 4.0.0 Final Release is now available

2016-06-23 Thread Sandro Bonazzola
The oVirt Project is pleased to announce today the general availability of
oVirt 4.0.0.

This release is available now for:
* Fedora 23 (tech preview)
* Red Hat Enterprise Linux 7.2 or later
* CentOS Linux (or similar) 7.2 or later

This release supports Hypervisor Hosts running:
* Red Hat Enterprise Linux 7.2 or later
* CentOS Linux (or similar) 7.2 or later
* Fedora 23 (tech preview)
* oVirt Next Generation Node 4.0

Please take a look at our community page[1] to know how to ask questions
and interact with developers and users.
All issues or bugs should be reported via oVirt Bugzilla[2].

See the release notes [3] for installation / upgrade instructions and a
list of new features and bugs fixed.

Notes:
* A new oVirt Live ISO will be available soon. [4]
* A new oVirt Next Generation Node will be available soon [4].
* A new oVirt Engine Appliance will be available soon.
* A new oVirt Guest Tools ISO is already available [4].
* Mirrors[5] might need up to one day to synchronize.

Additional Resources:
* Read more about the oVirt 4.0.0 release highlights:
http://www.ovirt.org/release/4.0.0/
* Get more oVirt Project updates on Twitter: https://twitter.com/ovirt
* Check out the latest project news on the oVirt blog:
http://www.ovirt.org/blog/

[1] https://www.ovirt.org/community/
[2] https://bugzilla.redhat.com/enter_bug.cgi?classification=oVirt
[3] http://www.ovirt.org/release/4.0.0/
[4] http://resources.ovirt.org/pub/ovirt-4.0/iso/
[5] http://www.ovirt.org/Repository_mirrors#Current_mirrors


-- 
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Network Interface order changed after reboot

2016-06-23 Thread ovirt

Hi List,

I have two nodes (running CentOS 7) and the network interface order 
changes for some interfaces after every reboot.


The configurations are done through the oVirt GUI. So the ifcfg-ethX 
scripts are configured automatically by VDSM.


Is there any option to get this configured to be stable?

Best regards and thank you

Christoph

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Image format after Live Storage Migration

2016-06-23 Thread Raz Tamir
Hi all,
Before 4.0, when we performed a Live Storage Migration on a VM disk we
expected the disk format to become qcow because of the auto-generated
snapshot created as part of the live storage migration process.
From 4.0 this process has changed: after the disk migration finishes,
the auto-generated snapshot is removed (Live Merge), so the expectation
has also changed and the format should be the original format of the disk.
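
For anyone who wants to check this on a host, a rough way is qemu-img (the
image path below is only a placeholder for the volume of the migrated disk
under /rhev/data-center/...):

   qemu-img info /rhev/data-center/.../images/<image-uuid>/<volume-uuid>
   # the "file format:" line should show raw or qcow2, matching the
   # original format of the disk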


--

Thanks,
Raz Tamir
Red Hat Israel
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] oVirt 3.6.7 Fifth Release Candidate is now available for testing

2016-06-23 Thread Sven Kieske
On 22/06/16 18:19, Yaniv Kaul wrote:
>> Hi Rafael,
>> >
>> > do you have an ETA for the version?
>> >
>> > The USB problem is a bit of a problem for me.
>> > Or is it possible to go to the release candidate now and later to the
>> > final version?
>> >
> Yes.
> Y.
> 
> 

Well I had some problems in my test lab upgrading from
3.6.6rcX to 3.6.6, so I would not really advise upgrading
from RC to release, but ymmv (I hit a bug with vdsm communication
to engine).

HTH

-- 
Mit freundlichen Grüßen / Regards

Sven Kieske

Systemadministrator
Mittwald CM Service GmbH & Co. KG
Königsberger Straße 6
32339 Espelkamp
T: +495772 293100
F: +495772 29
https://www.mittwald.de
Geschäftsführer: Robert Meyer
St.Nr.: 331/5721/1033, USt-IdNr.: DE814773217, HRA 6640, AG Bad Oeynhausen
Komplementärin: Robert Meyer Verwaltungs GmbH, HRB 13260, AG Bad Oeynhausen



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Will not communicate with VCenter

2016-06-23 Thread Shahar Havivi
On 22.06.16 09:57, JC Clark wrote:
> I am trying to establish a VCenter as an External Provider.  When I try
> to "Test" the connection, the oVirt error message says that I have "failed
> to communicate".
There is a fixed upstream bug for that:
https://bugzilla.redhat.com/show_bug.cgi?id=1293591

As a workaround you need to select the oVirt data center and not the default "Any
Data Center" (it's in the upper gray panel).
 
 Shahar.

> 
> The vCenter logs clearly show the oVirt administrator logging in to and out of
> the vCenter instance.
> 
> And the final error message says:
>   
> Jun 22, 2016 9:49:06 AM
>   
>   
> Provider VMWare was updated. (User: admin@internal)
>   
> Jun 22, 2016 9:47:01 AM
>   
>   
> Failed to retrieve VMs information from external server
> vpx://vcenter.mcsaipan.net/MCS1/vmware.mcsaipan.net?no_verify=1
> 
> I am using oVirt 3.6.6
> Vcenter Server Appliance
> no Cluster
> 
> 
> Uncaught exception occurred. Please try reloading
> the page. Details: Exception
> caught: Exception caught: (TypeError) __gwt$exception: : Cannot
> read property 'f' of null
> Please have your administrator check the UI logs
> 
> This guy pops up when I try to connect.
> 
> Anyway just looking for a clue. Any suggestions are welcome.
> 
> Thanks

> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] new version of qemu

2016-06-23 Thread Michal Skrivanek

> On 23 Jun 2016, at 04:56, qinglong.d...@horebdata.cn wrote:
> 
> Hi, all
> I have found that the latest version of qemu has been updated to 2.6. 
> In the latest version it supports a new virtio-gpu device which supprts 
> accelerated 2D and 3D. Qemu 2.3 is used in ovirt 3.6 for now. So I wonder 
> when the version of qemu will be updated in ovirt or if I can update the 
> version of qemu in ovirt myself.

Once this version finds its way into RHEL and CentOS, which should happen in 
the next update (7.3) later this year.
It doesn't mean oVirt will support it out of the box; as virtio-gpu is not 
quite ready for prime time yet, it's unlikely it will be supported/ready in 
oVirt 4.1. However, qemu 2.6 will be there.

Thanks,
michal

> Anyone can help? Thanks!
> 
> ___
> Users mailing list
> Users@ovirt.org 
> http://lists.ovirt.org/mailman/listinfo/users 
> 
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] vds.dispatcher ERROR SSL in ovirt 4.0

2016-06-23 Thread Piotr Kliczewski
Please check your engine/engine.log why it is attempting to connect
every monitoring cycle.
Is the host in 'NonResponsive' state?
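
For reference, a rough way to find the peer mentioned in the quoted advice
below, assuming the default vdsm log location:

   grep "Accepting connection from" /var/log/vdsm/vdsm.log | tail -20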

On Wed, Jun 22, 2016 at 11:18 PM, Claude Durocher
 wrote:
> Here's a more complete log of vdsm with the error :
>
> https://drive.google.com/file/d/0B1CFwOEG9nMtcTR1Y3VWYjdJMnM/view?usp=sharing
>
> I inserted a few blank lines to highlight the errors.
>
>
>
>
> On Friday, June 10, 2016 04:57 EDT, Piotr Kliczewski
>  wrote:
>
>
> Claude,
>
> Please look for "ProtocolDetector.AcceptorImpl::(handle_accept)
> Accepting connection from ". The last part of this log message
> contains peername.
> It should help to understand which client is connecting.
>
> From the message I see that the client is disconnecting and as a
> result we get: 'Connection reset by peer'
>
> Please let us know about your findings.
>
> Thanks,
> Piotr
>
> On Wed, Jun 8, 2016 at 11:49 PM, Claude Durocher
>  wrote:
>> I'm testing ovirt 4.0 rc1 on centos 7 (hosted engine on nfs). Every 15
>> seconds or so, I receive the following error:
>>
>> journal: vdsm vds.dispatcher ERROR SSL error during reading data: (104,
>> 'Connection reset by peer')
>>
>> Any ideas on how to debug this?
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>
>
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users