Re: [ovirt-devel] [ovirt-users] [ANN] oVirt 4.2.1 Second Release Candidate is now available

2018-01-19 Thread Sandro Bonazzola
2018-01-19 13:02 GMT+01:00 Gabriel Stein :

> When will the official 4.2.1 be released? I'm looking forward to the gateway
> bugfix (BZ 1528906), but I will wait for it...
>
>
We are tentatively targeting January 30th, but it will depend on any blockers
or regressions discovered while testing this release candidate.
Helping to test the release candidates will translate into a more stable GA.
Since we have already discovered a regression in a disaster recovery flow, we
are now planning another (and hopefully final) release candidate next week.



> Best Regards,
>
> Gabriel
>
> Gabriel Stein
> --
> Gabriel Ferraz Stein
> Tel.: +49 (0)  170 2881531
>
> 2018-01-19 12:53 GMT+01:00 Sandro Bonazzola :
>
>> The oVirt Project is pleased to announce the availability of the oVirt
>> 4.2.1 Second Release Candidate, as of January 18th, 2018.
>>
>> This update is a release candidate of the second in a series of
>> stabilization updates to the 4.2 series.
>> This is pre-release software. This pre-release should not be used in
>> production.
>>
>> [WARNING] Right after we finished composing the release candidate, we
>> discovered a regression in a disaster recovery flow causing wrong MAC
>> addresses to be assigned to re-imported VMs.
>>
>> This release is available now for:
>> * Red Hat Enterprise Linux 7.4 or later
>> * CentOS Linux (or similar) 7.4 or later
>>
>> This release supports Hypervisor Hosts running:
>> * Red Hat Enterprise Linux 7.4 or later
>> * CentOS Linux (or similar) 7.4 or later
>> * oVirt Node 4.2
>>
>> See the release notes [1] for installation / upgrade instructions and
>> a list of new features and bugs fixed.
>>
>> Notes:
>> - oVirt Appliance is already available
>> - oVirt Node will be available soon [2]
>>
>> Additional Resources:
>> * Read more about the oVirt 4.2.1 release highlights:
>> http://www.ovirt.org/release/4.2.1/
>> * Get more oVirt Project updates on Twitter: https://twitter.com/ovirt
>> * Check out the latest project news on the oVirt blog:
>> http://www.ovirt.org/blog/
>>
>> [1] http://www.ovirt.org/release/4.2.1/
>> [2] http://resources.ovirt.org/pub/ovirt-4.2-pre/iso/
>>
>> --
>>
>> SANDRO BONAZZOLA
>>
>> ASSOCIATE MANAGER, SOFTWARE ENGINEERING, EMEA ENG VIRTUALIZATION R
>>
>> Red Hat EMEA 
>> 
>> TRIED. TESTED. TRUSTED. 
>>
>>
>


-- 

SANDRO BONAZZOLA

ASSOCIATE MANAGER, SOFTWARE ENGINEERING, EMEA ENG VIRTUALIZATION R

Red Hat EMEA 

TRIED. TESTED. TRUSTED. 

Re: [ovirt-devel] Subject: [ OST Failure Report ] [ oVirt Master ] [ Jan 15th 2018 ] [ 006_migrations.migrate_vm ]

2018-01-19 Thread Arik Hadas
On Fri, Jan 19, 2018 at 12:46 PM, Michal Skrivanek <
michal.skriva...@redhat.com> wrote:

>
>
> On 18 Jan 2018, at 17:36, Arik Hadas  wrote:
>
>
>
> On Wed, Jan 17, 2018 at 9:41 PM, Milan Zamazal 
> wrote:
>
>> Dafna Ron  writes:
>>
>> > We had a failure in test 006_migrations.migrate_vm
>> > > er/4842/testReport/junit/%28root%29/006_migrations/migrate_vm/>.
>> >
>> > the migration failed with reason "VMExists"
>>
>> There are two migrations in 006_migrations.migrate_vm.  The first one
>> succeeded, but if I'm looking correctly into the logs, Engine didn't
>> send Destroy to the source host after the migration had finished.  Then
>> the second migration gets rejected by Vdsm, because Vdsm still keeps the
>> former Vm object instance in Down status.
>>
>> Since the test succeeds most of the time, it looks like some timing
>> issue or border case.  Arik, is it a known problem?  If not, would you
>> like to look into the logs, whether you can see what's happening?
>
>
> Your analysis is correct. That's a nice one actually!
>
> The statistics monitoring cycles of both hosts host-0 and host-1 were
> scheduled in a way that they are executed almost at the same time [1].
>
> Now, at 6:46:34 the VM was migrated from host-1 to host-0.
> At 6:46:42 the migration succeeded - we got events from both hosts, but
> only processed the one from the destination so the VM switched to Up.
> The next statistics monitoring cycle was triggered at 6:46:44 - again, the
> report of that VM from the source host was skipped because we processed the
> one from the destination.
> At 6:46:59, in the next statistics monitoring cycle, it happened again -
> the report of the VM from the source host was skipped.
> The next migration was triggered at 6:47:05 - the engine didn't manage to
> process any report from the source host, so the VM remained Down there.
>
> The probability of this happening is extremely low.
>
>
> Why wasn't the migration rerun?
>

Good question. It wasn't rerun because a migration to a particular host
(MigrateVmToServer) was requested.
In this particular case it seems that only two hosts are defined, so changing
it to MigrateVm wouldn't have made any difference anyway.
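For anyone less familiar with the distinction: as far as I can tell, from the
client side the two flows differ only in whether a destination host is passed
to the migrate action. A purely hypothetical illustration with the oVirt
Python SDK (ovirtsdk4); the URL, credentials and object names below are made
up, not taken from the OST suite:

import ovirtsdk4 as sdk
import ovirtsdk4.types as types

# Hypothetical connection details, for illustration only.
connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='secret',
    insecure=True,
)
system = connection.system_service()
vms_service = system.vms_service()
hosts_service = system.hosts_service()

vm = vms_service.list(search='name=vm0')[0]
dst = hosts_service.list(search='name=host-0')[0]
vm_service = vms_service.vm_service(vm.id)

# Pinned migration (engine-side MigrateVmToServer): the destination is fixed,
# so if it is rejected there is no other host for the engine to retry on.
vm_service.migrate(host=types.Host(id=dst.id))

# Unpinned migration (engine-side MigrateVm): the scheduler chooses the
# destination and could, in principle, pick a different host on a rerun.
# vm_service.migrate()

connection.close()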


>
> However, I think we can make a little tweak to the monitoring code to
> avoid this:
> "If we get the VM as Down on an unexpected host (that is, not the host we
> expect the VM to run on), do not lock the VM"
> It should be safe since we don't update anything in this scenario.
>
> [1] For instance:
> 2018-01-15 06:46:44,905-05 ... GetAllVmStatsVDSCommand ... VdsIdVDSCommandParametersBase:{hostId='873a4d36-55fe-4be1-acb7-8de9c9123eb2'})
> 2018-01-15 06:46:44,932-05 ... GetAllVmStatsVDSCommand ... VdsIdVDSCommandParametersBase:{hostId='31f09289-ec6c-42ff-a745-e82e8ac8e6b9'})
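
To make the proposal above concrete, the guard amounts to roughly the
following. This is Python-style pseudocode only; the actual monitoring code
in the engine is Java, and the names here are illustrative rather than real
engine classes or fields:

def should_lock_vm(reported_status, reporting_host_id, vm_dynamic):
    # run_on_vds: the host the engine currently expects the VM to run on
    # (illustrative name, mirroring the vm_dynamic DB column).
    on_unexpected_host = reporting_host_id != vm_dynamic.run_on_vds
    if reported_status == 'Down' and on_unexpected_host:
        # We don't update anything in this scenario, so it should be safe
        # not to take the per-VM lock for this report.
        return False
    return True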

[ovirt-devel] [ANN] oVirt 4.2.1 Second Release Candidate is now available

2018-01-19 Thread Sandro Bonazzola
The oVirt Project is pleased to announce the availability of the oVirt
4.2.1 Second Release Candidate, as of January 18th, 2018.

This update is a release candidate of the second in a series of
stabilization updates to the 4.2 series.
This is pre-release software. This pre-release should not be used in
production.

[WARNING] Right after we finished composing the release candidate, we
discovered a regression in a disaster recovery flow causing wrong MAC
addresses to be assigned to re-imported VMs.

This release is available now for:
* Red Hat Enterprise Linux 7.4 or later
* CentOS Linux (or similar) 7.4 or later

This release supports Hypervisor Hosts running:
* Red Hat Enterprise Linux 7.4 or later
* CentOS Linux (or similar) 7.4 or later
* oVirt Node 4.2

See the release notes [1] for installation / upgrade instructions and
a list of new features and bugs fixed.

Notes:
- oVirt Appliance is already available
- oVirt Node will be available soon [2]

Additional Resources:
* Read more about the oVirt 4.2.1 release highlights:
http://www.ovirt.org/release/4.2.1/
* Get more oVirt Project updates on Twitter: https://twitter.com/ovirt
* Check out the latest project news on the oVirt blog:
http://www.ovirt.org/blog/

[1] http://www.ovirt.org/release/4.2.1/
[2] http://resources.ovirt.org/pub/ovirt-4.2-pre/iso/

-- 

SANDRO BONAZZOLA

ASSOCIATE MANAGER, SOFTWARE ENGINEERING, EMEA ENG VIRTUALIZATION R

Red Hat EMEA 

TRIED. TESTED. TRUSTED. 

[ovirt-devel] [ OST Failure Report ] [ oVirt hc master ] [ 19-01-2018 ] [ 002_bootstrap.add_hosts ]

2018-01-19 Thread Dafna Ron
Hi,

we are failing hc master basic suite on test: 002_bootstrap.add_hosts

Link and headline of suspected patches:

Link to Job:
http://jenkins.ovirt.org/job/ovirt-system-tests_hc-basic-suite-master/163/

Link to all logs:
http://jenkins.ovirt.org/job/ovirt-system-tests_hc-basic-suite-master/163/artifact/

(Relevant) error snippet from the log:

2018-01-18 22:30:56,141-05 ERROR
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(VdsDeploy) [3e58f8ce] EVENT_ID: VDS_INSTALL_IN_PROGRESS_ERROR(511), An
error has occurred during installation of Host lago_basic_suite_hc_host0:
Failed to execute stage 'Closing up': 'Plugin' object has no attribute
'exist'

Re: [ovirt-devel] [ovirt-users] [ANN] oVirt 4.2.1 Second Release Candidate is now available

2018-01-19 Thread Gabriel Stein
When will the official 4.2.1 be released? I'm looking forward to the gateway
bugfix (BZ 1528906), but I will wait for it...

Best Regards,

Gabriel

Gabriel Stein
--
Gabriel Ferraz Stein
Tel.: +49 (0)  170 2881531

2018-01-19 12:53 GMT+01:00 Sandro Bonazzola :

> The oVirt Project is pleased to announce the availability of the oVirt
> 4.2.1 Second Release Candidate, as of January 18th, 2018.
>
> This update is a release candidate of the second in a series of
> stabilization updates to the 4.2 series.
> This is pre-release software. This pre-release should not be used in
> production.
>
> [WARNING] Right after we finished composing the release candidate, we
> discovered a regression in a disaster recovery flow causing wrong MAC
> addresses to be assigned to re-imported VMs.
>
> This release is available now for:
> * Red Hat Enterprise Linux 7.4 or later
> * CentOS Linux (or similar) 7.4 or later
>
> This release supports Hypervisor Hosts running:
> * Red Hat Enterprise Linux 7.4 or later
> * CentOS Linux (or similar) 7.4 or later
> * oVirt Node 4.2
>
> See the release notes [1] for installation / upgrade instructions and
> a list of new features and bugs fixed.
>
> Notes:
> - oVirt Appliance is already available
> - oVirt Node will be available soon [2]
>
> Additional Resources:
> * Read more about the oVirt 4.2.1 release highlights:
> http://www.ovirt.org/release/4.2.1/
> * Get more oVirt Project updates on Twitter: https://twitter.com/ovirt
> * Check out the latest project news on the oVirt blog:
> http://www.ovirt.org/blog/
>
> [1] http://www.ovirt.org/release/4.2.1/
> [2] http://resources.ovirt.org/pub/ovirt-4.2-pre/iso/
>
> --
>
> SANDRO BONAZZOLA
>
> ASSOCIATE MANAGER, SOFTWARE ENGINEERING, EMEA ENG VIRTUALIZATION R
>
> Red Hat EMEA 
> 
> TRIED. TESTED. TRUSTED. 
>
>

Re: [ovirt-devel] [ OST Failure Report ] [ oVirt hc master ] [ 19-01-2018 ] [ 002_bootstrap.add_hosts ]

2018-01-19 Thread Yaniv Kaul
On Fri, Jan 19, 2018 at 5:06 PM, Dafna Ron  wrote:

> Hi,
>
> we are failing hc master basic suite on test: 002_bootstrap.add_hosts
>
> Link and headline of suspected patches:
>
> Link to Job:
> http://jenkins.ovirt.org/job/ovirt-system-tests_hc-basic-suite-master/163/
>
> Link to all logs:
> http://jenkins.ovirt.org/job/ovirt-system-tests_hc-basic-suite-master/163/artifact/
>
> (Relevant) error snippet from the log:
>
> 2018-01-18 22:30:56,141-05 ERROR
> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> (VdsDeploy) [3e58f8ce] EVENT_ID: VDS_INSTALL_IN_PROGRESS_ERROR(511), An
> error has occurred during installation of Host lago_basic_suite_hc_host0:
> Failed to execute stage 'Closing up': 'Plugin' object has no attribute
> 'exist'

Dafna,
The relevant log is [1], which shows:

2018-01-18 22:49:25,385-0500 DEBUG otopi.plugins.otopi.services.systemd
plugin.execute:921 execute-output: ('/usr/bin/systemctl', 'start',
'glusterd.service') stdout:
2018-01-18 22:49:25,385-0500 DEBUG otopi.plugins.otopi.services.systemd
plugin.execute:926 execute-output: ('/usr/bin/systemctl', 'start',
'glusterd.service') stderr:
2018-01-18 22:49:25,385-0500 DEBUG otopi.context context._executeMethod:143
method exception
Traceback (most recent call last):
  File "/tmp/ovirt-xJomKMYufQ/pythonlib/otopi/context.py", line 133, in
_executeMethod
method['method']()
  File
"/tmp/ovirt-xJomKMYufQ/otopi-plugins/ovirt-host-deploy/gluster/packages.py",
line 95, in _closeup
if self.services.exist('glustereventsd'):
AttributeError: 'Plugin' object has no attribute 'exist'
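
Looks like a plain typo in the gluster closeup code of ovirt-host-deploy: if
I remember correctly, the otopi services interface exposes exists(), not
exist(). Assuming that's the case, the fix is presumably a one-liner along
these lines (untested sketch, not the actual patch):

# gluster/packages.py, _closeup() - untested sketch, assuming the otopi
# services plugin exposes exists(name=...):
#   before: if self.services.exist('glustereventsd'):
if self.services.exists(name='glustereventsd'):
    ...  # rest of the closeup logic, unchanged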


Y.

[1]
http://jenkins.ovirt.org/job/ovirt-system-tests_hc-basic-suite-master/163/artifact/exported-artifacts/test_logs/hc-basic-suite-master/post-002_bootstrap.py/lago-hc-basic-suite-master-engine/_var_log/ovirt-engine/host-deploy/ovirt-host-deploy-20180118224925-192.168.200.4-7bbdac84.log



Re: [ovirt-devel] Subject: [ OST Failure Report ] [ oVirt Master ] [ Jan 15th 2018 ] [ 006_migrations.migrate_vm ]

2018-01-19 Thread Michal Skrivanek


> On 18 Jan 2018, at 17:36, Arik Hadas  wrote:
> 
> 
> 
> On Wed, Jan 17, 2018 at 9:41 PM, Milan Zamazal wrote:
> Dafna Ron writes:
>
> > We had a failure in test 006_migrations.migrate_vm.
> >
> > the migration failed with reason "VMExists"
> 
> There are two migrations in 006_migrations.migrate_vm.  The first one
> succeeded, but if I'm looking correctly into the logs, Engine didn't
> send Destroy to the source host after the migration had finished.  Then
> the second migration gets rejected by Vdsm, because Vdsm still keeps the
> former Vm object instance in Down status.
> 
> Since the test succeeds most of the time, it looks like some timing
> issue or border case.  Arik, is it a known problem?  If not, would you
> like to look into the logs, whether you can see what's happening?
> 
> Your analysis is correct. That's a nice one actually!
> 
> The statistics monitoring cycles of both hosts host-0 and host-1 were 
> scheduled in a way that they are executed almost at the same time [1].
> 
> Now, at 6:46:34 the VM was migrated from host-1 to host-0.
> At 6:46:42 the migration succeeded - we got events from both hosts, but only 
> processed the one from the destination so the VM switched to Up.
> The next statistics monitoring cycle was triggered at 6:46:44 - again, the 
> report of that VM from the source host was skipped because we processed the 
> one from the destination.
> At 6:46:59, in the next statistics monitoring cycle, it happened again - the 
> report of the VM from the source host was skipped.
> The next migration was triggered at 6:47:05 - the engine didn't manage to 
> process any report from the source host, so the VM remained Down there. 
> 
> The probability of this happening is extremely low.

Why wasn't the migration rerun?

> However, I think we can make a little tweak to the monitoring code to avoid 
> this:
> "If we get the VM as Down on an unexpected host (that is, not the host we 
> expect the VM to run on), do not lock the VM"
> It should be safe since we don't update anything in this scenario.
>  
> [1] For instance:
> 2018-01-15 06:46:44,905-05 ... GetAllVmStatsVDSCommand ... 
> VdsIdVDSCommandParametersBase:{hostId='873a4d36-55fe-4be1-acb7-8de9c9123eb2'})
> 2018-01-15 06:46:44,932-05 ... GetAllVmStatsVDSCommand ... 
> VdsIdVDSCommandParametersBase:{hostId='31f09289-ec6c-42ff-a745-e82e8ac8e6b9'})