-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

=====================================================================
                   Red Hat Security Advisory

Synopsis:          Moderate: Red Hat Virtualization Manager (RHV) bug fix 3.6.9
Advisory ID:       RHSA-2016:1929-01
Product:           Red Hat Virtualization
Advisory URL:      https://rhn.redhat.com/errata/RHSA-2016-1929.html
Issue date:        2016-09-21
CVE Names:         CVE-2016-4443 
=====================================================================

1. Summary:

An update for org.ovirt.engine-root is now available for RHEV Manager
version 3.6.

Red Hat Product Security has rated this update as having a security impact
of Moderate. A Common Vulnerability Scoring System (CVSS) base score, which
gives a detailed severity rating, is available for each vulnerability from
the CVE link(s) in the References section.

2. Relevant releases/architectures:

RHEV-M 3.6 - noarch

3. Description:

The Red Hat Virtualization Manager is a centralized management platform 
that allows system administrators to view and manage virtual machines. The 
Manager provides a comprehensive range of features including search 
capabilities, resource management, live migrations, and virtual 
infrastructure provisioning.
 
The Manager is a JBoss Application Server application that provides several
interfaces through which the virtual environment can be accessed and 
interacted with, including an Administration Portal, a User Portal, and a 
Representational State Transfer (REST) Application Programming Interface 
(API).

Security Fix(es):

* A flaw was found in RHEV Manager, where it wrote sensitive data to the
engine-setup log file. A local attacker could exploit this flaw to view
sensitive information such as encryption keys and certificates (which could
then be used to steal other sensitive information such as passwords).
(CVE-2016-4443)

This issue was discovered by Simone Tiraboschi (Red Hat).
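Administrators who ran engine-setup before applying this update may wish to check existing setup logs for leaked secrets. A minimal sketch, assuming the default oVirt setup log location (/var/log/ovirt-engine/setup); the function name and search patterns are illustrative, not part of the product:

```shell
# Illustrative helper (hypothetical name and patterns): list setup logs
# in a directory that appear to contain sensitive values.
scan_setup_logs() {
  grep -l -i -E 'password|ENGINE_PKI' "$1"/*.log 2>/dev/null
}

# Example: scan_setup_logs /var/log/ovirt-engine/setup
# Any file listed should be restricted (e.g. chmod 600) and the
# credentials it exposes rotated.
```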

Bug Fix(es):

* With this update, users are warned to set the system into global
maintenance mode before running the engine-setup command, because running
engine-setup without global maintenance mode may cause data corruption. If
the engine is running in a hosted-engine configuration and the system is
not in global maintenance mode, the user is now warned and the setup is
aborted. (BZ#1359844)
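In a hosted-engine deployment, global maintenance is enabled from one of the hosted-engine hosts before running engine-setup and disabled again afterwards. A minimal sketch using the standard hosted-engine CLI (consult the product documentation for the full procedure):

```shell
# On a hosted-engine host: enable global maintenance so the HA agents
# do not act on the engine VM while engine-setup runs.
hosted-engine --set-maintenance --mode=global

# ... run engine-setup on the engine VM ...

# Afterwards, return the HA agents to normal operation:
hosted-engine --set-maintenance --mode=none
```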

* Previously, updating the compatibility version of a cluster with many
running virtual machines that have the guest agent installed could cause a
database deadlock, making the update fail. In some cases, these clusters
could not be upgraded to a newer compatibility version. The deadlock is now
prevented, so a cluster with many running virtual machines that have the
guest agent installed can be upgraded to a newer compatibility version.
(BZ#1369415)

* Previously, editing a virtual machine whose CPU profile ID was stored as
null in the database caused a NullPointerException. Such virtual machines
are now handled correctly and can be edited. (BZ#1373090)

* Previously, setting only one of the thresholds (high or low) for power
saving or evenly distributed memory-based balancing could lead to
unexpected results. For example, under power saving load balancing, if the
threshold for memory over-utilized hosts was set but the threshold for
memory under-utilized hosts was left undefined, the latter defaulted to 0:
every host was then considered under-utilized and chosen as a source for
migration, while no host qualified as a migration destination.

The undefined under-utilization threshold now defaults to Long.MAX_VALUE
instead. When the threshold for memory over-utilized hosts is set and the
under-utilization threshold is undefined, only over-utilized hosts are
selected as sources for migration, and any host that is not over-utilized
can serve as a destination. (BZ#1359767)

* Previously, recently added log messages reporting the number of virtual
machines running on a host were written to the log file excessively. These
messages are now printed only when the number of virtual machines running
on the host changes, reducing their frequency. (BZ#1367519)

4. Solution:

For details on how to apply this update, which includes the changes
described in this advisory, refer to:

https://access.redhat.com/articles/11258
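For RHEV-M, a minor update is typically applied by updating the setup packages and then re-running engine-setup. A sketch of the usual flow, assuming the system is registered to the appropriate channels; the linked article remains the authoritative procedure:

```shell
# Update the setup packages first:
yum update rhevm-setup

# Then run engine-setup to apply the update:
engine-setup
```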

5. Bugs fixed (https://bugzilla.redhat.com/):

1335106 - CVE-2016-4443 org.ovirt.engine-root: engine-setup logs contained 
information for extracting admin password
1346754 - [z-stream clone - 3.6.8] Storage QoS is not applying on a Live VM/disk
1349345 - [downstream clone - 3.6.8] Incorrect behavior of power saving weight 
module
1352462 - [z-stream clone - 3.6.8] Hosted Engine's disk is in Unassigned Status 
in the RHEV UI
1356127 - Can't upgrade to new cluster version when HE VM is running in it
1356483 - HE can't get started if a new vNIC was added with an empty profile.
1358286 - [z-stream clone - 3.6.9] [AAA] Can't add IPA directory users to VM 
permissions
1359767 - [z-stream clone - 3.6.9] All hosts filtered out when memory 
underutilized parameter left out
1359844 - [downstream clone - 3.6.9] engine-setup should warn users running 
within hosted engine to set to maintenance
1360775 - [downstream clone - 3.6.9] Pass through host CPU is not enabled with 
manual migration
1361500 - [downstream clone] CPU Profile is not assigned when changing it on a 
running VM
1362001 - [z-stream clone - 3.6.9] RunVm endAction throws NPE when starting VM 
from Pool
1367519 - VmsStatisticsFetcher excessive logging in engine.log (clone of bug 
1366138 for 3.6.9)
1369415 - [z-stream clone - 3.6.9] [InClusterUpgrade] Possible race condition 
with large amount of VMs in cluster
1369695 - [downstream clone - 3.6.9] password DWH_DB_PASSWORD not hidden
1372812 - [z-stream clone - 3.6.9] HA VMs are not restarted on different host 
if NonResponsive host is off and start action failed
1373090 - [downstream clone - 3.6.9] [Upgrade] Cluster compatibility upgrade 
3.6-> 4.0 failed on a specific system

6. Package List:

RHEV-M 3.6:

Source:
rhevm-3.6.9.2-0.1.el6.src.rpm

noarch:
rhevm-3.6.9.2-0.1.el6.noarch.rpm
rhevm-backend-3.6.9.2-0.1.el6.noarch.rpm
rhevm-dbscripts-3.6.9.2-0.1.el6.noarch.rpm
rhevm-extensions-api-impl-3.6.9.2-0.1.el6.noarch.rpm
rhevm-extensions-api-impl-javadoc-3.6.9.2-0.1.el6.noarch.rpm
rhevm-lib-3.6.9.2-0.1.el6.noarch.rpm
rhevm-restapi-3.6.9.2-0.1.el6.noarch.rpm
rhevm-setup-3.6.9.2-0.1.el6.noarch.rpm
rhevm-setup-base-3.6.9.2-0.1.el6.noarch.rpm
rhevm-setup-plugin-ovirt-engine-3.6.9.2-0.1.el6.noarch.rpm
rhevm-setup-plugin-ovirt-engine-common-3.6.9.2-0.1.el6.noarch.rpm
rhevm-setup-plugin-vmconsole-proxy-helper-3.6.9.2-0.1.el6.noarch.rpm
rhevm-setup-plugin-websocket-proxy-3.6.9.2-0.1.el6.noarch.rpm
rhevm-tools-3.6.9.2-0.1.el6.noarch.rpm
rhevm-tools-backup-3.6.9.2-0.1.el6.noarch.rpm
rhevm-userportal-3.6.9.2-0.1.el6.noarch.rpm
rhevm-userportal-debuginfo-3.6.9.2-0.1.el6.noarch.rpm
rhevm-vmconsole-proxy-helper-3.6.9.2-0.1.el6.noarch.rpm
rhevm-webadmin-portal-3.6.9.2-0.1.el6.noarch.rpm
rhevm-webadmin-portal-debuginfo-3.6.9.2-0.1.el6.noarch.rpm
rhevm-websocket-proxy-3.6.9.2-0.1.el6.noarch.rpm

These packages are GPG signed by Red Hat for security.  Our key and
details on how to verify the signature are available from
https://access.redhat.com/security/team/key/
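Once the Red Hat release key is imported (see the URL above), the signature on a downloaded package can be checked with rpm. A brief sketch:

```shell
# Verify the GPG signature and digests of a downloaded package.
# Requires the Red Hat release key to be imported into the rpm keyring.
rpm --checksig rhevm-3.6.9.2-0.1.el6.noarch.rpm
```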

7. References:

https://access.redhat.com/security/cve/CVE-2016-4443
https://access.redhat.com/security/updates/classification/#moderate

8. Contact:

The Red Hat security contact is <secal...@redhat.com>. More contact
details at https://access.redhat.com/security/team/contact/

Copyright 2016 Red Hat, Inc.
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1

iD8DBQFX4vRsXlSAg2UNWIIRAlEiAJ0TwQ7tJh11JnjkiTe2eibRxdv3KQCeOFUo
7Af0AkKs6S5R6nzp4xbJxfw=
=4qS1
-----END PGP SIGNATURE-----


--
RHSA-announce mailing list
RHSA-announce@redhat.com
https://www.redhat.com/mailman/listinfo/rhsa-announce