Re: [ovirt-users] Performance of cloning

2017-09-28 Thread Nir Soffer
On Thu, Sep 28, 2017 at 1:39 PM Gianluca Cecchi 
wrote:

> On Thu, Sep 28, 2017 at 11:02 AM, Gianluca Cecchi <
> gianluca.cec...@gmail.com> wrote:
>
>> Hello,
>> I'm on 4.1.5 and I'm cloning a snapshot of a VM with 3 disks for a total
>> of about 200Gb to copy
>> The target I choose is on a different domain than the source one.
>> They are both FC storage domains, with the source on SSD disks and the
>> target on SAS disks.
>>
>> The disks are preallocated
>>
>> Now I have 3 processes of kind:
>> /usr/bin/qemu-img convert -p -t none -T none -f raw
>> /rhev/data-center/59b7af54-0155-01c2-0248-0195/fad05d79-254d-4f40-8201-360757128ede/images/8f62600a-057d-4d59-9655-631f080a73f6/21a8812f-6a89-4015-a79e-150d7e202450
>> -O raw
>> /rhev/data-center/mnt/blockSD/6911716c-aa99-4750-a7fe-f83675a2d676/images/c3973d1b-a168-4ec5-8c1a-630cfc4b66c4/27980581-5935-4b23-989a-4811f80956ca
>>
>> but despite capabilities it seems it is copying using very low system
>> resources.
>>
>> [snip]
>
>>
>> Is it expected? Any way to speed up the process?
>>
>> Thanks,
>> Gianluca
>>
>
> The cloning process elapsed time was 101 minutes
> The 3 disks are 85Gb, 20Gb and 80Gb so at the end an average of 30MB/s
>
> At this moment I have only one host with the self-hosted engine VM running in
> this environment, planning to add another host in a short time.
> So I have not yet configured power management for fencing on it
> During the cloning I saw these kind of events
>
> Sep 28, 2017 10:31:30 AM VM vmclone1 creation was initiated by
> admin@internal-authz.
> Sep 28, 2017 11:16:38 AM VDSM command SetVolumeDescriptionVDS failed:
> Message timeout which can be caused by communication issues
>
> Sep 28, 2017 11:19:43 AM VDSM command SetVolumeDescriptionVDS failed:
> Message timeout which can be caused by communication issues
>

This looks like a timeout on the vdsm side; you can find more info in the vdsm
log about these errors.
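
For example, something like this on the host should surface the relevant
entries around 11:16-11:20 (a rough sketch, assuming the default log location
/var/log/vdsm/vdsm.log):

  # look for the failing verb and any timeout warnings
  grep -nE 'SetVolumeDescription|[Tt]imeout' /var/log/vdsm/vdsm.log | tail -n 100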


> Sep 28, 2017 11:19:43 AM Failed to update OVF disks
> 1504a878-4fe2-40df-a88f-6f073be0bd7b, 4ddac3ed-2bb9-485c-bf57-1750ac1fd761,
> OVF data isn't updated on those OVF stores (Data Center DC1, Storage Domain
> SDTEST).
>

Same


> At 11:24 I then start a pre-existing VM named benchvm and run a cpu / I/O
> benchmark (HammerDB with 13 concurrent users; the VM is configured with 12
> vcpus (1:6:2) and 64Gb of ram; it is not the one I'm cloning) that runs
> from 11:40 to 12:02
> Sep 28, 2017 11:24:29 AM VM benchvm started on Host host1
> Sep 28, 2017 11:45:18 AM Host host1 is not responding. Host cannot be
> fenced automatically because power management for the host is disabled.
>

Same


> Sep 28, 2017 11:45:28 AM Failed to update OVF disks
> 1504a878-4fe2-40df-a88f-6f073be0bd7b, 4ddac3ed-2bb9-485c-bf57-1750ac1fd761,
> OVF data isn't updated on those OVF stores (Data Center DC1, Storage Domain
> SDTEST).
>

Same


> Sep 28, 2017 11:45:39 AM Status of host host1 was set to Up.
> Sep 28, 2017 12:12:31 PM VM vmclone1 creation has been completed.
>
> Any hint on the failures detected, both when only the cloning process was
> in place and when a bench was running inside a VM?
>

I would check the host logs (vdsm and messages).
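
For instance, to narrow it down around the 11:45 "not responding" event
(a sketch, assuming default log locations on the host):

  # vdsm side
  grep -nE 'WARN|ERROR' /var/log/vdsm/vdsm.log | grep '11:4' | tail -n 50
  # system side: sanlock, multipath and kernel I/O errors usually land here
  grep -n '11:4[0-9]' /var/log/messages | tail -n 100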


>
> Thanks,
> Gianluca
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] iSCSI VLAN host connections - bond or multipath & IPv6

2017-09-28 Thread Ben Bradley

On 28/09/17 08:32, Yaniv Kaul wrote:



On Wed, Sep 27, 2017 at 10:59 PM, Ben Bradley wrote:


Hi All

I'm looking to add a new host to my oVirt lab installation.
I'm going to share out some LVs from a separate box over iSCSI and
will hook the new host up to that.
I have 2 NICs on the storage host and 2 NICs on the new Ovirt host
to dedicate to the iSCSI traffic.
I also have 2 separate switches so I'm looking for redundancy here.
Both iSCSI host and oVirt host plugged into both switches.

If this was non-iSCSI traffic and without oVirt I would create
bonded interfaces in active-backup mode and layer the VLANs on top
of that.

But for iSCSI traffic without oVirt involved I wouldn't bother with
a bond and just use multipath.

 From scanning the oVirt docs it looks like there is an option to
have oVirt configure iSCSI multipathing.


Look for iSCSI bonding - that's the feature you are looking for.


Thanks for the replies.

By iSCSI bonding, do you mean the oVirt feature "iSCSI multipathing" as 
mentioned here 
https://www.ovirt.org/documentation/admin-guide/chap-Storage/ ?


Separate links seem to be the consensus then. Since these are links
dedicated to iSCSI traffic, not shared, the ovirtmgmt bridge lives on
top of an active-backup bond on other NICs.
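
For what it's worth, that non-iSCSI active-backup bond is just the usual
initscripts-style config, roughly like this sketch (NIC names are placeholders
for my setup):

  # /etc/sysconfig/network-scripts/ifcfg-bond0
  DEVICE=bond0
  TYPE=Bond
  BONDING_MASTER=yes
  BONDING_OPTS="mode=active-backup miimon=100"
  BOOTPROTO=none
  ONBOOT=yes

  # /etc/sysconfig/network-scripts/ifcfg-em1  (repeated for the second NIC)
  DEVICE=em1
  MASTER=bond0
  SLAVE=yes
  ONBOOT=yes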


Thanks, Ben


So what's the best/most-supported option for oVirt?
Manually create active-backup bonds so oVirt just sees a single
storage link between host and storage?
Or leave them as separate interfaces on each side and use oVirt's
multipath/bonding?

Also I quite like the idea of using IPv6 for the iSCSI VLAN, purely
down to the fact I could use link-local addressing and not have to
worry about setting up static IPv4 addresses or DHCP. Is IPv6 iSCSI
supported by oVirt?


No, we do not. There has been some work in the area[1], but I'm not sure 
it is complete.

Y.

[1] 
https://gerrit.ovirt.org/#/q/status:merged+project:vdsm+branch:master+topic:ipv6-iscsi-target-support



Thanks, Ben
___
Users mailing list
Users@ovirt.org 
http://lists.ovirt.org/mailman/listinfo/users





___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] oVirt 4.2 alpha upgrade failed

2017-09-28 Thread Maton, Brett
Found the offending VM: it was the hosted engine itself.

  RAM 4096
  Max RAM 0

  I don't think I've touched the system settings on the hosted engine vm
itself though.

This (testlab) cluster was originally installed with 4.0 (I think) and gets
constantly upgraded with the pre-releases...

After a bit of faffing around to clean up leftovers of the failed upgrade,
I'm now running v4.2.0 and it's certainly different :)

  I did have one firefox browser that refused to load the dashboard
reporting 500 errors, but that appears to have been down to cached content
(on OS X).

  I look forward to playing with the new UI.

Thanks,
Brett

On 28 September 2017 at 19:29, Maton, Brett 
wrote:

> Thanks Tomas,
>
>   I'm restoring the backup at the moment, I'll let you know how it goes on
> the next attempt.
>
> On 28 September 2017 at 18:46, Tomas Jelinek  wrote:
>
>> Hey Brett,
>>
>> That is strange - it looks like you have some VM which has memory size
>> larger than the max memory size.
>>
>> You need to go over your VMs / templates to find which one has this wrong
>> config and change it.
>> Alternatively, to find it faster if you have many vms/templates, you
>> could run this SQL query against your engine database:
>> select vm_name from vm_static where mem_size_mb > max_memory_size_mb;
>>
>> Tomas
>>
>> On Thu, Sep 28, 2017 at 6:07 PM, Maton, Brett 
>> wrote:
>>
>>> Upgrading from oVirt 4.1.7
>>>
>>> hosted-engine VM:
>>> 4GB RAM
>>>
>>> hosted-engine setup failed, setup log shows this error:
>>>
>>> Running upgrade sql script '/usr/share/ovirt-engine/dbscr
>>> ipts/upgrade/04_02_0140_add_max_memory_constraint.sql'...
>>>
>>> 2017-09-28 16:56:22,951+0100 DEBUG 
>>> otopi.plugins.ovirt_engine_setup.ovirt_engine.db.schema
>>> plugin.execute:926 execute-output: 
>>> ['/usr/share/ovirt-engine/dbscripts/schema.sh',
>>> '-s', 'localhost', '-p', '5432', '-u', 'engine', '-d', 'engine', '-l',
>>> '/var/log/ovirt-engine/setup/ovirt-engine-setup-20170928164338-0rkilb.log',
>>> '-c', 'apply'] stderr:
>>> psql:/usr/share/ovirt-engine/dbscripts/upgrade/04_02_0140_add_max_memory_constraint.sql:2:
>>> ERROR:  check constraint "vm_static_max_memory_size_lower_bound" is
>>> violated by some row
>>> FATAL: Cannot execute sql command: --file=/usr/share/ovirt-engine
>>> /dbscripts/upgrade/04_02_0140_add_max_memory_constraint.sql
>>>
>>> 2017-09-28 16:56:22,951+0100 ERROR 
>>> otopi.plugins.ovirt_engine_setup.ovirt_engine.db.schema
>>> schema._misc:374 schema.sh: FATAL: Cannot execute sql command:
>>> --file=/usr/share/ovirt-engine/dbscripts/upgrade/04_02_0140_
>>> add_max_memory_constraint.sql
>>> 2017-09-28 16:56:22,952+0100 DEBUG otopi.context
>>> context._executeMethod:143 method exception
>>> Traceback (most recent call last):
>>>   File "/usr/lib/python2.7/site-packages/otopi/context.py", line 133,
>>> in _executeMethod
>>> method['method']()
>>>   File "/usr/share/ovirt-engine/setup/bin/../plugins/ovirt-engine-s
>>> etup/ovirt-engine/db/schema.py", line 376, in _misc
>>> raise RuntimeError(_('Engine schema refresh failed'))
>>> RuntimeError: Engine schema refresh failed
>>>
>>>
>>>
>>> What's the minimum RAM required now ?
>>>
>>> Regards,
>>> Brett
>>>
>>> ___
>>> Users mailing list
>>> Users@ovirt.org
>>> http://lists.ovirt.org/mailman/listinfo/users
>>>
>>>
>>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] oVirt 4.2 alpha upgrade failed

2017-09-28 Thread Maton, Brett
Thanks Tomas,

  I'm restoring the backup at the moment, I'll let you know how it goes on
the next attempt.

On 28 September 2017 at 18:46, Tomas Jelinek  wrote:

> Hey Brett,
>
> That is strange - it looks like you have some VM which has memory size
> larger than the max memory size.
>
> You need to go over your VMs / templates to find which one has this wrong
> config and change it.
> Alternatively, to find it faster if you have many vms/templates, you could
> run this SQL query against your engine database:
> select vm_name from vm_static where mem_size_mb > max_memory_size_mb;
>
> Tomas
>
> On Thu, Sep 28, 2017 at 6:07 PM, Maton, Brett 
> wrote:
>
>> Upgrading from oVirt 4.1.7
>>
>> hosted-engine VM:
>> 4GB RAM
>>
>> hosted-engine setup failed, setup log shows this error:
>>
>> Running upgrade sql script '/usr/share/ovirt-engine/dbscr
>> ipts/upgrade/04_02_0140_add_max_memory_constraint.sql'...
>>
>> 2017-09-28 16:56:22,951+0100 DEBUG 
>> otopi.plugins.ovirt_engine_setup.ovirt_engine.db.schema
>> plugin.execute:926 execute-output: 
>> ['/usr/share/ovirt-engine/dbscripts/schema.sh',
>> '-s', 'localhost', '-p', '5432', '-u', 'engine', '-d', 'engine', '-l',
>> '/var/log/ovirt-engine/setup/ovirt-engine-setup-20170928164338-0rkilb.log',
>> '-c', 'apply'] stderr:
>> psql:/usr/share/ovirt-engine/dbscripts/upgrade/04_02_0140_add_max_memory_constraint.sql:2:
>> ERROR:  check constraint "vm_static_max_memory_size_lower_bound" is
>> violated by some row
>> FATAL: Cannot execute sql command: --file=/usr/share/ovirt-engine
>> /dbscripts/upgrade/04_02_0140_add_max_memory_constraint.sql
>>
>> 2017-09-28 16:56:22,951+0100 ERROR 
>> otopi.plugins.ovirt_engine_setup.ovirt_engine.db.schema
>> schema._misc:374 schema.sh: FATAL: Cannot execute sql command:
>> --file=/usr/share/ovirt-engine/dbscripts/upgrade/04_02_0140_
>> add_max_memory_constraint.sql
>> 2017-09-28 16:56:22,952+0100 DEBUG otopi.context
>> context._executeMethod:143 method exception
>> Traceback (most recent call last):
>>   File "/usr/lib/python2.7/site-packages/otopi/context.py", line 133, in
>> _executeMethod
>> method['method']()
>>   File "/usr/share/ovirt-engine/setup/bin/../plugins/ovirt-engine-
>> setup/ovirt-engine/db/schema.py", line 376, in _misc
>> raise RuntimeError(_('Engine schema refresh failed'))
>> RuntimeError: Engine schema refresh failed
>>
>>
>>
>> What's the minimum RAM required now ?
>>
>> Regards,
>> Brett
>>
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] oVirt 4.2 alpha upgrade failed

2017-09-28 Thread Tomas Jelinek
Hey Brett,

That is strange - it looks like you have some VM which has memory size
larger than the max memory size.

You need to go over your VMs / templates to find which one has this wrong
config and change it.
Alternatively, to find it faster if you have many vms/templates, you could
run this SQL query against your engine database:
select vm_name from vm_static where mem_size_mb > max_memory_size_mb;
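
If it helps, that query can be run directly on the engine machine with psql;
the database name and user below are the setup defaults, and the credentials
normally live in /etc/ovirt-engine/engine.conf.d/10-setup-database.conf
(adjust if your setup differs):

  # run the check against the engine DB as the postgres user
  su - postgres -c "psql -d engine -c \"select vm_name from vm_static where mem_size_mb > max_memory_size_mb;\""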

Tomas

On Thu, Sep 28, 2017 at 6:07 PM, Maton, Brett 
wrote:

> Upgrading from oVirt 4.1.7
>
> hosted-engine VM:
> 4GB RAM
>
> hosted-engine setup failed, setup log shows this error:
>
> Running upgrade sql script '/usr/share/ovirt-engine/
> dbscripts/upgrade/04_02_0140_add_max_memory_constraint.sql'...
>
> 2017-09-28 16:56:22,951+0100 DEBUG 
> otopi.plugins.ovirt_engine_setup.ovirt_engine.db.schema
> plugin.execute:926 execute-output: 
> ['/usr/share/ovirt-engine/dbscripts/schema.sh',
> '-s', 'localhost', '-p', '5432', '-u', 'engine', '-d', 'engine', '-l',
> '/var/log/ovirt-engine/setup/ovirt-engine-setup-20170928164338-0rkilb.log',
> '-c', 'apply'] stderr:
> psql:/usr/share/ovirt-engine/dbscripts/upgrade/04_02_0140_
> add_max_memory_constraint.sql:2: ERROR:  check constraint
> "vm_static_max_memory_size_lower_bound" is violated by some row
> FATAL: Cannot execute sql command: --file=/usr/share/ovirt-
> engine/dbscripts/upgrade/04_02_0140_add_max_memory_constraint.sql
>
> 2017-09-28 16:56:22,951+0100 ERROR 
> otopi.plugins.ovirt_engine_setup.ovirt_engine.db.schema
> schema._misc:374 schema.sh: FATAL: Cannot execute sql command:
> --file=/usr/share/ovirt-engine/dbscripts/upgrade/04_
> 02_0140_add_max_memory_constraint.sql
> 2017-09-28 16:56:22,952+0100 DEBUG otopi.context
> context._executeMethod:143 method exception
> Traceback (most recent call last):
>   File "/usr/lib/python2.7/site-packages/otopi/context.py", line 133, in
> _executeMethod
> method['method']()
>   File "/usr/share/ovirt-engine/setup/bin/../plugins/ovirt-
> engine-setup/ovirt-engine/db/schema.py", line 376, in _misc
> raise RuntimeError(_('Engine schema refresh failed'))
> RuntimeError: Engine schema refresh failed
>
>
>
> What's the minimum RAM required now ?
>
> Regards,
> Brett
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] oVirt 4.2 alpha upgrade failed

2017-09-28 Thread Maton, Brett
Upgrading from oVirt 4.1.7

hosted-engine VM:
4GB RAM

hosted-engine setup failed, setup log shows this error:

Running upgrade sql script
'/usr/share/ovirt-engine/dbscripts/upgrade/04_02_0140_add_max_memory_constraint.sql'...

2017-09-28 16:56:22,951+0100 DEBUG
otopi.plugins.ovirt_engine_setup.ovirt_engine.db.schema plugin.execute:926
execute-output: ['/usr/share/ovirt-engine/dbscripts/schema.sh', '-s',
'localhost', '-p', '5432', '-u', 'engine', '-d', 'engine', '-l',
'/var/log/ovirt-engine/setup/ovirt-engine-setup-20170928164338-0rkilb.log',
'-c', 'apply'] stderr:
psql:/usr/share/ovirt-engine/dbscripts/upgrade/04_02_0140_add_max_memory_constraint.sql:2:
ERROR:  check constraint "vm_static_max_memory_size_lower_bound" is
violated by some row
FATAL: Cannot execute sql command:
--file=/usr/share/ovirt-engine/dbscripts/upgrade/04_02_0140_add_max_memory_constraint.sql

2017-09-28 16:56:22,951+0100 ERROR
otopi.plugins.ovirt_engine_setup.ovirt_engine.db.schema schema._misc:374
schema.sh: FATAL: Cannot execute sql command:
--file=/usr/share/ovirt-engine/dbscripts/upgrade/04_02_0140_add_max_memory_constraint.sql
2017-09-28 16:56:22,952+0100 DEBUG otopi.context context._executeMethod:143
method exception
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/otopi/context.py", line 133, in
_executeMethod
method['method']()
  File
"/usr/share/ovirt-engine/setup/bin/../plugins/ovirt-engine-setup/ovirt-engine/db/schema.py",
line 376, in _misc
raise RuntimeError(_('Engine schema refresh failed'))
RuntimeError: Engine schema refresh failed



What's the minimum RAM required now ?

Regards,
Brett
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] changing ip of host and its ovirtmgmt vlan

2017-09-28 Thread Alona Kaplan
On Wed, Sep 27, 2017 at 5:48 PM, Gianluca Cecchi 
wrote:

> On Wed, Sep 27, 2017 at 4:25 PM, Michael Burman 
> wrote:
>
>> Hello Gianluca,
>>
>> Not sure I fully understood, but if the host's IP has changed and the
>> VLAN, then the correct flow will be:
>>
>> 1) Remove the host
>> 2) Edit the management network with vlan tag - the vlan you need/want
>> 3) Add/install the host - make sure you using the correct/new IP(if using
>> IP) or the correct FQDN(if has changed).
>>
>> Note that doing things manually on the host such as changing ovirtmgmt's
>> configuration without the engine or vdsm may cause problems and will not
>> persist the changes during reboots. If the host's IP has changed or its
>> FQDN, then you must install the host again.
>>
>> Cheers)
>>
>
> Original situation was:
>
> 1 DC
> 1 Cluster: CLA with 2 hosts: host1 and host2
> ovirtmgmt defined on vlan10
>
> engine is an external server on VLAN5 that can reach VLAN10 of hosts
> So far so good
>
> Need to add another host that is in another physical server room. Here
> VLAN10 is not present, so I cannot set ovirtmgmt
> If I understand correctly, the VLAN assigned to ovirtmgmt is a DC
> property: I cannot have different vlans assigned to ovirtmgmt in different
> clusters of the same DC, correct?
>
> So the path:
> Create a second cluster CLB and define on it the logical network
> ovirtmgmt2 on VLAN20 and set it as the mgmt network for that cluster
> Add the new host host3 to CLB.
>
> So far so good: the engine on VLAN5 is able to manage the hosts of CLA ad
> CLB with their mgmt networks in VLAN10 and VLAN20
>
> Now it is decided to create a new VLAN30 that is transportable across
> the 2 physical locations and to have host1, host2, host3 to be part of a
> new CLC cluster where the mgmt network is now on VLAN30
>
> Can I simplify operations, as many VMs are already in place in CLA and CLB?
> So the question arises:
>
> the 3 hosts were added originally using their dns hostname and not their
> IP address.
> Can I change my dns settings so that the engine resolves the hostnames
> with the new IPs and change vlan of ovirtmgmt?
>

You can -
1. Move the host to maintenance mode.
2. Change the cluster of the hosts to the new one.
3. Run setupNetworks + save network configuration directly on the host,
removing the old management network and configuring the new one (with the new
VLAN and IP).
 Petr, can you please provide the syntax of the command?

* You may first try doing this step via the UI. Make sure you uncheck the
'Verify connectivity between Host and Engine' checkbox in the Setup Networks
window. I'm not sure it will work; maybe the engine will block it since you're
trying to touch the NIC with the management IP.

4. Change the dns setting to resolve to the new IPs.

>
> And if I decide to start from scratch with this new cluster CLC on VLAN30,
> can I retain my old 3 hostnames (resolving to their new IPs)? How?
>

You can -
1. Remove the host from the original clusters.
2. Remove all the networks from the host using Petr's vdsm tool (
https://gerrit.ovirt.org/#/c/79495/).
3. Change the DNS setting to resolve to the new IPs.
4. Add the host to the new cluster.


>
> Hope I was able to clarify a bit the scenario
>
> Gianluca
>
>
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] [ANN] oVirt 4.2.0 First Alpha Release is now available for testing

2017-09-28 Thread Gianluca Cecchi
On Thu, Sep 28, 2017 at 5:06 PM, Sandro Bonazzola 
wrote:

> The oVirt Project is pleased to announce the availability of the First
> Alpha Release of oVirt 4.2.0, as of September 28th, 2017
>
>
>
Good news!
Any chance of having ISO and Export domains on storage types that are not
NFS in 4.2?
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] [ANN] oVirt 4.2.0 First Alpha Release is now available for testing

2017-09-28 Thread Sandro Bonazzola
The oVirt Project is pleased to announce the availability of the First
Alpha Release of oVirt 4.2.0, as of September 28th, 2017

This is pre-release software. This pre-release should not be used in
production.

Please take a look at our community page[1] to learn how to ask questions
and interact with developers and users.
All issues or bugs should be reported via oVirt Bugzilla[2].

This update is the first alpha release of the 4.2.0 version. This release
brings more than 120 enhancements and more than 670 bug fixes, including
more than 260 high or urgent severity fixes, on top of oVirt 4.1 series.

What's new in oVirt 4.2.0?


   - The Administration Portal has been completely redesigned using
     Patternfly, a widely adopted standard in web application design. It now
     features a cleaner, more intuitive design, for an improved user experience.

   - There is an all-new VM Portal for non-admin users.

   - A new High Performance virtual machine type has been added to the New VM
     dialog box in the Administration Portal.

   - Open Virtual Network (OVN) adds support for Open vSwitch software
     defined networking (SDN).

   - oVirt now supports Nvidia vGPU.

   - The ovirt-ansible-roles package helps users with common administration
     tasks.

   - Virt-v2v now supports Debian/Ubuntu based VMs.


For more information about these and other features, check out the oVirt
4.2.0 blog post .

This release is available now on x86_64 architecture for:

* Red Hat Enterprise Linux 7.4 or later

* CentOS Linux (or similar) 7.4 or later

This release supports Hypervisor Hosts on x86_64 and ppc64le architectures
for:

* Red Hat Enterprise Linux 7.4 or later

* CentOS Linux (or similar) 7.4 or later

* oVirt Node 4.2 (available for x86_64 only)

See the release notes draft [3] for installation / upgrade instructions and
a list of new features and bugs fixed.

Notes:

- oVirt Appliance is already available.

- An async release of oVirt Node will follow soon.

Additional Resources:

* Read more about the oVirt 4.2.0 release highlights:
http://www.ovirt.org/release/4.2.0/

* Get more oVirt project updates on Twitter: https://twitter.com/ovirt

* Check out the latest project news on the oVirt blog:
http://www.ovirt.org/blog/

[1] https://www.ovirt.org/community/

[2] https://bugzilla.redhat.com/enter_bug.cgi?classification=oVirt

[3] http://www.ovirt.org/release/4.2.0/

[4] http://resources.ovirt.org/pub/ovirt-4.2-pre/iso/


-- 

SANDRO BONAZZOLA

ASSOCIATE MANAGER, SOFTWARE ENGINEERING, EMEA ENG VIRTUALIZATION R

Red Hat EMEA 

TRIED. TESTED. TRUSTED. 

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] [ovirt-announce][ANN] oVirt 4.1.7 Second Release Candidate is now available

2017-09-28 Thread Lev Veyde
The oVirt Project is pleased to announce the availability of the Second
Release Candidate of oVirt 4.1.7, as of September 28th, 2017

This update is the seventh in a series of stabilization updates to the 4.1
series.

Starting from 4.1.5 oVirt supports libgfapi [5]. Using libgfapi provides a
real performance boost for oVirt when using GlusterFS.
Due to a known issue [6], using this will break live storage migration.
This is expected to be fixed soon. If you do not use live storage
migration you can give it a try. Use [7] for more details on how to enable
it.

This release is available now for:
* Red Hat Enterprise Linux 7.4 or later
* CentOS Linux (or similar) 7.4 or later

This release supports Hypervisor Hosts running:
* Red Hat Enterprise Linux 7.4 or later
* CentOS Linux (or similar) 7.4 or later
* oVirt Node 4.1

See the release notes draft [3] for installation / upgrade instructions and
a list of new features and bugs fixed.

Notes:
- oVirt Appliance is already available
- oVirt Live is already available[4]
- oVirt Node is already available[4]

Additional Resources:
* Read more about the oVirt 4.1.7 release highlights:
http://www.ovirt.org/release/4.1.7/
* Get more oVirt Project updates on Twitter: https://twitter.com/ovirt
* Check out the latest project news on the oVirt blog:
http://www.ovirt.org/blog/

[1] https://www.ovirt.org/community/
[2] https://bugzilla.redhat.com/enter_bug.cgi?classification=oVirt
[3] http://www.ovirt.org/release/4.1.7/
[4] http://resources.ovirt.org/pub/ovirt-4.1-pre/iso/
[5]
http://staged-gluster-docs.readthedocs.io/en/release3.7.0beta1/Features/libgfapi/
[6] https://bugzilla.redhat.com/show_bug.cgi?id=1306562
[7]
http://www.ovirt.org/develop/release-management/features/storage/glusterfs-storage-domain/


Thanks in advance,
-- 

Lev Veyde

Software Engineer, RHCE | RHCVA | MCITP

Red Hat Israel



l...@redhat.com | lve...@redhat.com

TRIED. TESTED. TRUSTED. 
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Host cannot connect to hosted storage domain

2017-09-28 Thread Alexander Witte
What is the correct procedure to change the hosted_storage NFS path?

Right now:
localhost:/shares
Change to:
menmaster.traindemo.local:/shares

1) Put VM in global maintenance
2) Shutdown VM
3) Edit /etc/ovirt-hosted-engine/hosted-engine.conf
4) Restart VM
5) Exit global maintenance

Is this correct?
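
If that is the right flow, the line to change in step 3 is presumably just the
storage entry, something like (a sketch; check the exact key name in your
hosted-engine.conf before editing):

  # /etc/ovirt-hosted-engine/hosted-engine.conf
  storage=menmaster.traindemo.local:/shares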

I think having localhost in the storage domain path is preventing hosts from
being added to the oVirt datacenter object.

Thanks,

Alex Witte



On Sep 27, 2017, at 11:23 PM, Alexander Witte wrote:

OK, after a host reboot I was able to get the Engine VM up again and into the
web interface. However, whenever I try to add a second host to the datacenter
within oVirt I run into this error:

"Host mennode2 cannot access the Storage Domain(s) hosted_storage attached to 
the Data Center Train1.  Setting Host state to Non Operational."

Note:  I can successfully read (and mount) the NFS exports oVirt is complaining 
about:

[root@mennode2 ~]# showmount -e menmaster.traindemo.local
Export list for menmaster.traindemo.local:
/shares *
/shares/exports *
/shares/data*
/shares/isos*
[root@mennode2 ~]#

[root@mennode2 tmp]# mount -t nfs menmaster.traindemo.local:/shares test
[root@mennode2 tmp]# cd test
[root@mennode2 test]# ls
7d18ff24-57a3-4b4a-9934-0263191fe2e4  data  __DIRECT_IO_TEST__  exports  isos
[root@mennode2 test]#

One thing I DO notice is that the path for the hosted_storage domain differs
from the paths of my other exports. I wonder if the second host would have
issues resolving this?

Data           ==> menmaster.traindemo.local:/shares/data
Export         ==> menmaster.traindemo.local:/shares/exports
Hosted_storage ==> localhost:/shares
ISO            ==> menmaster.traindemo.local:/shares/exports


Below I have copied excerpts from the VDSM and OVIRT-ENGINE logs and have copied
the output of the hosted-engine.conf file. Any help in pinpointing the source
of the problem is greatly appreciated!!


VSDM logs:

2017-09-27 23:10:01,700-0400 INFO  (jsonrpc/0) [jsonrpc.JsonRpcServer] RPC call 
Host.getStats succeeded in 0.06 seconds (__init__:539)
2017-09-27 23:10:03,104-0400 INFO  (jsonrpc/5) [vdsm.api] START 
getSpmStatus(spUUID=u'59c7f8f3-0063-00a8-02c7-00f3', options=None) 
from=:::10.0.0.227,39748, flow_id=19e2dbb3, 
task_id=7a05a5bd-9b15-43be-890d-c6f5d7650e5c (api:46)
2017-09-27 23:10:03,109-0400 INFO  (jsonrpc/5) [vdsm.api] FINISH getSpmStatus 
return={'spm_st': {'spmId': 1, 'spmStatus': 'SPM', 'spmLver': 8L}} 
from=:::10.0.0.227,39748, flow_id=19e2dbb3, 
task_id=7a05a5bd-9b15-43be-890d-c6f5d7650e5c (api:52)
2017-09-27 23:10:03,110-0400 INFO  (jsonrpc/5) [jsonrpc.JsonRpcServer] RPC call 
StoragePool.getSpmStatus succeeded in 0.01 seconds (__init__:539)
2017-09-27 23:10:03,189-0400 INFO  (jsonrpc/7) [vdsm.api] START 
getStoragePoolInfo(spUUID=u'59c7f8f3-0063-00a8-02c7-00f3', 
options=None) from=:::10.0.0.227,39866, flow_id=19e2dbb3, 
task_id=0f8b49e9-9d82-457e-a2a2-39dc7ed9f022 (api:46)
2017-09-27 23:10:03,196-0400 INFO  (jsonrpc/7) [vdsm.api] FINISH 
getStoragePoolInfo return={'info': {'name': 'No Description', 'isoprefix': 
u'/rhev/data-center/mnt/menmaster.traindemo.local:_shares_isos/da001a29-eca5-44d6-a097-129dd9be623f/images/----',
 'pool_status': 'connected', 'lver': 8L, 'domains': 
u'da001a29-eca5-44d6-a097-129dd9be623f:Active,f36157cc-b25a-400a-ab0f-a071e8a8eea7:Active,7d18ff24-57a3-4b4a-9934-0263191fe2e4:Active,795d4a1d-3ceb-4773-99de-8e7cf05112f3:Active',
 'master_uuid': u'f36157cc-b25a-400a-ab0f-a071e8a8eea7', 'version': '4', 
'spm_id': 1, 'type': 'NFS', 'master_ver': 1}, 'dominfo': 
{u'da001a29-eca5-44d6-a097-129dd9be623f': {'status': u'Active', 'diskfree': 
'1044166737920', 'isoprefix': 
u'/rhev/data-center/mnt/menmaster.traindemo.local:_shares_isos/da001a29-eca5-44d6-a097-129dd9be623f/images/----',
 'alerts': [], 'disktotal': '1049702170624', 'version': 0}, 
u'f36157cc-b25a-400a-ab0f-a071e8a8eea7': {'status': u'Active', 'diskfree': 
'1044166737920', 'isoprefix': '', 'alerts': [], 'disktotal': '1049702170624', 
'version': 4}, u'7d18ff24-57a3-4b4a-9934-0263191fe2e4': {'status': u'Active', 
'diskfree': '1044166737920', 'isoprefix': '', 'alerts': [], 'disktotal': 
'1049702170624', 'version': 4}, u'795d4a1d-3ceb-4773-99de-8e7cf05112f3': 
{'status': u'Active', 'diskfree': '1044166737920', 'isoprefix': '', 'alerts': 
[], 'disktotal': '1049702170624', 'version': 0}}} from=:::10.0.0.227,39866, 
flow_id=19e2dbb3, task_id=0f8b49e9-9d82-457e-a2a2-39dc7ed9f022 (api:52)
2017-09-27 23:10:03,198-0400 INFO  (jsonrpc/7) [jsonrpc.JsonRpcServer] RPC call 
StoragePool.getInfo succeeded in 0.01 seconds (__init__:539)
2017-09-27 23:10:04,212-0400 INFO  (Reactor thread) 
[ProtocolDetector.AcceptorImpl] Accepted connection from ::1:35980 
(protocoldetector:72)
2017-09-27 23:10:04,223-0400 INFO  (Reactor thread) 

Re: [ovirt-users] Renaming or deleting ovirtmgmt

2017-09-28 Thread Alona Kaplan
On Thu, Sep 28, 2017 at 3:03 PM, Michael Burman  wrote:

> When you delete the host, all files remain as they were before. Nothing
> changes.
> I have tested this flow today and this is the result:
>
> 1) For example, I have a host installed with ovirtmgmt as my management
> network and I want to change the management network of the cluster or add
> the host to a new cluster with a different management network.
>
> 2) Follow the steps to create another network as the management network (as
> explained in the mail or feature page). Make sure to check the 'default
> route' property as well, and make ovirtmgmt a non-required network as well.
>
> 3) Remove the host - no need to remove any files or packages
>
> 4) Add the host to the new cluster with the new management network.
>
> 5) What will happen is that engine will fail to configure the new
> management network on the host (and it becomes non-operational), but this
> can easily be worked around by going to the setup networks dialog and
> switching between the networks, by detaching ovirtmgmt and attaching the new
> management network. After pressing OK the new management network will be
> saved on the host and the new network configuration will be applied on the
> host.
>
> 6) After this step the host still remains non-operational; all we need to do
> now is 'Activate' the host, and the host becomes operational once again with
> the new network changes in place. All files will be updated
> successfully on the host.
>
> - So it looks like we are not handling such a scenario smoothly and maybe
> this should be fixed.
> If we have a host running with one management network and we want to add
> this host to a different cluster with a different management network, engine
> and vdsm should be able to take care of it during the installation of the
> host and successfully make all the required changes.
>
> Dan, Alona, what do you think? Should we handle such a use case, or is this
> expected?
>

Petr Horacek recently wrote a vdsm-tool to remove networks from a host that
is no longer used by oVirt.
https://gerrit.ovirt.org/#/c/79495/ - link to Petr's code.

It will be officially available on 4.2.


>
>
>
>
>
> On Wed, Sep 27, 2017 at 7:24 PM, Gianluca Cecchi <
> gianluca.cec...@gmail.com> wrote:
>
>>
>>
>> On Wed, Sep 27, 2017 at 4:32 PM, Michael Burman 
>> wrote:
>>
>>> I was referring to situation in which the host was there before, but you
>>> need to re-install it to the new cluster with the new management network.
>>>
>>> No package removing is needed at all. Why would you do steps 2-4? no
>>> need.
>>> Step 5 should work.
>>>
>>> I'm not sure i understand what you mean by ' still I cannot remove the
>>> default ovirtmgmt network from that cluster'??
>>> Do you want to detach it from the cluster?
>>> Do you want to remove it from the DC?
>>> Note that you can change the management network role only if the cluster
>>> has no hosts in it.
>>>
>>>
>>>
>> I will try this way.
>> Normally in my setups the ip on the mgmt network of the host is also the
>> ip corresponding to its hostname
>> When I delete a host from engine, what happens in relation to its network
>> files in /etc/sysconfig/network-scripts/ directory and the contents of
>> /var/lib/vdsm/ directory and its subdirectories?
>>
>>
>
>
>
> --
>
> Michael Burman
>
> Quality engineer - rhv network - redhat israel
>
> Red Hat
>
> 
>
> mbur...@redhat.com    M: 0545355725 IM: mburman
> 
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Huge pages in guest with newer oVirt versions

2017-09-28 Thread Gianluca Cecchi
Sorry for the late reply, I did not have time to give feedback until now.

On Mon, Sep 18, 2017 at 10:33 PM, Arik Hadas  wrote:

>
>
> On Mon, Sep 18, 2017 at 10:50 PM, Martin Polednik 
> wrote:
>
>> The hugepages are no longer a hook, but part of the code base. They
>> can be configured via engine property `hugepages`, where the value of
>> property is size of the pages in KiB (1048576 = 1G, 2048 = 2M).
>>
>
> Note that the question is about 4.1 and it doesn't seem like this change
> was backported to the 4.1 branch, right?
>

And in fact it seems I have not this in 4.1.5 engine:

# engine-config -l | grep -i huge
#

In case it is ok for upcoming 4.2/master, how am I supposed to use it? I
would like to use hugepages at VM level, not engine.
Or do you mean that in 4.2 if I set it and specify 2M for the engine
parameter named "hugepages", then automatically I will see a custom
property inside the VM config section, or where?
Any screenshot of this?

In the mean time I'm using the "old" style with the hook I found here:
http://resources.ovirt.org/pub/ovirt-4.1/rpm/el7/noarch/
vdsm-hook-qemucmdline-4.19.31-1.el7.centos.noarch.rpm
and
vdsm-hook-hugepages-4.19.31-1.el7.centos.noarch.rpm
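
For anyone following along, the hook-era setup boils down to exposing a per-VM
custom property on the engine and then setting it on each VM. Roughly (this is
the mechanism described for the hook, not the new 4.2 code path, so treat the
exact syntax as an assumption and check the hook's README; note it overwrites
any existing UserDefinedVMProperties value):

  # on the engine: allow a per-VM "hugepages" custom property (value = number of 2M pages)
  engine-config -s UserDefinedVMProperties='hugepages=^[0-9]+$' --cver=4.1
  systemctl restart ovirt-engine
  # then set hugepages=17408 in the VM's custom properties (17408 x 2M = 34Gb)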

It works but it seems not to be correctly integrated with what the host
sees... An example:
On the hypervisor I set 90000 huge pages
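
(The reservation itself is just the usual sysctl knob; assuming the
10-huge-pages.conf file mentioned further down, it contains something like:

  # /etc/sysctl.d/10-huge-pages.conf -- reserve 90000 x 2M pages
  vm.nr_hugepages = 90000

and is applied with sysctl -p as shown below.)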

In 3 VMs I want to configure 34Gb of Huge Pages and total memory of 64Gb,
so I set 17408 in their Huge Pages custom property
Before starting any VM on hypervisor I see

# cat /proc/meminfo |grep -i huge
AnonHugePages: 0 kB
HugePages_Total:   90000
HugePages_Free:    90000
HugePages_Rsvd:0
HugePages_Surp:0
Hugepagesize:   2048 kB

When I start the first VM there is the first anomaly:
It becomes:
# cat /proc/meminfo |grep -i huge
AnonHugePages: 0 kB
HugePages_Total:   107408
HugePages_Free:74640
HugePages_Rsvd:0
HugePages_Surp:0
Hugepagesize:   2048 kB

So apparently it allocates 17408 further huge pages, without using the part
of the 90000 it already has free.
But I think this is actually a bug in what /proc shows and not real usage
(see below) perhaps?
Also, it seems it has allocated 64Gb, the entire size of the VM memory, and
not only the 34Gb part...
I don't know if this is correct and, in that case, expected... because
eventually I can choose to increase the number of huge pages of the VM.

Inside the VM vm1 itself it seems correct view:
[root@vm1 ~]# cat /proc/meminfo |grep -i huge
AnonHugePages: 0 kB
HugePages_Total:   17408
HugePages_Free:17408
HugePages_Rsvd:0
HugePages_Surp:0
Hugepagesize:   2048 kB

Note that if I run again on host:
# sysctl -p /etc/sysctl.d/10-huge-pages.conf

it seems it adjusts itself, decreasing the total huge pages, which in theory
should not be possible...?

# cat /proc/meminfo |grep -i huge
AnonHugePages: 0 kB
HugePages_Total:   90000
HugePages_Free:    57232
HugePages_Rsvd:0
HugePages_Surp:0
Hugepagesize:   2048 kB

Again it seems it has allocated 32768 huge pages, so 64Gb, which is the total
memory of the VM.
Now I start the second VM vm2:

At hypervisor level I have now:

# cat /proc/meminfo |grep -i huge
AnonHugePages: 0 kB
HugePages_Total:   107408
HugePages_Free:41872
HugePages_Rsvd:0
HugePages_Surp:0
Hugepagesize:   2048 kB

So again an increment of 17408 huge pages in the total line and a new
allocation of 64Gb of huge pages (total huge pages allocated 32768+32768)

BTW now the free output on host shows:
# free
              total        used        free      shared  buff/cache   available
Mem:      264016436   233105820    29194036      190460     1716580    29747272
Swap:       4194300           0     4194300

with "only" 29Gb free and if I try to run the third VM vm3 I get in fact
the error message:

"
Error while executing action:

vm3:

   - Cannot run VM. There is no host that satisfies current scheduling
   constraints. See below for details:
   - The host ovirt1 did not satisfy internal filter Memory because its
   available memory is too low (33948 MB) to run the VM.

"
Again I run on host:
# sysctl -p /etc/sysctl.d/10-huge-pages.conf

The memory situation on host becomes:

# cat /proc/meminfo |grep -i huge
AnonHugePages: 0 kB
HugePages_Total:   90000
HugePages_Free:    24464
HugePages_Rsvd:0
HugePages_Surp:0
Hugepagesize:   2048 kB

# free
              total        used        free      shared  buff/cache   available
Mem:      264016436   197454740    64844616      190460     1717080    65398696
Swap:       4194300           0     4194300
[root@rhevora1 downloaded_from_upstream]#

And I can now boot the third VM vm3, with the memory output on the host becoming:

# cat /proc/meminfo |grep -i huge
AnonHugePages: 0 kB
HugePages_Total:   107408
HugePages_Free: 9104
HugePages_Rsvd:0
HugePages_Surp:0
Hugepagesize:   2048 kB

# free

Re: [ovirt-users] Performance of cloning

2017-09-28 Thread Kevin Wolf
On 28.09.2017 at 12:44, Nir Soffer wrote:
> On Thu, Sep 28, 2017 at 12:03 PM Gianluca Cecchi 
> wrote:
> 
> > Hello,
> > I'm on 4.1.5 and I'm cloning a snapshot of a VM with 3 disks for a total
> > of about 200Gb to copy
> > The target I choose is on a different domain than the source one.
> > They are both FC storage domains, with the source on SSD disks and the
> > target on SAS disks.
> >
> > The disks are preallocated
> >
> > Now I have 3 processes of kind:
> > /usr/bin/qemu-img convert -p -t none -T none -f raw
> > /rhev/data-center/59b7af54-0155-01c2-0248-0195/fad05d79-254d-4f40-8201-360757128ede/images/8f62600a-057d-4d59-9655-631f080a73f6/21a8812f-6a89-4015-a79e-150d7e202450
> > -O raw
> > /rhev/data-center/mnt/blockSD/6911716c-aa99-4750-a7fe-f83675a2d676/images/c3973d1b-a168-4ec5-8c1a-630cfc4b66c4/27980581-5935-4b23-989a-4811f80956ca
> >
> > but despite capabilities it seems it is copying using very low system
> > resources.
> >
> 
> We run qemu-img convert (and other storage related commands) with:
> 
> nice -n 19 ionice -c 3 qemu-img ...
> 
> ionice should not have any effect unless you use the CFQ I/O scheduler.
> 
> The intent is to limit the effect of virtual machines.
> 
> 
> > I see this both using iotop and vmstat
> >
> > vmstat 3 gives:
> > io -system-- --cpu-
> > bibo   in   cs us sy id wa st
> > 2527   698 3771 29394  1  0 89 10  0
> >
> 
> us 94% also seems very high - maybe this hypervisor is overloaded with
> other workloads?
> wa 89% seems very high

The alignment in the table is a bit off, but us is 1%. The 94 you saw is
part of cs=29394. A high percentage for wait is generally a good sign
because that means that the system is busy with actual I/O work.
Obviously, this I/O work is rather slow, but at least qemu-img is making
requests to the kernel instead of doing other work, otherwise user would
be much higher.
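
If the column alignment keeps getting in the way, a per-device view gives the
same information less ambiguously, e.g. (assuming the sysstat package is
installed on the host):

  # extended per-device stats, in MB/s, refreshed every 3 seconds
  iostat -xm 3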

Kevin
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Performance of cloning

2017-09-28 Thread Gianluca Cecchi
On Thu, Sep 28, 2017 at 2:34 PM, Kevin Wolf  wrote:

> On 28.09.2017 at 12:44, Nir Soffer wrote:
> > On Thu, Sep 28, 2017 at 12:03 PM Gianluca Cecchi <
> gianluca.cec...@gmail.com>
> > wrote:
> >
> > > Hello,
> > > I'm on 4.1.5 and I'm cloning a snapshot of a VM with 3 disks for a
> total
> > > of about 200Gb to copy
> > > The target I choose is on a different domain than the source one.
> > > They are both FC storage domains, with the source on SSD disks and the
> > > target on SAS disks.
>


> [snip]
>


> > >
> > > but despite capabilities it seems it is copying using very low system
> > > resources.
> > >
> >
> > We run qemu-img convert (and other storage related commands) with:
> >
> > nice -n 19 ionice -c 3 qemu-img ...
> >
> > ionice should not have any effect unless you use the CFQ I/O scheduler.
> >
> > The intent is to limit the effect of virtual machines.
> >
>

Ah, ok.
The hypervisor is oVirt Node based on CentOS 7, so the default scheduler
should be deadline if not customized in Node.
And in fact in /sys/block/sd*/queue/scheduler I see only [deadline] contents,
and also for the dm-* block devices, where it is not none, it is
deadline too.



> >
> > > I see this both using iotop and vmstat
> > >
> > > vmstat 3 gives:
> > > io -system-- --cpu-
> > > bibo   in   cs us sy id wa st
> > > 2527   698 3771 29394  1  0 89 10  0
> > >
> >
> > us 94% also seems very high - maybe this hypervisor is overloaded with
> > other workloads?
> > wa 89% seems very high
>
> The alignment in the table is a bit off, but us is 1%. The 94 you saw is
> part of cs=29394. A high percentage for wait is generally a good sign
> because that means that the system is busy with actual I/O work.
> Obviously, this I/O work is rather slow, but at least qemu-img is making
> requests to the kernel instead of doing other work, otherwise user would
> be much higher.
>
> Kevin
>


Yes, probably a misalignment of the output I truncated. Actually a sampling of
about 300 lines (once every 3 seconds) shows these numbers of line
occurrences and the related percentages:

user time
195 0%
 95 1%

So user time is indeed quite low

wait time:
105 7%
 58 8%
 33 9%
 21 6%
 17 10%
 16 5%
 16%
 12 11%
  9 12%
  7 14%
  6 13%
  2 15%
  1 4%
  1 16%
  1 0%

with wait time an average of 7%

At the end, the overall copying performance has been around 30MB/s, which
probably is to be expected due to how the qemu-img process is run.
What about the events I reported instead?

Gianluca
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] iSCSI VLAN host connections - bond or multipath & IPv6

2017-09-28 Thread Yaniv Kaul
On Thu, Sep 28, 2017 at 1:54 PM, Nir Soffer  wrote:

>
>
> On Wed, Sep 27, 2017 at 11:01 PM Ben Bradley  wrote:
>
>> Hi All
>>
>> I'm looking to add a new host to my oVirt lab installation.
>> I'm going to share out some LVs from a separate box over iSCSI and will
>> hook the new host up to that.
>> I have 2 NICs on the storage host and 2 NICs on the new Ovirt host to
>> dedicate to the iSCSI traffic.
>> I also have 2 separate switches so I'm looking for redundancy here. Both
>> iSCSI host and oVirt host plugged into both switches.
>>
>> If this was non-iSCSI traffic and without oVirt I would create bonded
>> interfaces in active-backup mode and layer the VLANs on top of that.
>>
>> But for iSCSI traffic without oVirt involved I wouldn't bother with a
>> bond and just use multipath.
>>
>>  From scanning the oVirt docs it looks like there is an option to have
>> oVirt configure iSCSI multipathing.
>>
>> So what's the best/most-supported option for oVirt?
>>
>
> oVirt supports only multipath devices, so the best way is to use multipath
> features.
>
>
>> Manually create active-backup bonds so oVirt just sees a single storage
>> link between host and storage?
>>
>
> This will always be on top of multipath device, giving you the same
> capabilities, so why would you want to do that?
>

If the link is not only for storage traffic, then it makes sense to use
bonding.
Active-Active is much better, of course, if possible.
Y.


>
>
>> Or leave them as separate interfaces on each side and use oVirt's
>> multipath/bonding?
>>
>
> Yes.
>
>
>>
>> Also I quite like the idea of using IPv6 for the iSCSI VLAN, purely down
>> to the fact I could use link-local addressing and not have to worry
>> about setting up static IPv4 addresses or DHCP. Is IPv6 iSCSI supported
>> by oVirt?
>>
>> Thanks, Ben
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Renaming or deleting ovirtmgmt

2017-09-28 Thread Michael Burman
When you delete the host, all files remain as they were before. Nothing
changes.
I have tested this flow today and this is the result:

1) For example, I have a host installed with ovirtmgmt as my management
network and I want to change the management network of the cluster or add
the host to a new cluster with a different management network.

2) Follow the steps to create another network as the management network (as
explained in the mail or feature page). Make sure to check the 'default
route' property as well, and make ovirtmgmt a non-required network as well.

3) Remove the host - no need to remove any files or packages

4) Add the host to the new cluster with the new management network.

5) What will happen is that engine will fail to configure the new
management network on the host (and it becomes non-operational), but this
can easily be worked around by going to the setup networks dialog and
switching between the networks, by detaching ovirtmgmt and attaching the new
management network. After pressing OK the new management network will be
saved on the host and the new network configuration will be applied on the
host.

6) After this step the host still remains non-operational; all we need to do
now is 'Activate' the host, and the host becomes operational once again with
the new network changes in place. All files will be updated
successfully on the host.

- So it looks like we are not handling such a scenario smoothly and maybe
this should be fixed.
If we have a host running with one management network and we want to add
this host to a different cluster with a different management network, engine
and vdsm should be able to take care of it during the installation of the
host and successfully make all the required changes.

Dan, Alona, what do you think? Should we handle such a use case, or is this
expected?





On Wed, Sep 27, 2017 at 7:24 PM, Gianluca Cecchi 
wrote:

>
>
> On Wed, Sep 27, 2017 at 4:32 PM, Michael Burman 
> wrote:
>
>> I was referring to situation in which the host was there before, but you
>> need to re-install it to the new cluster with the new management network.
>>
>> No package removing is needed at all. Why would you do steps 2-4? no need.
>> Step 5 should work.
>>
>> I'm not sure i understand what you mean by ' still I cannot remove the
>> default ovirtmgmt network from that cluster'??
>> Do you want to detach it from the cluster?
>> Do you want to remove it from the DC?
>> Note that you can change the management network role only if the cluster
>> has no hosts in it.
>>
>>
>>
> I will try this way.
> Normally in my setups the ip on the mgmt network of the host is also the
> ip corresponding to its hostname
> When I delete a host from engine, what happens in relation to its network
> files in /etc/sysconfig/network-scripts/ directory and the contents of
> /var/lib/vdsm/ directory and its subdirectories?
>
>



-- 

Michael Burman

Quality engineer - rhv network - redhat israel

Red Hat



mbur...@redhat.com    M: 0545355725 IM: mburman

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] iSCSI VLAN host connections - bond or multipath & IPv6

2017-09-28 Thread Nir Soffer
On Wed, Sep 27, 2017 at 11:01 PM Ben Bradley  wrote:

> Hi All
>
> I'm looking to add a new host to my oVirt lab installation.
> I'm going to share out some LVs from a separate box over iSCSI and will
> hook the new host up to that.
> I have 2 NICs on the storage host and 2 NICs on the new Ovirt host to
> dedicate to the iSCSI traffic.
> I also have 2 separate switches so I'm looking for redundancy here. Both
> iSCSI host and oVirt host plugged into both switches.
>
> If this was non-iSCSI traffic and without oVirt I would create bonded
> interfaces in active-backup mode and layer the VLANs on top of that.
>
> But for iSCSI traffic without oVirt involved I wouldn't bother with a
> bond and just use multipath.
>
>  From scanning the oVirt docs it looks like there is an option to have
> oVirt configure iSCSI multipathing.
>
> So what's the best/most-supported option for oVirt?
>

oVirt supports only multipath devices, so the best way is to use multipath
features.
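
In practice that means one iSCSI iface per storage NIC, with multipath
aggregating the resulting sessions. A rough sketch of the manual equivalent
(interface names and the portal address are placeholders; oVirt's iSCSI
multipathing/bond feature drives roughly the same thing for you):

  # one iscsi iface per storage NIC
  iscsiadm -m iface -I isc-em1 --op=new
  iscsiadm -m iface -I isc-em1 --op=update -n iface.net_ifacename -v em1
  iscsiadm -m iface -I isc-em2 --op=new
  iscsiadm -m iface -I isc-em2 --op=update -n iface.net_ifacename -v em2
  # discover and log in through both ifaces, then check the resulting paths
  iscsiadm -m discovery -t st -p 192.168.10.10 -I isc-em1 -I isc-em2
  iscsiadm -m node -L all
  multipath -ll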


> Manually create active-backup bonds so oVirt just sees a single storage
> link between host and storage?
>

This will always be on top of multipath device, giving you the same
capabilities, so why would you want to do that?


> Or leave them as separate interfaces on each side and use oVirt's
> multipath/bonding?
>

Yes.


>
> Also I quite like the idea of using IPv6 for the iSCSI VLAN, purely down
> to the fact I could use link-local addressing and not have to worry
> about setting up static IPv4 addresses or DHCP. Is IPv6 iSCSI supported
> by oVirt?
>
> Thanks, Ben
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Performance of cloning

2017-09-28 Thread Nir Soffer
On Thu, Sep 28, 2017 at 12:03 PM Gianluca Cecchi 
wrote:

> Hello,
> I'm on 4.1.5 and I'm cloning a snapshot of a VM with 3 disks for a total
> of about 200Gb to copy
> The target I choose is on a different domain than the source one.
> They are both FC storage domains, with the source on SSD disks and the
> target on SAS disks.
>
> The disks are preallocated
>
> Now I have 3 processes of kind:
> /usr/bin/qemu-img convert -p -t none -T none -f raw
> /rhev/data-center/59b7af54-0155-01c2-0248-0195/fad05d79-254d-4f40-8201-360757128ede/images/8f62600a-057d-4d59-9655-631f080a73f6/21a8812f-6a89-4015-a79e-150d7e202450
> -O raw
> /rhev/data-center/mnt/blockSD/6911716c-aa99-4750-a7fe-f83675a2d676/images/c3973d1b-a168-4ec5-8c1a-630cfc4b66c4/27980581-5935-4b23-989a-4811f80956ca
>
> but despite capabilities it seems it is copying using very low system
> resources.
>

We run qemu-img convert (and other storage related commands) with:

nice -n 19 ionice -c 3 qemu-img ...

ionice should not have any effect unless you use the CFQ I/O scheduler.

The intent is to limit the effect of virtual machines.
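
You can check which scheduler is in effect per device; the bracketed name is
the active one (sdX/dm-X are placeholders):

  cat /sys/block/sdX/queue/scheduler     # e.g. "noop [deadline] cfq"
  cat /sys/block/dm-X/queue/scheduler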


> I see this both using iotop and vmstat
>
> vmstat 3 gives:
> io -system-- --cpu-
> bibo   in   cs us sy id wa st
> 2527   698 3771 29394  1  0 89 10  0
>

us 94% also seems very high - maybe this hypervisor is overloaded with
other workloads?
wa 89% seems very high


>
>
> iotop -d 5 -k -o -P gives:
>
> Total DISK READ : 472.73 K/s | Total DISK WRITE :17.05 K/s
> Actual DISK READ:1113.23 K/s | Actual DISK WRITE:  55.86 K/s
>   PID  PRIO  USERDISK READ>  DISK WRITE  SWAPIN  IOCOMMAND
>
>  2124 be/4 sanlock   401.39 K/s0.20 K/s  0.00 %  0.00 % sanlock daemon
>  2146 be/4 vdsm   50.96 K/s0.00 K/s  0.00 %  0.00 % python
> /usr/share/o~a-broker --no-daemon
> 30379 be/0 root7.06 K/s0.00 K/s  0.00 % 98.09 % lvm vgck
> --config  ~50-a7fe-f83675a2d676
> 30380 be/0 root4.70 K/s0.00 K/s  0.00 % 98.09 % lvm lvchange
> --conf~59-b931-4eb61e43b56b
> 30381 be/0 root4.70 K/s0.00 K/s  0.00 % 98.09 % lvm lvchange
> --conf~83675a2d676/metadata
> 30631 be/0 root3.92 K/s0.00 K/s  0.00 % 98.09 % lvm vgs
> --config  d~f6-9466-553849aba5e9
>  2052 be/3 root0.00 K/s2.35 K/s  0.00 %  0.00 % [jbd2/dm-34-8]
>  6458 be/4 qemu0.00 K/s4.70 K/s  0.00 %  0.00 % qemu-kvm -name
> gues~x7 -msg timestamp=on
>  2064 be/3 root0.00 K/s0.00 K/s  0.00 %  0.00 % [jbd2/dm-32-8]
>  2147 be/4 root0.00 K/s4.70 K/s  0.00 %  0.00 % rsyslogd -n
>  9145 idle vdsm0.00 K/s0.59 K/s  0.00 % 24.52 % qemu-img
> convert -p~23-989a-4811f80956ca
> 13313 be/4 root0.00 K/s0.00 K/s  0.00 %  0.00 %
> [kworker/u112:3]
>  9399 idle vdsm0.00 K/s0.59 K/s  0.00 % 24.52 % qemu-img
> convert -p~51-9c8c-8d9aaa7e8f58
>

0.59 K/s seems extremely low, I don't expect such value.


>  1310 ?dif root0.00 K/s0.00 K/s  0.00 %  0.00 % multipathd
>  3996 be/4 vdsm0.00 K/s0.78 K/s  0.00 %  0.00 % python
> /usr/sbin/mo~c /etc/vdsm/mom.conf
>  6391 be/4 root0.00 K/s0.00 K/s  0.00 %  0.00 %
> [kworker/u112:0]
>  2059 be/3 root0.00 K/s3.14 K/s  0.00 %  0.00 % [jbd2/dm-33-8]
>
> Is it expected? Any way to speed up the process?
>

I would try to perform the same copy from the shell, without ionice
and nice, and see if this improves the times.
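
For example (just a sketch; use the real volume paths from above and write to
a scratch volume so nothing in use gets overwritten):

  time qemu-img convert -p -t none -T none -f raw /rhev/.../SRC_VOLUME -O raw /rhev/.../DST_VOLUME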

Can you do a test with a small image (e.g. 10g), running qemu-img with strace?

strace -f -o qemu-img.strace qemu-img convert \
   -p \
   -t none \
   -T none \
   -f raw \
   /dev/fad05d79-254d-4f40-8201-360757128ede/ \
   -O raw \
   /dev/6911716c-aa99-4750-a7fe-f83675a2d676/

and share the trace?

Also version info (kernel, qemu) would be useful.

Adding Kevin.

Nir
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Performance of cloning

2017-09-28 Thread Gianluca Cecchi
On Thu, Sep 28, 2017 at 11:02 AM, Gianluca Cecchi  wrote:

> Hello,
> I'm on 4.1.5 and I'm cloning a snapshot of a VM with 3 disks for a total
> of about 200Gb to copy
> The target I choose is on a different domain than the source one.
> They are both FC storage domains, with the source on SSD disks and the
> target on SAS disks.
>
> The disks are preallocated
>
> Now I have 3 processes of kind:
> /usr/bin/qemu-img convert -p -t none -T none -f raw
> /rhev/data-center/59b7af54-0155-01c2-0248-0195/
> fad05d79-254d-4f40-8201-360757128ede/images/8f62600a-
> 057d-4d59-9655-631f080a73f6/21a8812f-6a89-4015-a79e-150d7e202450 -O raw
> /rhev/data-center/mnt/blockSD/6911716c-aa99-4750-a7fe-
> f83675a2d676/images/c3973d1b-a168-4ec5-8c1a-630cfc4b66c4/
> 27980581-5935-4b23-989a-4811f80956ca
>
> but despite capabilities it seems it is copying using very low system
> resources.
>
> [snip]

>
> Is it expected? Any way to speed up the process?
>
> Thanks,
> Gianluca
>

The cloning process elapsed time was 101 minutes
The 3 disks are 85Gb, 20Gb and 80Gb so at the end an average of 30MB/s

At this moment I have only one host with the self-hosted engine VM running in
this environment, planning to add another host in a short time.
So I have not yet configured power management for fencing on it
During the cloning I saw these kind of events

Sep 28, 2017 10:31:30 AM VM vmclone1 creation was initiated by
admin@internal-authz.
Sep 28, 2017 11:16:38 AM VDSM command SetVolumeDescriptionVDS failed:
Message timeout which can be caused by communication issues

Sep 28, 2017 11:19:43 AM VDSM command SetVolumeDescriptionVDS failed:
Message timeout which can be caused by communication issues
Sep 28, 2017 11:19:43 AM Failed to update OVF disks
1504a878-4fe2-40df-a88f-6f073be0bd7b, 4ddac3ed-2bb9-485c-bf57-1750ac1fd761,
OVF data isn't updated on those OVF stores (Data Center DC1, Storage Domain
SDTEST).
At 11:24 I then start a pre-existing VM named benchvm and run a cpu / I/O
benchmark (HammerDB with 13 concurrent users; the VM is configured with 12
vcpus (1:6:2) and 64Gb of ram; it is not the one I'm cloning) that runs
from 11:40 to 12:02
Sep 28, 2017 11:24:29 AM VM benchvm started on Host host1
Sep 28, 2017 11:45:18 AM Host host1 is not responding. Host cannot be
fenced automatically because power management for the host is disabled.
Sep 28, 2017 11:45:28 AM Failed to update OVF disks
1504a878-4fe2-40df-a88f-6f073be0bd7b, 4ddac3ed-2bb9-485c-bf57-1750ac1fd761,
OVF data isn't updated on those OVF stores (Data Center DC1, Storage Domain
SDTEST).
Sep 28, 2017 11:45:39 AM Status of host host1 was set to Up.
Sep 28, 2017 12:12:31 PM VM vmclone1 creation has been completed.

Any hint on the failures detected, both when only the cloning process was
in place and when a bench was running inside a VM?

Thanks,
Gianluca
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] changing ip of host and its ovirtmgmt vlan

2017-09-28 Thread Michael Burman
This is a good question and, to be honest, I'm really not sure what will
happen, as we have never tested a scenario in which the IPs of the hosts
change but the original hostnames don't.

Alona, Dan, can you please share your insights here?

Thanks,

On Wed, Sep 27, 2017 at 5:48 PM, Gianluca Cecchi 
wrote:

> On Wed, Sep 27, 2017 at 4:25 PM, Michael Burman 
> wrote:
>
>> Hello Gianluca,
>>
>> Not sure I fully understood, but if the host's IP has changed, and the VLAN
>> too, then the correct flow will be:
>>
>> 1) Remove the host
>> 2) Edit the management network with vlan tag - the vlan you need/want
>> 3) Add/install the host - make sure you are using the correct/new IP (if
>> using IP) or the correct FQDN (if it has changed).
>>
>> Note that doing things manually on the host, such as changing ovirtmgmt's
>> configuration without the engine or vdsm, may cause problems and the changes
>> will not persist across reboots. If the host's IP or its FQDN has changed,
>> you must install the host again.
>>
>> Cheers)
>>
>
> Original situation was:
>
> 1 DC
> 1 Cluster: CLA with 2 hosts: host1 and host2
> ovirtmgmt defined on vlan10
>
> engine is an external server on VLAN5 that can reach VLAN10 of hosts
> So far so good
>
> Need to add another host that is in another physical server room. Here
> VLAN10 is not present, so I cannot set ovirtmgmt
> If I understand correctly, the VLAN assigned to ovirtmgmt is a DC
> property: I cannot have different vlans assigned to ovirtmgmt in different
> clusters of the same DC, correct?
>
> So the path:
> Create a second cluster CLB and define on it the logical network
> ovirtmgmt2 on VLAN20 and set it as the mgmt network for that cluster
> Add the new host host3 to CLB.
>
> So far so good: the engine on VLAN5 is able to manage the hosts of CLA ad
> CLB with their mgmt networks in VLAN10 and VLAN20
>
> Now it is decided to create a new VLAN30 that is transportable across the
> 2 physical locations and to have host1, host2 and host3 be part of a new
> CLC cluster where the mgmt network is now on VLAN30
>
> Can I simplify operations, as many VMs are already in place in CLA and CLB?
> So the question arises:
>
> the 3 hosts were added originally using their dns hostname and not their
> IP address.
> Can I change my dns settings so that the engine resolves the hostnames
> with the new IPs and change vlan of ovirtmgmt?
>
> And if I decide to start from scratch with this new cluster CLC on VLAN30,
> can I retain my old 3 hostnames (resolving to their new IPs)? How?
>
> Hope I was able to clarify a bit the scenario
>
> Gianluca
>
>
>
>


-- 

Michael Burman

Quality Engineer - RHV Network - Red Hat Israel

Red Hat



mbur...@redhat.com    M: 0545355725    IM: mburman

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Performance of cloning

2017-09-28 Thread Gianluca Cecchi
Hello,
I'm on 4.1.5 and I'm cloning a snapshot of a VM with 3 disks for a total of
about 200Gb to copy
The target I choose is on a different domain than the source one.
They are both FC storage domains, with the source on SSD disks and the
target on SAS disks.

The disks are preallocated

Now I have 3 processes of kind:
/usr/bin/qemu-img convert -p -t none -T none -f raw
/rhev/data-center/59b7af54-0155-01c2-0248-0195/fad05d79-254d-4f40-8201-360757128ede/images/8f62600a-057d-4d59-9655-631f080a73f6/21a8812f-6a89-4015-a79e-150d7e202450
-O raw
/rhev/data-center/mnt/blockSD/6911716c-aa99-4750-a7fe-f83675a2d676/images/c3973d1b-a168-4ec5-8c1a-630cfc4b66c4/27980581-5935-4b23-989a-4811f80956ca

but despite capabilities it seems it is copying using very low system
resources.
I see this both using iotop and vmstat

vmstat 3 gives:
-----io---- --system-- ------cpu-----
   bi    bo    in    cs us sy id wa st
 2527   698  3771 29394  1  0 89 10  0


iotop -d 5 -k -o -P gives:

Total DISK READ :   472.73 K/s | Total DISK WRITE :   17.05 K/s
Actual DISK READ:  1113.23 K/s | Actual DISK WRITE:   55.86 K/s
  PID  PRIO  USER      DISK READ>  DISK WRITE  SWAPIN     IO    COMMAND

 2124 be/4 sanlock   401.39 K/s0.20 K/s  0.00 %  0.00 % sanlock daemon
 2146 be/4 vdsm   50.96 K/s0.00 K/s  0.00 %  0.00 % python
/usr/share/o~a-broker --no-daemon
30379 be/0 root7.06 K/s0.00 K/s  0.00 % 98.09 % lvm vgck
--config  ~50-a7fe-f83675a2d676
30380 be/0 root4.70 K/s0.00 K/s  0.00 % 98.09 % lvm lvchange
--conf~59-b931-4eb61e43b56b
30381 be/0 root4.70 K/s0.00 K/s  0.00 % 98.09 % lvm lvchange
--conf~83675a2d676/metadata
30631 be/0 root3.92 K/s0.00 K/s  0.00 % 98.09 % lvm vgs
--config  d~f6-9466-553849aba5e9
 2052 be/3 root0.00 K/s2.35 K/s  0.00 %  0.00 % [jbd2/dm-34-8]
 6458 be/4 qemu0.00 K/s4.70 K/s  0.00 %  0.00 % qemu-kvm -name
gues~x7 -msg timestamp=on
 2064 be/3 root0.00 K/s0.00 K/s  0.00 %  0.00 % [jbd2/dm-32-8]
 2147 be/4 root0.00 K/s4.70 K/s  0.00 %  0.00 % rsyslogd -n
 9145 idle vdsm0.00 K/s0.59 K/s  0.00 % 24.52 % qemu-img
convert -p~23-989a-4811f80956ca
13313 be/4 root0.00 K/s0.00 K/s  0.00 %  0.00 % [kworker/u112:3]
 9399 idle vdsm0.00 K/s0.59 K/s  0.00 % 24.52 % qemu-img
convert -p~51-9c8c-8d9aaa7e8f58
 1310 ?dif root0.00 K/s0.00 K/s  0.00 %  0.00 % multipathd
 3996 be/4 vdsm0.00 K/s0.78 K/s  0.00 %  0.00 % python
/usr/sbin/mo~c /etc/vdsm/mom.conf
 6391 be/4 root0.00 K/s0.00 K/s  0.00 %  0.00 % [kworker/u112:0]
 2059 be/3 root0.00 K/s3.14 K/s  0.00 %  0.00 % [jbd2/dm-33-8]

Is it expected? Any way to speed up the process?
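For comparison, raw LV-to-LV throughput on the same hardware could be
sanity-checked outside qemu-img with a direct-I/O dd (the paths below are just
placeholders for scratch LVs, not anything in use):

dd if=/dev/<source_vg>/<scratch_lv> of=/dev/<target_vg>/<scratch_lv> \
   bs=1M count=10240 iflag=direct oflag=direct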

Thanks,
Gianluca
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Is this guide still valid?data-warehouse

2017-09-28 Thread Arman Khalatyan
thank you for explaining,
looks nice, I'll give it a try. :)


On Thu, Sep 28, 2017 at 9:35 AM, Yaniv Kaul  wrote:
>
>
>
> On Wed, Sep 27, 2017 at 9:00 PM, Arman Khalatyan  wrote:
>>
>> are there any reason to use here the openshift?
>> what is the role of the openshift in the whole software stack??
>
>
> Container orchestration platform. As all the common logging and metrics 
> packages are these days already delivered as containers, it made sense to run 
> them as such, on an enterprise platform.
>
> Note that you can easily deploy OpenShift on oVirt. See[1]. And we are 
> continuing the efforts to improve the integration.
> Y.
>
> [1]  
> https://github.com/openshift/openshift-ansible-contrib/tree/master/reference-architecture/rhv-ansible
>>
>>
>>
>> Am 27.09.2017 3:31 nachm. schrieb "Shirly Radco" :
>>>
>>>
>>>
>>> --
>>>
>>> SHIRLY RADCO
>>>
>>> BI SOFTWARE ENGINEER
>>>
>>> Red Hat Israel
>>>
>>> TRIED. TESTED. TRUSTED.
>>>
>>> On Wed, Sep 27, 2017 at 4:26 PM, Arman Khalatyan  wrote:

 Thank you for clarification,
 So in the future you are going to push everything to kibana as a storage, 
 what about dashboards or some kind of reports views.
 Are you going to provide some reports templates as before in dwh in 3.0.6 
 eg heatmaps etc..?
>>>
>>>
>>> Templates for monitoring. Yes.
>>>

 From the 
 https://www.ovirt.org/develop/release-management/features/metrics/metrics-store-installation/
  one need to install OpenShift+kibana, Why then openshift not Ovirt??
>>>
>>>
>>> Openshift will run elasticsearch, fluentd, kibana, curator.
>>> This is the platform that was chosen as the common metrics and logging 
>>> solution at this point.
>>>


 On Wed, Sep 27, 2017 at 9:56 AM, Shirly Radco  wrote:
>
> Hello Arman,
>
> Reports was deprecated in 4.0.
>
> DWH is now installed by default with oVirt engine.
> You can refer to https://www.ovirt.org/documentation/how-to/reports/dwh/
>
> You can change its scale to save longer period of time if you want
> https://www.ovirt.org/documentation/data-warehouse/Changing_the_Data_Warehouse_Sampling_Scale/
> and attach a reports solution that supports sql.
>
> I'll update the docs with the information about reports and fix the links.
>
> Thank you for reaching out on this issue.
>
> We are currently also working on adding oVirt Metrics solution that you 
> can read about at
> https://www.ovirt.org/develop/release-management/features/metrics/metrics-store/
> It is still in development stages.
>
> Best regards,
>
> --
>
> SHIRLY RADCO
>
> BI SOFTWARE ENGINEER
>
> Red Hat Israel
>
> TRIED. TESTED. TRUSTED.
>
> On Mon, Sep 25, 2017 at 11:39 AM, Arman Khalatyan  
> wrote:
>>
>> Dear Ovirt documents maintainers is this document still valid?
>> https://www.ovirt.org/documentation/data-warehouse/Data_Warehouse_Guide/
>> When I go one level up it is bringing an empty page:
>> https://www.ovirt.org/documentation/data-warehouse/
>>
>> Thanks,
>> Arman.
>>
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>

>>>
>>
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] libvirt: XML-RPC error : authentication failed: Failed to start SASL

2017-09-28 Thread Yaniv Kaul
On Wed, Sep 27, 2017 at 7:01 PM, VONDRA Alain  wrote:

> Hello,
>
> I have exactly the same problem after an upgrade from CentOS 7.3 to 7.4,
> but I don’t want to plan now the migration to oVirt 4.x.
>
> Can you help me to correct the bug and keep oVirt 3.6 for a few months ?
>
> It really seems to be a modification in libvirt authentication because
> when I comment out
>
> #auth_unix_rw="sasl"
>
> in libvirtd.conf, libvirtd starts but my Host is still unresponsive in
> oVirt.
>
> My production environment  is running on a single Hypervisor and I need
> the second one.
>
> Thanks
>

The fix is [1]. I suppose you need to change:
mech_list: scram-sha-1
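
A rough sketch of doing that by hand on the host (assuming the stock
/etc/sasl2/libvirt.conf and that vdsm's SASL user was already created by
vdsm-tool configure):

sed -i 's/^mech_list:.*/mech_list: scram-sha-1/' /etc/sasl2/libvirt.conf
systemctl restart libvirtd vdsmd

If authentication still fails after that, recreating the vdsm@ovirt SASL
credentials may also be needed, e.g. with: saslpasswd2 -a libvirt vdsm@ovirt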

Y.

[1] https://gerrit.ovirt.org/#/c/76934/

>
>
>
>
> From: users-boun...@ovirt.org [mailto:users-boun...@ovirt.org] On behalf of
> Yaniv Kaul
> Sent: Tuesday, 19 September 2017 13:36
> To: Ozan Uzun
> Cc: Ovirt Users
> Subject: Re: [ovirt-users] libvirt: XML-RPC error : authentication
> failed: Failed to start SASL
>
>
>
>
>
>
>
> On Tue, Sep 19, 2017 at 12:24 PM, Ozan Uzun  wrote:
>
> --
>
> Alain VONDRA
> Chargé d'Exploitation et de Sécurité des Systèmes d'Information
> Direction Administrative et Financière
> +33 1 44 39 77 76
>
> UNICEF France
> 3 rue Duguay Trouin
> 75006 PARIS
> www.unicef.fr
>
> --
> 
>
> After hours of struggle, I removed all the hosts.
>
> Installed a fresh centos 6.x on a host. Now it works like a charm.
>
>
>
> I will install a fresh ovirt 4.x, and start migration my vm's on new
> centos 7.4 hosts.
>
>
>
> The only supported way seems exporting/importing vm's for different ovirt
> engines. I wish  I had plain  qcow2 images to copy...
>
>
>
>
>
> You could detach and attach a whole storage domain.
>
> Y.
>
>
>
>
>
> On Tue, 19 Sep 2017 at 10:18, Yaniv Kaul  wrote:
>
> On Mon, Sep 18, 2017 at 11:47 PM, Ozan Uzun  wrote:
>
> Hello,
>
> Today I updated my ovirt engine v3.5 and all my hosts on one datacenter
> (centos 7.4 ones).
>
>
>
> You are mixing an ancient release (oVirt 3.5) with the latest CentOS. This
> is not supported at best, and who knows if it works.
>
>
>
> and suddenly  my vdsm and vdsm-network  services stopped working.
>
> btw: My other DC is centos 6 based (managed from the same ovirt engine),
> everything works just fine there.
>
>
>
> vdsm fails dependent on vdsm-network service, with lots of RPC error.
>
> I tried to configure vdsm-tool configure --force, deleted everything
> (vdsm-libvirt), reinstalled.
>
> Could not make it work.
>
> My logs are filled with the follogin
>
> Sep 18 23:06:01 node6 python[5340]: GSSAPI Error: Unspecified GSS
> failure.  Minor code may provide more information (No Kerberos credentials
> available (default cache: KEYRING:persistent:0))
>
>
>
> This may sound like a change that happened in libvirt authentication,
> which we've adjusted to in oVirt 4.1.5 (specifically VDSM) I believe.
>
> Y.
>
>
>
> Sep 18 23:06:01 node6 vdsm-tool[5340]: libvirt: XML-RPC error :
> authentication failed: Failed to start SASL negotiation: -1 (SASL(-1):
> generic failure: GSSAPI Error: Unspecified GSS failure.  Minor code may
> provide more information (No Kerberos credent
> Sep 18 23:06:01 node6 libvirtd[4312]: 2017-09-18 20:06:01.954+: 4312:
> error : virNetSocketReadWire:1808 : End of file while reading data:
> Input/output error
>
> ---
>
> journalctl -xe output for vdsm-network
>
>
> Sep 18 23:06:02 node6 vdsm-tool[5340]: libvirt: XML-RPC error :
> authentication failed: Failed to start SASL negotiation: -1 (SASL(-1):
> generic failure: GSSAPI Error: Unspecified GSS failure.  Minor code may
> provide more information (No Kerberos credent
> Sep 18 23:06:02 node6 vdsm-tool[5340]: Traceback (most recent call last):
> Sep 18 23:06:02 node6 vdsm-tool[5340]: File "/usr/bin/vdsm-tool", line
> 219, in main
> Sep 18 23:06:02 node6 libvirtd[4312]: 2017-09-18 20:06:02.558+: 4312:
> error : virNetSocketReadWire:1808 : End of file while reading data:
> Input/output error
> Sep 18 23:06:02 node6 vdsm-tool[5340]: return
> tool_command[cmd]["command"](*args)
> Sep 18 23:06:02 node6 vdsm-tool[5340]: File "/usr/lib/python2.7/site-
> packages/vdsm/tool/upgrade_300_networks.py", line 83, in upgrade_networks
> Sep 18 23:06:02 node6 vdsm-tool[5340]: networks = netinfo.networks()
> Sep 18 23:06:02 node6 vdsm-tool[5340]: File 
> "/usr/lib/python2.7/site-packages/vdsm/netinfo.py",
> line 112, in networks
> Sep 18 23:06:02 

Re: [ovirt-users] Is this guide still valid?data-warehouse

2017-09-28 Thread Yaniv Kaul
On Wed, Sep 27, 2017 at 9:00 PM, Arman Khalatyan  wrote:

> are there any reason to use here the openshift?
> what is the role of the openshift in the whole software stack??
>

Container orchestration platform. As all the common logging and metrics
packages are these days already delivered as containers, it made sense to
run them as such, on an enterprise platform.

Note that you can easily deploy OpenShift on oVirt. See[1]. And we are
continuing the efforts to improve the integration.
Y.

[1]
https://github.com/openshift/openshift-ansible-contrib/tree/master/reference-architecture/rhv-ansible
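
For a quick look at what that involves, the playbooks can simply be cloned,
e.g.:

git clone https://github.com/openshift/openshift-ansible-contrib.git
cd openshift-ansible-contrib/reference-architecture/rhv-ansible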

>
>
> Am 27.09.2017 3:31 nachm. schrieb "Shirly Radco" :
>
>>
>>
>> --
>>
>> SHIRLY RADCO
>>
>> BI SOFTWARE ENGINEER
>>
>> Red Hat Israel 
>> 
>> TRIED. TESTED. TRUSTED. 
>>
>> On Wed, Sep 27, 2017 at 4:26 PM, Arman Khalatyan 
>> wrote:
>>
>>> Thank you for clarification,
>>> So in the future you are going to push everything to kibana as a
>>> storage, what about dashboards or some kind of reports views.
>>> Are you going to provide some reports templates as before in dwh in
>>> 3.0.6 eg heatmaps etc..?
>>>
>>
>> Templates for monitoring. Yes.
>>
>>
>>> From the https://www.ovirt.org/develop/release-management/feature
>>> s/metrics/metrics-store-installation/ one need to install
>>> OpenShift+kibana, Why then openshift not Ovirt??
>>>
>>
>> Openshift will run elasticsearch, fluentd, kibana, curator.
>> This is the platform that was chosen as the common metrics and logging
>> solution at this point.
>>
>>
>>>
>>> On Wed, Sep 27, 2017 at 9:56 AM, Shirly Radco  wrote:
>>>
 Hello Arman,

 Reports was deprecated in 4.0.

 DWH is now installed by default with oVirt engine.
 You can refer to https://www.ovirt.org/docum
 entation/how-to/reports/dwh/

 You can change its scale to save longer period of time if you want
 https://www.ovirt.org/documentation/data-warehouse/Changing_
 the_Data_Warehouse_Sampling_Scale/
 and attach a reports solution that supports sql.

 I'll update the docs with the information about reports and fix the
 links.

 Thank you for reaching out on this issue.

 We are currently also working on adding oVirt Metrics solution that you
 can read about at
 https://www.ovirt.org/develop/release-management/features/me
 trics/metrics-store/
 It is still in development stages.

 Best regards,

 --

 SHIRLY RADCO

 BI SOFTWARE ENGINEER

 Red Hat Israel 
 
 TRIED. TESTED. TRUSTED. 

 On Mon, Sep 25, 2017 at 11:39 AM, Arman Khalatyan 
 wrote:

> Dear Ovirt documents maintainers is this document still valid?
> https://www.ovirt.org/documentation/data-warehouse/Data_Ware
> house_Guide/
> When I go one level up it is bringing an empty page:
> https://www.ovirt.org/documentation/data-warehouse/
>
> Thanks,
> Arman.
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>

>>>
>>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] iSCSI VLAN host connections - bond or multipath & IPv6

2017-09-28 Thread Yaniv Kaul
On Wed, Sep 27, 2017 at 10:59 PM, Ben Bradley  wrote:

> Hi All
>
> I'm looking to add a new host to my oVirt lab installation.
> I'm going to share out some LVs from a separate box over iSCSI and will
> hook the new host up to that.
> I have 2 NICs on the storage host and 2 NICs on the new Ovirt host to
> dedicate to the iSCSI traffic.
> I also have 2 separate switches so I'm looking for redundancy here. Both
> iSCSI host and oVirt host plugged into both switches.
>
> If this was non-iSCSI traffic and without oVirt I would create bonded
> interfaces in active-backup mode and layer the VLANs on top of that.
>
> But for iSCSI traffic without oVirt involved I wouldn't bother with a bond
> and just use multipath.
>
> From scanning the oVirt docs it looks like there is an option to have
> oVirt configure iSCSI multipathing.
>

Look for iSCSI bonding - that's the feature you are looking for.
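
Once it is set up, a quick way to verify from the host side, with the standard
iscsiadm/multipath tooling rather than anything oVirt-specific, is something
like:

iscsiadm -m session
multipath -ll

where you would expect one iSCSI session per storage NIC and both paths listed
under each multipath device.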



>
> So what's the best/most-supported option for oVirt?
> Manually create active-backup bonds so oVirt just sees a single storage
> link between host and storage?
> Or leave them as separate interfaces on each side and use oVirt's
> multipath/bonding?
>
> Also I quite like the idea of using IPv6 for the iSCSI VLAN, purely down
> to the fact I could use link-local addressing and not have to worry about
> setting up static IPv4 addresses or DHCP. Is IPv6 iSCSI supported by oVirt?
>

No, we do not. There has been some work in the area[1], but I'm not sure it
is complete.
Y.

[1]
https://gerrit.ovirt.org/#/q/status:merged+project:vdsm+branch:master+topic:ipv6-iscsi-target-support


>
> Thanks, Ben
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Failure while using ovirt-image-template role

2017-09-28 Thread Yaniv Kaul
On Thu, Sep 28, 2017 at 1:23 AM, Marc Seward  wrote:

> Hi,
>
> I'm trying to use the ovirt-image-template role to import a Glance image
> as a template into ovirt and I'm running into this error with
> python-ovirt-engine-sdk4-4.1.6-1.el7ev.x86_64
>
> I'd appreciate any pointers.
>
>
> TASK [ovirt.ovirt-ansible-roles/roles/ovirt-image-template : Find data
> domain] 
> 
> task path: /etc/ansible/roles/ovirt.ovirt-ansible-roles/roles/
> ovirt-image-template/tasks/glance_image.yml:21
> fatal: [localhost]: FAILED! => {
> "failed": true,
> "msg": "You need to install \"jmespath\" prior to running json_query
> filter"
>

I suggest you follow the advice and install the python-jmespath package.
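For example (the RPM name may differ per repo; pip works as well, as long as
it lands in the Python that Ansible uses):

yum install python-jmespath
# or
pip install jmespath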
Y.


> }
>
> TASK [ovirt.ovirt-ansible-roles/roles/ovirt-image-template : Logout from
> oVirt] ***
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users