Re: [ovirt-users] 10Gb Networking with OVN

2017-12-11 Thread Dominik Holler
Is there an indication that the VMs will not take advantage of 10Gb?

On Thu, 7 Dec 2017 15:27:25 -0700
Beau Sapach  wrote:

> Hello everyone,
> 
> I see here:
> https://www.ovirt.org/blog/2017/09/introducing-ovirt-4.2.0/ that
> version 4.2 will have OVN support.  Does anyone know if this will
> allow VMs to take advantage of 10Gb networking without needing SR-IOV?
> 
> 

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Debugging why hosted engine flips between EngineUp and EngineBadHealth

2017-12-11 Thread Luca 'remix_tj' Lorenzetto
Hi Martin, Hi all,

*Some minutes* have passed and I have the piece of log I was looking at.

​
[Attachment: broker.log-upbadup]

​

This morning I got a notice about the EngineBadHealth/EngineUp flip. I'm not
able to identify anything that could have caused it, because up to a few
seconds before the bad health report everything looks fine... Do you notice
anything strange?
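
For reference, the filtering Martin suggested can be done with something like
this (the path below is the default broker log location for
ovirt-hosted-engine-ha; adjust if yours differs):

  # keep the engine health monitor lines, drop the noisy (dis)connected ones
  grep -i 'engine' /var/log/ovirt-hosted-engine-ha/broker.log | grep -iv 'connected'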

Thank you,

Luca


2017-12-04 12:00 GMT+01:00 Luca 'remix_tj' Lorenzetto <
lorenzetto.l...@gmail.com>:

> On Mon, Dec 4, 2017 at 9:31 AM, Martin Sivak  wrote:
> > Hi,
> >
> > please attach the log. You can grep out the connected / disconnected
> lines.
> >
> > Look for engine health monitor lines.
> >
> > Martin
>
>
> The log is quite big (about 1.5 GB). I'm filtering out the messages around
> the last report of the EngineBadHealth <-> EngineUp flip.
>
> I'll upload in some minutes.
>
> Luca
>
>
> --
> "E' assurdo impiegare gli uomini di intelligenza eccellente per fare
> calcoli che potrebbero essere affidati a chiunque se si usassero delle
> macchine"
> Gottfried Wilhelm von Leibnitz, Filosofo e Matematico (1646-1716)
>
> "Internet è la più grande biblioteca del mondo.
> Ma il problema è che i libri sono tutti sparsi sul pavimento"
> John Allen Paulos, Matematico (1945-vivente)
>
> Luca 'remix_tj' Lorenzetto, http://www.remixtj.net , <
> lorenzetto.l...@gmail.com>
>



-- 
"E' assurdo impiegare gli uomini di intelligenza eccellente per fare
calcoli che potrebbero essere affidati a chiunque se si usassero delle
macchine"
Gottfried Wilhelm von Leibnitz, Filosofo e Matematico (1646-1716)

"Internet è la più grande biblioteca del mondo.
Ma il problema è che i libri sono tutti sparsi sul pavimento"
John Allen Paulos, Matematico (1945-vivente)

Luca 'remix_tj' Lorenzetto, http://www.remixtj.net , <
lorenzetto.l...@gmail.com>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] [ANN] oVirt 4.1.8 Release is now available

2017-12-11 Thread Sandro Bonazzola
The oVirt Project is pleased to announce the availability of the oVirt
4.1.8 release, as of December 11th, 2017

This update is the eighth in a series of stabilization updates to the 4.1
series.

This release is available now for:
* Red Hat Enterprise Linux 7.4 or later
* CentOS Linux (or similar) 7.4 or later

This release supports Hypervisor Hosts running:
* Red Hat Enterprise Linux 7.4 or later
* CentOS Linux (or similar) 7.4 or later
* oVirt Node 4.1

See the release notes [1] for installation / upgrade instructions and
a list of new features and bugs fixed.

Notes:
- oVirt Appliance is already available
- oVirt Live is already available [2]
- oVirt Node will be available soon [2]

Additional Resources:
* Read more about the oVirt 4.1.8 release highlights:
http://www.ovirt.org/release/4.1.8/
* Get more oVirt Project updates on Twitter: https://twitter.com/ovirt
* Check out the latest project news on the oVirt blog:
http://www.ovirt.org/blog/

[1] http://www.ovirt.org/release/4.1.8/
[2] http://resources.ovirt.org/pub/ovirt-4.1/iso/


-- 

SANDRO BONAZZOLA

ASSOCIATE MANAGER, SOFTWARE ENGINEERING, EMEA ENG VIRTUALIZATION R&D

Red Hat EMEA 

TRIED. TESTED. TRUSTED. 
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] 4-2rc hosted-engine don't boot error:cannot allocate kernel buffer

2017-12-11 Thread Maton, Brett
Hi Roberto, can you check how much RAM is allocated to the HE VM?


virsh -c qemu:///system?authfile=/etc/ovirt-hosted-engine/virsh_auth.conf

virsh # dominfo HostedEngine


The last update I did seems to have changed the HE RAM from 4GB to 4MB!


On 11 December 2017 at 09:08, Simone Tiraboschi  wrote:

>
>
> On Mon, Dec 11, 2017 at 9:47 AM, Roberto Nunin  wrote:
>
>> Hello all
>>
>> During the weekend I re-tried to deploy my 4.2_rc lab.
>> Everything was fine, apart from the fact that hosts 2 and 3 weren't
>> imported. I had to add them to the cluster manually, with the NEW function.
>> After this, the Gluster volumes were added fine to the environment.
>>
>> Next, the engine deploy on nodes 2 and 3 ended with OK status.
>>
>> Trying to migrate the HE from host 1 to host 2 was fine, the same from
>> host 2 to host 3.
>>
>> After these two attempts, there was no way to migrate the HE back to any
>> host. I tried setting maintenance mode to global and rebooting the HE, and
>> now I'm in the same condition reported below, no longer able to boot the HE.
>>
>> Here's hosted-engine --vm-status:
>>
>> !! Cluster is in GLOBAL MAINTENANCE mode !!
>>
>>
>>
>> --== Host 1 status ==--
>>
>> conf_on_shared_storage : True
>> Status up-to-date  : True
>> Hostname   : aps-te61-mng.example.com
>> Host ID: 1
>> Engine status  : {"reason": "vm not running on this
>> host", "health": "bad", "vm": "down", "detail": "unknown"}
>> Score  : 3400
>> stopped: False
>> Local maintenance  : False
>> crc32  : 7dfc420b
>> local_conf_timestamp   : 181953
>> Host timestamp : 181952
>> Extra metadata (valid at timestamp):
>> metadata_parse_version=1
>> metadata_feature_version=1
>> timestamp=181952 (Mon Dec 11 09:21:46 2017)
>> host-id=1
>> score=3400
>> vm_conf_refresh_time=181953 (Mon Dec 11 09:21:47 2017)
>> conf_on_shared_storage=True
>> maintenance=False
>> state=GlobalMaintenance
>> stopped=False
>>
>>
>> --== Host 2 status ==--
>>
>> conf_on_shared_storage : True
>> Status up-to-date  : True
>> Hostname   : aps-te64-mng.example.com
>> Host ID: 2
>> Engine status  : {"reason": "vm not running on this
>> host", "health": "bad", "vm": "down", "detail": "unknown"}
>> Score  : 3400
>> stopped: False
>> Local maintenance  : False
>> crc32  : 67c7dd1d
>> local_conf_timestamp   : 181946
>> Host timestamp : 181946
>> Extra metadata (valid at timestamp):
>> metadata_parse_version=1
>> metadata_feature_version=1
>> timestamp=181946 (Mon Dec 11 09:21:49 2017)
>> host-id=2
>> score=3400
>> vm_conf_refresh_time=181946 (Mon Dec 11 09:21:49 2017)
>> conf_on_shared_storage=True
>> maintenance=False
>> state=GlobalMaintenance
>> stopped=False
>>
>>
>> --== Host 3 status ==--
>>
>> conf_on_shared_storage : True
>> Status up-to-date  : True
>> Hostname   : aps-te68-mng.example.com
>> Host ID: 3
>> Engine status  : {"reason": "failed liveliness
>> check", "health": "bad", "vm": "up", "detail": "Up"}
>> Score  : 3400
>> stopped: False
>> Local maintenance  : False
>> crc32  : 4daea041
>> local_conf_timestamp   : 181078
>> Host timestamp : 181078
>> Extra metadata (valid at timestamp):
>> metadata_parse_version=1
>> metadata_feature_version=1
>> timestamp=181078 (Mon Dec 11 09:21:53 2017)
>> host-id=3
>> score=3400
>> vm_conf_refresh_time=181078 (Mon Dec 11 09:21:53 2017)
>> conf_on_shared_storage=True
>> maintenance=False
>> state=GlobalMaintenance
>> stopped=False
>>
>>
>> !! Cluster is in GLOBAL MAINTENANCE mode !!
>>
>> (it is in global maintenance to avoid messages to be sent to admin
>> mailbox).
>>
>
> As soon as you exit the global maintenance mode, one of the hosts should
> take care of automatically restarting the engine VM within a couple of
> minutes.
>
> If you want to manually start the engine VM on a specific host while in
> maintenance mode, you can run:
> hosted-engine --vm-start
> on that specific host.
>
>
>>
>> Engine image is available on all three hosts, gluster is working fine:
>>
>> Volume Name: engine
>> Type: Replicate
>> Volume ID: 95355a0b-1f45-4329-95c7-604682e812d0
>> Status: Started
>> Snapshot Count: 0
>> 

Re: [ovirt-users] bonding mode-alb

2017-12-11 Thread Simone Tiraboschi
On Mon, Dec 11, 2017 at 12:53 PM, Demeter Tibor  wrote:

> Hi,
>
> Could anyone help me with this question?
> Thanks.
>
> R.
>
> Tibor
>
>
>
> - On 6 Dec 2017, at 14:07, Demeter Tibor wrote:
>
> Dear members,
>
> I would like to use two switches to build a high-availability network
> connection for my NFS storage.
> Unfortunately, these switches do not support 802.3ad LACP (I really
> can't stack them), but I've read about the mode-alb and mode-tlb bonding
> modes.
> I know these modes are available in oVirt, but how do they work? Also,
> how safe are they? Are they meant for HA or for load balancing?
>
> I've read some forums that do not recommend using these modes in
> oVirt. What is the truth?
> I would like to use this only for storage traffic; it will be separated
> from other network traffic. I have two 10GbE switches and two 10GbE ports
> in my nodes.
>
>
Look here:
https://www.ovirt.org/documentation/admin-guide/chap-Logical_Networks/#bonds-1

"Important: Modes 1, 2, 3, and 4 support both virtual machine (bridged) and
non-virtual machine (bridgeless) network types. Modes 0, 5 and 6 support
non-virtual machine (bridgeless) networks only.¨

So both tlb (mode 5) or alb (mode 6) are fine only if you use them just for
the storage access but not for VMs.
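
For illustration, at the OS level such a bond boils down to a bonding options
string like "mode=balance-alb miimon=100". A minimal sketch of an EL7 ifcfg
file for it (device name and miimon value are just examples; on an
oVirt-managed host VDSM normally owns this configuration, so treat it purely
as an illustration of the mode and options involved):

  # /etc/sysconfig/network-scripts/ifcfg-bond1
  DEVICE=bond1
  TYPE=Bond
  BONDING_MASTER=yes
  BONDING_OPTS="mode=balance-alb miimon=100"
  ONBOOT=yes
  BOOTPROTO=none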


> Thanks in advance,
>
> R
>
> Tibor
>
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] 4-2rc hosted-engine don't boot error:cannot allocate kernel buffer

2017-12-11 Thread Simone Tiraboschi
On Mon, Dec 11, 2017 at 9:47 AM, Roberto Nunin  wrote:

> Hello all
>
> During the weekend I re-tried to deploy my 4.2_rc lab.
> Everything was fine, apart from the fact that hosts 2 and 3 weren't
> imported. I had to add them to the cluster manually, with the NEW function.
> After this, the Gluster volumes were added fine to the environment.
>
> Next, the engine deploy on nodes 2 and 3 ended with OK status.
>
> Trying to migrate the HE from host 1 to host 2 was fine, the same from
> host 2 to host 3.
>
> After these two attempts, there was no way to migrate the HE back to any
> host. I tried setting maintenance mode to global and rebooting the HE, and
> now I'm in the same condition reported below, no longer able to boot the HE.
>
> Here's hosted-engine --vm-status:
>
> !! Cluster is in GLOBAL MAINTENANCE mode !!
>
>
>
> --== Host 1 status ==--
>
> conf_on_shared_storage : True
> Status up-to-date  : True
> Hostname   : aps-te61-mng.example.com
> Host ID: 1
> Engine status  : {"reason": "vm not running on this
> host", "health": "bad", "vm": "down", "detail": "unknown"}
> Score  : 3400
> stopped: False
> Local maintenance  : False
> crc32  : 7dfc420b
> local_conf_timestamp   : 181953
> Host timestamp : 181952
> Extra metadata (valid at timestamp):
> metadata_parse_version=1
> metadata_feature_version=1
> timestamp=181952 (Mon Dec 11 09:21:46 2017)
> host-id=1
> score=3400
> vm_conf_refresh_time=181953 (Mon Dec 11 09:21:47 2017)
> conf_on_shared_storage=True
> maintenance=False
> state=GlobalMaintenance
> stopped=False
>
>
> --== Host 2 status ==--
>
> conf_on_shared_storage : True
> Status up-to-date  : True
> Hostname   : aps-te64-mng.example.com
> Host ID: 2
> Engine status  : {"reason": "vm not running on this
> host", "health": "bad", "vm": "down", "detail": "unknown"}
> Score  : 3400
> stopped: False
> Local maintenance  : False
> crc32  : 67c7dd1d
> local_conf_timestamp   : 181946
> Host timestamp : 181946
> Extra metadata (valid at timestamp):
> metadata_parse_version=1
> metadata_feature_version=1
> timestamp=181946 (Mon Dec 11 09:21:49 2017)
> host-id=2
> score=3400
> vm_conf_refresh_time=181946 (Mon Dec 11 09:21:49 2017)
> conf_on_shared_storage=True
> maintenance=False
> state=GlobalMaintenance
> stopped=False
>
>
> --== Host 3 status ==--
>
> conf_on_shared_storage : True
> Status up-to-date  : True
> Hostname   : aps-te68-mng.example.com
> Host ID: 3
> Engine status  : {"reason": "failed liveliness check",
> "health": "bad", "vm": "up", "detail": "Up"}
> Score  : 3400
> stopped: False
> Local maintenance  : False
> crc32  : 4daea041
> local_conf_timestamp   : 181078
> Host timestamp : 181078
> Extra metadata (valid at timestamp):
> metadata_parse_version=1
> metadata_feature_version=1
> timestamp=181078 (Mon Dec 11 09:21:53 2017)
> host-id=3
> score=3400
> vm_conf_refresh_time=181078 (Mon Dec 11 09:21:53 2017)
> conf_on_shared_storage=True
> maintenance=False
> state=GlobalMaintenance
> stopped=False
>
>
> !! Cluster is in GLOBAL MAINTENANCE mode !!
>
> (it is in global maintenance to avoid messages to be sent to admin
> mailbox).
>

As soon as you exit the global maintenance mode, one of the hosts should
take care of automatically restarting the engine VM within a couple of
minutes.

If you want to manually start the engine VM on a specific host while in
maintenance mode, you can run:
hosted-engine --vm-start
on that specific host.
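
A minimal sequence for that, on a standard hosted-engine setup, would look
something like:

  # on the host you want to run the engine on
  hosted-engine --vm-start
  hosted-engine --vm-status    # wait for the liveliness check to turn good
  # once the engine is healthy again, leave global maintenance
  hosted-engine --set-maintenance --mode=none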


>
> Engine image is available on all three hosts, gluster is working fine:
>
> Volume Name: engine
> Type: Replicate
> Volume ID: 95355a0b-1f45-4329-95c7-604682e812d0
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 1 x 3 = 3
> Transport-type: tcp
> Bricks:
> Brick1: aps-te61-mng.example.com:/gluster_bricks/engine/engine
> Brick2: aps-te64-mng.example.com:/gluster_bricks/engine/engine
> Brick3: aps-te68-mng.example.com:/gluster_bricks/engine/engine
> Options Reconfigured:
> nfs.disable: on
> transport.address-family: inet
> performance.quick-read: off
> performance.read-ahead: off
> performance.io-cache: off
> performance.low-prio-threads: 32
> 

Re: [ovirt-users] 4-2rc hosted-engine don't boot error:cannot allocate kernel buffer

2017-12-11 Thread Maton, Brett
Really short version (I can't find the link to the oVirt doc at the moment):

Put the hosted engine into global maintenance and power off the VM
(using the hosted-engine command).

On one of your physical hosts, make a copy of the HE config and update the
memory:

cp /var/run/ovirt-hosted-engine-ha/vm.conf .
vim vm.conf
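
In vm.conf the value to bump is the memory size. As a sketch (assuming the
memSize key used by hosted-engine's vm.conf, expressed in MiB; check the key
name in your own copy):

  # in ./vm.conf, set the memory back to e.g. 4 GiB
  memSize=4096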

Then start the hosted engine with the new config:


hosted-engine --vm-start --vm-conf=./vm.conf



On 11 December 2017 at 10:36, Roberto Nunin  wrote:

>
>
> 2017-12-11 10:32 GMT+01:00 Maton, Brett :
>
>> Hi Roberto can you check how much RAM is allocated to the HE VM ?
>>
>>
>> virsh -c qemu:///system?authfile=/etc/ovirt-hosted-engine/virsh_auth.conf
>>
>> virsh # dominfo HostedEngine
>>
>>
>> The last update I did seems to have changed the HE RAM from 4GB to 4MB!
>>
>
>
> Yes, you're right..
>
> virsh # dominfo HostedEngine
> Id: 191
> Name:   HostedEngine
> UUID:   6831dd96-af48-4673-ac98-f1b9ba60754b
> OS Type:hvm
> State:  running
> CPU(s): 4
> CPU time:   9053.7s
> Max memory: 4096 KiB
> Used memory:4096 KiB
> Persistent: yes
> Autostart:  disable
> Managed save:   no
> Security model: selinux
> Security DOI:   0
> Security label: system_u:system_r:svirt_t:s0:c201,c408 (enforcing)
>
>
>>
>> On 11 December 2017 at 09:08, Simone Tiraboschi 
>> wrote:
>>
[...]

Re: [ovirt-users] 4-2rc hosted-engine don't boot error:cannot allocate kernel buffer

2017-12-11 Thread Martin Sivak
Hi,

we also have a proper fix now and will release it with the next RC build.

Best regards

--
Martin Sivak
SLA / oVirt

On Mon, Dec 11, 2017 at 12:10 PM, Maton, Brett 
wrote:

> Really short version ( I can't find the link to the ovirt doc at the
> moment)
>
> Put hosted engine in to global maintenance and power off the vm
> (hosted-engine command).
>
> On one of your physical hosts, make a copy of he config and update the
> memory
>
> cp /var/run/ovirt-hosted-engine-ha/vm.conf .
> vim vm.conf
>
> Then start hosted engine with the new config
>
>
> hosted-engine --vm-start --vm-conf=./vm.conf
>
>
>
> On 11 December 2017 at 10:36, Roberto Nunin  wrote:
>
>>
>>
>> 2017-12-11 10:32 GMT+01:00 Maton, Brett :
>>
>>> Hi Roberto can you check how much RAM is allocated to the HE VM ?
>>>
>>>
>>> virsh -c qemu:///system?authfile=/etc/ovirt-hosted-engine/virsh_auth.
>>> conf
>>>
>>> virsh # dominfo HostedEngine
>>>
>>>
>>> The last update I did seems to have changed the HE RAM from 4GB to 4MB!
>>>
>>
>>
>> Yes, you're right..
>>
>> virsh # dominfo HostedEngine
>> Id: 191
>> Name:   HostedEngine
>> UUID:   6831dd96-af48-4673-ac98-f1b9ba60754b
>> OS Type:hvm
>> State:  running
>> CPU(s): 4
>> CPU time:   9053.7s
>> Max memory: 4096 KiB
>> Used memory:4096 KiB
>> Persistent: yes
>> Autostart:  disable
>> Managed save:   no
>> Security model: selinux
>> Security DOI:   0
>> Security label: system_u:system_r:svirt_t:s0:c201,c408 (enforcing)
>>
>>
>>>
>>> On 11 December 2017 at 09:08, Simone Tiraboschi 
>>> wrote:
>>>


[...]

Re: [ovirt-users] 4-2rc hosted-engine don't boot error:cannot allocate kernel buffer

2017-12-11 Thread Artyom Lukianov
I had the same issue, so I opened a bug:
https://bugzilla.redhat.com/show_bug.cgi?id=1524331

Best Regards


On Mon, Dec 11, 2017 at 11:32 AM, Maton, Brett 
wrote:

> Hi Roberto can you check how much RAM is allocated to the HE VM ?
>
>
> virsh -c qemu:///system?authfile=/etc/ovirt-hosted-engine/virsh_auth.conf
>
> virsh # dominfo HostedEngine
>
>
> The last update I did seems to have changed the HE RAM from 4GB to 4MB!
>
>
> On 11 December 2017 at 09:08, Simone Tiraboschi 
> wrote:
>
>>
>>
>> On Mon, Dec 11, 2017 at 9:47 AM, Roberto Nunin 
>> wrote:
>>
>>> Hello all
>>>
>>> during weekend, I've re-tried to deploy my 4.2_rc lab.
>>> Everything was fine, apart the fact host 2 and 3 weren't imported. I had
>>> to add them to the cluster manually, with the NEW function.
>>> After this Gluster volumes were added fine to the environment.
>>>
>>> Next engine deploy on nodes 2 and 3, ended with ok status.
>>>
>>> Tring to migrate HE from host 1 to host 2 was fine, the same from host 2
>>> to host 3.
>>>
>>> After these two attempts, no way to migrate HE back to any host.
>>> Tried Maintenance mode set to global, reboot the HE and now I'm in the
>>> same condition reported below, not anymore able to boot the HE.
>>>
>>> Here's hosted-engine --vm-status:
>>>
>>> !! Cluster is in GLOBAL MAINTENANCE mode !!
>>>
>>>
>>>
>>> --== Host 1 status ==--
>>>
>>> conf_on_shared_storage : True
>>> Status up-to-date  : True
>>> Hostname   : aps-te61-mng.example.com
>>> Host ID: 1
>>> Engine status  : {"reason": "vm not running on this
>>> host", "health": "bad", "vm": "down", "detail": "unknown"}
>>> Score  : 3400
>>> stopped: False
>>> Local maintenance  : False
>>> crc32  : 7dfc420b
>>> local_conf_timestamp   : 181953
>>> Host timestamp : 181952
>>> Extra metadata (valid at timestamp):
>>> metadata_parse_version=1
>>> metadata_feature_version=1
>>> timestamp=181952 (Mon Dec 11 09:21:46 2017)
>>> host-id=1
>>> score=3400
>>> vm_conf_refresh_time=181953 (Mon Dec 11 09:21:47 2017)
>>> conf_on_shared_storage=True
>>> maintenance=False
>>> state=GlobalMaintenance
>>> stopped=False
>>>
>>>
>>> --== Host 2 status ==--
>>>
>>> conf_on_shared_storage : True
>>> Status up-to-date  : True
>>> Hostname   : aps-te64-mng.example.com
>>> Host ID: 2
>>> Engine status  : {"reason": "vm not running on this
>>> host", "health": "bad", "vm": "down", "detail": "unknown"}
>>> Score  : 3400
>>> stopped: False
>>> Local maintenance  : False
>>> crc32  : 67c7dd1d
>>> local_conf_timestamp   : 181946
>>> Host timestamp : 181946
>>> Extra metadata (valid at timestamp):
>>> metadata_parse_version=1
>>> metadata_feature_version=1
>>> timestamp=181946 (Mon Dec 11 09:21:49 2017)
>>> host-id=2
>>> score=3400
>>> vm_conf_refresh_time=181946 (Mon Dec 11 09:21:49 2017)
>>> conf_on_shared_storage=True
>>> maintenance=False
>>> state=GlobalMaintenance
>>> stopped=False
>>>
>>>
>>> --== Host 3 status ==--
>>>
>>> conf_on_shared_storage : True
>>> Status up-to-date  : True
>>> Hostname   : aps-te68-mng.example.com
>>> Host ID: 3
>>> Engine status  : {"reason": "failed liveliness
>>> check", "health": "bad", "vm": "up", "detail": "Up"}
>>> Score  : 3400
>>> stopped: False
>>> Local maintenance  : False
>>> crc32  : 4daea041
>>> local_conf_timestamp   : 181078
>>> Host timestamp : 181078
>>> Extra metadata (valid at timestamp):
>>> metadata_parse_version=1
>>> metadata_feature_version=1
>>> timestamp=181078 (Mon Dec 11 09:21:53 2017)
>>> host-id=3
>>> score=3400
>>> vm_conf_refresh_time=181078 (Mon Dec 11 09:21:53 2017)
>>> conf_on_shared_storage=True
>>> maintenance=False
>>> state=GlobalMaintenance
>>> stopped=False
>>>
>>>
>>> !! Cluster is in GLOBAL MAINTENANCE mode !!
>>>
>>> (it is in global maintenance to avoid messages to be sent to admin
>>> mailbox).
>>>
>>
>> As soon as you exit the global maintenance mode, one of the hosts should
>> take care of automatically restarting the engine VM within a couple of
>> minutes.
>>
>> If you want to manually start the engine VM 

Re: [ovirt-users] bonding mode-alb

2017-12-11 Thread Demeter Tibor
Hi, 

Could anyone help me with this question?
Thanks.

R.

Tibor

- On 6 Dec 2017, at 14:07, Demeter Tibor wrote:

> Dear members,

> I would like to use two switches to build a high-availability network
> connection for my NFS storage.
> Unfortunately, these switches do not support 802.3ad LACP (I really can't
> stack them), but I've read about the mode-alb and mode-tlb bonding modes.
> I know these modes are available in oVirt, but how do they work? Also, how
> safe are they? Are they meant for HA or for load balancing?

> I've read some forums that do not recommend using these modes in oVirt.
> What is the truth?
> I would like to use this only for storage traffic; it will be separated from
> other network traffic. I have two 10GbE switches and two 10GbE ports in my nodes.

> Thanks in advance,

> R

> Tibor

> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] bonding mode-alb

2017-12-11 Thread FERNANDO FREDIANI

Hello

If you have 10Gb ports you hardly need this kind of aggregation to get
more bandwidth. 10Gb is enough for A LOT of things. Just use
bonding mode=1 (active/backup) if your switches don't support stacking.


Using mode-tlb or mode-alb is not always as straightforward as mode 1 or mode 4.

Fernando


On 11/12/2017 09:53, Demeter Tibor wrote:

[...]


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Best Practice Question: How many engines, one or more than one, for multiple physical locations

2017-12-11 Thread Simone Tiraboschi
On Fri, Dec 8, 2017 at 9:50 PM, Matt Simonsen  wrote:

> Hello all,
>
> I read with Gluster using hyper-convergence that the engine must reside on
> the same LAN as the nodes. I guess this makes sense by definition - ie:
> using Gluster storage and replicating Gluster bricks across the web sounds
> awful.
>
> This got me wondering about best practices for the engine setup. We have
> multiple physical locations (co-location data centers).
>
> In my initial plan I had expected to have my oVirt engine hosted
> separately from each physical location so that in the event of trouble at a
> remote facility the engine would still be usable.
>
> In this case, our prod sites would not have a "hyper-converged" setup if
> we decide to run GlusterFS for storage at any particular physical site, but
> I believe it would still be possible to use Gluster. In this case oVirt
> would have a 3 node cluster, using GlusterFS storage, but not
> hyper-converged since the engine would be in a separate facility.
>
> Is there any downside in this setup to having the engine off-site?
>

This is called a stretched cluster setup. There are pros and cons; for
instance, host fencing could become problematic.
VM leases could help:
https://ovirt.org/develop/release-management/features/storage/vm-leases/



>
> Rather than having an off-site engine, should I consider one engine per
> physical co-location space?
>

This would be simpler, but you are going to lose a few capabilities that
can be relevant in a disaster recovery scenario.


>
> Thank you all for any feedback,
>
> Matt
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] bonding mode-alb

2017-12-11 Thread Gianluca Cecchi
On Mon, Dec 11, 2017 at 12:53 PM, Demeter Tibor  wrote:

> Hi,
>
> Could anyone help me with this question?
> Thanks.
>
> R.
>
> Tibor
>
>
>
>
No personal experience, but I would stay with what the official docs say:
https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.1/html/administration_guide/sect-hosts_and_networking#sect-Bonds

As you are writing about NFS storage, modes 5 and 6 should be supported
(they work for non-virtual machine (bridgeless) networks).

Do you have any pointers to the recommendations against using them?
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] qemu-kvm-ev-2.9.0-16.el7_4.11.1 now available for testing

2017-12-11 Thread Sandro Bonazzola
Hi, qemu-kvm-ev-2.9.0-16.el7_4.11.1 is now available for testing.
If no negative feedback is reported, I'm going to push it to release on
Thursday, December 14th.

Here's the changelog:

* Mon Dec 11 2017 Sandro Bonazzola - ev-2.9.0-16.el7_4.11.1
- Removing RH branding from package name

* Mon Nov 13 2017 Miroslav Rezanina - rhev-2.9.0-16.el7_4.11
- kvm-exec-use-qemu_ram_ptr_length-to-access-guest-ram.patch [bz#1472185]
- kvm-multiboot-validate-multiboot-header-address-values.patch [bz#1501123]
- Resolves: bz#1472185 (CVE-2017-11334 qemu-kvm-rhev: Qemu: exec: oob access
  during dma operation [rhel-7.4.z])
- Resolves: bz#1501123 (CVE-2017-14167 qemu-kvm-rhev: Qemu: i386: multiboot
  OOB access while loading kernel image [rhel-7.4.z])

* Mon Oct 23 2017 Miroslav Rezanina - rhev-2.9.0-16.el7_4.10
- kvm-vga-stop-passing-pointers-to-vga_draw_line-functions.patch [bz#1501300]
- kvm-vga-drop-line_offset-variable.patch [bz#1501300]
- kvm-vga-handle-cirrus-vbe-mode-wraparounds.patch [bz#1501300]
- kvm-cirrus-fix-oob-access-in-mode4and5-write-functions.patch [bz#1501300]
- Resolves: bz#1501300 (CVE-2017-15289 qemu-kvm-rhev: Qemu: cirrus: OOB
  access issue in mode4and5 write functions [rhel-7.4.z])

* Mon Oct 09 2017 Miroslav Rezanina - rhev-2.9.0-16.el7_4.9
- kvm-nbd-client-Fix-regression-when-server-sends-garbage.patch [bz#1495474]
- kvm-fix-build-failure-in-nbd_read_reply_entry.patch [bz#1495474]
- kvm-nbd-client-avoid-spurious-qio_channel_yield-re-entry.patch [bz#1495474]
- kvm-nbd-client-avoid-read_reply_co-entry-if-send-failed.patch [bz#1495474]
- kvm-qemu-iotests-improve-nbd-fault-injector.py-startup-p.patch [bz#1495474]
- kvm-qemu-iotests-test-NBD-over-UNIX-domain-sockets-in-08.patch [bz#1495474]
- kvm-block-nbd-client-nbd_co_send_request-fix-return-code.patch [bz#1495474]
- Resolves: bz#1495474 (Fail to quit source qemu when do live migration
  after mirroring guest to NBD server [rhel-7.4.z])

--

SANDRO BONAZZOLA

ASSOCIATE MANAGER, SOFTWARE ENGINEERING, EMEA ENG VIRTUALIZATION R&D

Red Hat EMEA 

TRIED. TESTED. TRUSTED. 
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Move disk between domains

2017-12-11 Thread Matthew DeBoer
The permissions are all OK: vdsm:kvm.

This shows up in vdsm.log when the snapshot is attempted:


2017-12-08 13:03:13,031-0600 ERROR (jsonrpc/7) [virt.vm]
(vmId='6a53d8a9-3b4d-4995-8b84-dc920badf0fc') Unable to take snapshot (vm:3699)
Traceback (most recent call last):
  File "/usr/share/vdsm/virt/vm.py", line 3696, in snapshot
    self._dom.snapshotCreateXML(snapxml, snapFlags)
  File "/usr/lib/python2.7/site-packages/vdsm/virt/virdomain.py", line 69, in f
    ret = attr(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/libvirtconnection.py", line 123, in wrapper
    ret = f(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/utils.py", line 1006, in wrapper
    return func(inst, *args, **kwargs)
  File "/usr/lib64/python2.7/site-packages/libvirt.py", line 2506, in snapshotCreateXML
    if ret is None:raise libvirtError('virDomainSnapshotCreateXML() failed', dom=self)
libvirtError: internal error: unable to execute QEMU command 'transaction':
Could not read L1 table: Input/output error
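
That "Could not read L1 table: Input/output error" is qemu failing to read the
qcow2 metadata of one of the volumes in the disk's chain. A quick sanity check
of the image itself, run on a host that can see the domain (the path below
only illustrates the usual /rhev layout; substitute your own domain, image and
volume UUIDs), would be something like:

  sudo -u vdsm qemu-img check /rhev/data-center/mnt/<server:_export>/<sd_uuid>/images/<img_uuid>/<vol_uuid>
  sudo -u vdsm qemu-img info --backing-chain <same volume path>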



On Sun, Dec 10, 2017 at 6:34 AM, Maor Lipchuk  wrote:

> On Fri, Dec 8, 2017 at 8:01 PM, Matthew DeBoer 
> wrote:
>
>> When I try to move a specific disk between storage domains, I get an error.
>>
>> 2017-12-08 11:26:05,257-06 ERROR [org.ovirt.engine.ui.frontend.
>> server.gwt.OvirtRemoteLoggingService] (default task-41) [] Permutation
>> name: 8C01181C3B121D0AAE1312275CC96415
>> 2017-12-08 11:26:05,257-06 ERROR [org.ovirt.engine.ui.frontend.
>> server.gwt.OvirtRemoteLoggingService] (default task-41) [] Uncaught
>> exception: com.google.gwt.core.client.JavaScriptException: (TypeError)
>> __gwt$exception: : Cannot read property 'F' of null
>>at org.ovirt.engine.ui.uicommonweb.models.storage.DisksAllocati
>> onModel$3.$onSuccess(DisksAllocationModel.java:120)
>>at org.ovirt.engine.ui.uicommonweb.models.storage.DisksAllocati
>> onModel$3.onSuccess(DisksAllocationModel.java:120)
>>at 
>> org.ovirt.engine.ui.frontend.Frontend$2.$onSuccess(Frontend.java:233)
>> [frontend.jar:]
>>at 
>> org.ovirt.engine.ui.frontend.Frontend$2.onSuccess(Frontend.java:233)
>> [frontend.jar:]
>>at org.ovirt.engine.ui.frontend.communication.OperationProcesso
>> r$2.$onSuccess(OperationProcessor.java:139) [frontend.jar:]
>>at org.ovirt.engine.ui.frontend.communication.OperationProcesso
>> r$2.onSuccess(OperationProcessor.java:139) [frontend.jar:]
>>at org.ovirt.engine.ui.frontend.communication.GWTRPCCommunicati
>> onProvider$5$1.$onSuccess(GWTRPCCommunicationProvider.java:269)
>> [frontend.jar:]
>>at org.ovirt.engine.ui.frontend.communication.GWTRPCCommunicati
>> onProvider$5$1.onSuccess(GWTRPCCommunicationProvider.java:269)
>> [frontend.jar:]
>>at com.google.gwt.user.client.rpc.impl.RequestCallbackAdapter.o
>> nResponseReceived(RequestCallbackAdapter.java:198) [gwt-servlet.jar:]
>>at 
>> com.google.gwt.http.client.Request.$fireOnResponseReceived(Request.java:237)
>> [gwt-servlet.jar:]
>>at 
>> com.google.gwt.http.client.RequestBuilder$1.onReadyStateChange(RequestBuilder.java:409)
>> [gwt-servlet.jar:]
>>at Unknown.eval(webadmin-0.js@65)
>>at com.google.gwt.core.client.impl.Impl.apply(Impl.java:296)
>> [gwt-servlet.jar:]
>>at com.google.gwt.core.client.impl.Impl.entry0(Impl.java:335)
>> [gwt-servlet.jar:]
>>at Unknown.eval(webadmin-0.js@54)
>>
>> I can move all the other disks.
>>
>> I think the issue here is how I got this storage domain into oVirt.
>>
>> I set up a new cluster using 4.1, coming from 3.6.
>>
>> I imported a domain from the 3.6 cluster. I am trying to move this disk
>> to one of the new storage domains on the 4.1 cluster.
>>
>
>>
>> Any help would be greatly appreciated
>>
>
>
> I would try to check the user permissions on that storage domain or the
> disk
>
> Regards,
> Maor
>
>
>>
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] qemu-kvm-ev-2.9.0-16.el7_4.11.1 now available for testing

2017-12-11 Thread Markus Stockhausen
Hi Sandro,

I'm wondering if BZ1513362 (AIO stuck, fixed in qemu-kvm-rhev-2.9.0-16.el7_4.12)
makes it worth giving the newer version a try.

Best regards.

Markus

-
From: users-boun...@ovirt.org [users-boun...@ovirt.org] on behalf of Sandro
Bonazzola [sbona...@redhat.com]
Sent: Monday, 11 December 2017 17:43
To: Discussion about the virtualization on CentOS; devel; users
Subject: [ovirt-users] qemu-kvm-ev-2.9.0-16.el7_4.11.1 now available for testing

[...]




___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] 10Gb Networking with OVN

2017-12-11 Thread Beau Sapach
Yes, we've done some testing. With an oVirt VM running on a host using
10Gb Ethernet, copying data to a physical machine also using 10Gb Ethernet,
we don't see network utilization exceed roughly 800 Mbit/s. A bit of research
online turns up experiments done by others who used SR-IOV to achieve 10Gb
from a VM.

I'm not sure where the bottleneck is; possibly in the virtio driver.
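
For anyone who wants to reproduce this kind of measurement, a raw throughput
test with iperf3 looks roughly like this (host name is a placeholder):

  # on the physical 10Gb machine
  iperf3 -s
  # inside the oVirt VM, 4 parallel streams for 30 seconds
  iperf3 -c physical-host.example.com -P 4 -t 30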

Beau

On Mon, Dec 11, 2017 at 1:23 AM, Dominik Holler  wrote:

> Is there an indication that the VMs will not take advantage of 10Gb?
>
> On Thu, 7 Dec 2017 15:27:25 -0700
> Beau Sapach  wrote:
>
> > Hello everyone,
> >
> > I see here:
> > https://www.ovirt.org/blog/2017/09/introducing-ovirt-4.2.0/ that
> > version 4.2 will have OVN support.  Does anyone know if this will
> > allow VMs to take advantage of 10Gb networking without needing SR-IOV?
> >
> >
>
>


-- 
Beau Sapach
*System Administrator | Information Technology Services | University of
Alberta Libraries*
*Phone: 780.492.4181 | Email: beau.sap...@ualberta.ca
*
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Black Screen for Spice/VNC after host reboot with a runnung VM

2017-12-11 Thread Eli ggg
Dear ovirt Team/Experts,

I rebooted a host while a VM was running on it. After putting the host
into maintenance and starting the VM on another host, I tried to connect to
that VM via SPICE/VNC. In both cases I get a black screen. The VM does not
respond to keyboard input, and its IP is not listed in the oVirt web
interface. Cloning the VM from an old snapshot does not get it working
either - again a black screen for SPICE.

Could you help me?

Thanks



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] [Qemu-block] Scheduling daily Snapshot

2017-12-11 Thread Max Reitz
On 2017-12-07 23:19, Nir Soffer wrote:
> On Wed, Dec 6, 2017 at 6:02 PM Jason Lelievre wrote:
> 
> Hello,
> 
> What is the best way to set up a daily live snapshot for all VM, and
> have the possibility to recover, for example, a specific VM to a
> specific day?
> 
> 
> Each snapshot you create makes reads and writes slower, as qemu has to
> lookup data through the entire chain.
> 
> When we take a snapshot, we create a new file (or block device) and make
> the new file the active layer of the chain.
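
(As an illustration of that chain: an external snapshot is simply a new qcow2
overlay whose backing file is the previous image, so reads may have to walk
every layer. File names below are placeholders.)

  qemu-img create -f qcow2 -o backing_file=vm-disk.qcow2,backing_fmt=qcow2 vm-disk-snap1.qcow2
  qemu-img info --backing-chain vm-disk-snap1.qcow2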

I'm not sure how much this is going to slow you down exactly, but I can
tell you that there are also incremental backups to look into.

(e.g. https://wiki.qemu.org/Features/IncrementalBackup)

Max



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] [Qemu-block] Scheduling daily Snapshot

2017-12-11 Thread Nir Soffer
On Fri, Dec 8, 2017 at 5:25 PM Max Reitz  wrote:

> On 2017-12-07 23:19, Nir Soffer wrote:
> > On Wed, Dec 6, 2017 at 6:02 PM Jason Lelievre wrote:
> >
> > Hello,
> >
> > What is the best way to set up a daily live snapshot for all VM, and
> > have the possibility to recover, for example, a specific VM to a
> > specific day?
> >
> >
> > Each snapshot you create makes reads and writes slower, as qemu has to
> > lookup data through the entire chain.
> >
> > When we take a snapshot, we create a new file (or block device) and make
> > the new file the active layer of the chain.
>
> I'm not sure how much this is going to slow you down exactly, but I can
> tell you that there are also incremental backups to look into.
>
> (e.g. https://wiki.qemu.org/Features/IncrementalBackup)
>

checking, thanks!


>
> Max
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] [Qemu-block] import thin provisioned disks with image upload

2017-12-11 Thread Max Reitz
On 2017-12-08 21:56, Nir Soffer wrote:
> On Fri, Dec 8, 2017 at 5:23 PM Max Reitz wrote:

[...]

> What "stat" reports as "size" I'd call the length (what qemu-img info
> calls "virtual size" for raw images).  
> 
> 
> raw images are not an issue since the virtual size is the file size.

I personally don't quite see how the file length is important for other
image types.  At least the qcow2 driver doesn't really try to keep the
file length small because it generally assumes that it's just a number
that doesn't really matter.  (That was my impression, at least.)

> What I (and qemu-img info) call
> "size" is how much disk space is actually used.  And both ls -s and du
> agree that it's 0 bytes in this case.
> 
> By the way, yes, "stat" has a different definition of "size", so that
> seems wrong.  But I argue that the "ls" option is called "--size", so
> there's a conflict already and qemu-img info is not the first tool to
> disagree with stat on this.
> 
> 
> I think if you ask most users what is a file size they will tell you what
> stat is returning. If we use the term in the way most people expect
> we can make tools easier to use.

First of all, we cannot easily change the name from "size".  In the QAPI
definition of ImageInfo it's called "actual-size", so we cannot change
it without breaking compatibility.  We could change the human-readable
output, though; maybe to something like "host disk usage".

> Users of qemu-img have no way to tell what is disk-size, the value is
> not documented, at least not in the manual or online help.

Documentation is a different issue.  There is some for the actual-size
field of ImageInfo, but even that is indeed lacking.

> I think the easy way to improve this is to show both the "allocated size"
> (st_size * block_size), and the "file size" (st_size).

Not impossible, although I personally don't really see the point.

To be honest, what I fear is that people see the file length and
complain why it's so large when it's actually just some number that
shouldn't matter whatsoever.  If people really want to find out, I don't
think ls -l is that hard to use.

OK, so let me sum up:

First, I agree that "file size" is confusing.  (I disagree that it would
be wrong.  I think it is simply ambiguous.)  So we should rename it at
least in the human-readable output.

Secondly, we should have documentation about this in the qemu-img man
page, and better documentation for ImageInfo.actual-size.

Thirdly, I don't see the point of displaying the file length in qemu-img
info.  But since you think it useful and it probably wouldn't be too
hard, we can add it.  My only fear about this is that I consider it an
arbitrary and useless number that may confuse people.  It hopefully
shouldn't, as long as they can see the actual disk usage at the same
time, though.


(I just don't know how seeing the actual image file length in qemu-img
info would have helped in your case.  The important thing would have
been to know that image files usually do contain large holes and you
need to enable hole detection when transferring image files anywhere (as
far as I have seen).)
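
For instance, a few standard ways to preserve that sparseness when copying a
raw image around (plain coreutils/rsync/qemu-img options, nothing
oVirt-specific; file names are placeholders):

  cp --sparse=always disk.img copy.img
  rsync --sparse disk.img otherhost:/some/path/
  qemu-img convert -f raw -O raw -S 4k disk.img copy.img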

Max



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] [Qemu-block] import thin provisioned disks with image upload

2017-12-11 Thread Max Reitz
On 2017-12-07 21:33, Nir Soffer wrote:

[...]

> 
> Trying harder...
>  
> 
> The term "file size" is ambiguous in this context...
> 
> 
> It is not. file size is what you get from stat:
> 
> $ truncate -s 1g empty
> 
> $ stat empty 
>   File: 'empty'
>   Size: 1073741824  Blocks: 0          IO Block: 4096   regular file
>   ...
> 
> $ qemu-img info empty
> image: empty
> file format: raw
> virtual size: 1.0G (1073741824 bytes)
> disk size: 0
> 
> The value "disk size" used by qemu-img is confusing and not useful
> when you want to transfer the file to another host.
> 
> I don't know why qemu-img displays this value instead of the actual
> file size; adding the qemu-block mailing list in case someone can explain
> this.

Because I still think it's ambiguous.

$ qemu-img create -f raw empty 1G
Formatting 'empty', fmt=raw size=1073741824
$ qemu-img info empty
image: empty
file format: raw
virtual size: 1.0G (1073741824 bytes)
disk size: 0
$ LANG=C stat empty
  File: empty
  Size: 1073741824  Blocks: 0  IO Block: 4096   regular file
[...]
$ ls -s empty
0 empty
$ du -h empty
0   empty
$ ls -l empty
-rw-r--r--. 1 maxx maxx 1073741824  8. Dez 16:20 empty

What "stat" reports as "size" I'd call the length (what qemu-img info
calls "virtual size" for raw images).  What I (and qemu-img info) call
"size" is how much disk space is actually used.  And both ls -s and du
agree that it's 0 bytes in this case.

By the way, yes, "stat" has a different definition of "size", so that
seems wrong.  But I argue that the "ls" option is called "--size", so
there's a conflict already and qemu-img info is not the first tool to
disagree with stat on this.

Max



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Standalone Gluster Storage

2017-12-11 Thread Beau Sapach
We've been doing some experimenting with gluster, and have built a
stand-alone gluster cluster (not managed by oVirt).  We've been able to
create a storage domain backed by that gluster cluster and run VMs with
their disks on that storage.

The problem we have is that when we take a gluster node down for updates,
maintenance, etc., the entire storage domain goes offline in oVirt. Other
gluster clients, that is, servers connecting directly to the gluster cluster,
don't seem to notice when one node goes offline.

Is anyone else using gluster storage in oVirt that is not managed within
oVirt?

-- 
Beau Sapach
*System Administrator | Information Technology Services | University of
Alberta Libraries*
*Phone: 780.492.4181 | Email: beau.sap...@ualberta.ca
*
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Error "Did not connect host to storage domain because connection for connectionId is null" in ovirt 4.1

2017-12-11 Thread Claude Durocher

I have a 4.1 ovirt environment. I cannot reactivate a storage domain 
(data-master) and I get an error message stating "connection for connectionId 
'b3011e5b-552e-4393-a758-ac1e35648ab1' is null". I also cannot delete this 
storage domain as it's a master domain.

2017-12-11 15:25:10,971-05 INFO  
[org.ovirt.engine.core.bll.storage.domain.ActivateStorageDomainCommand] 
(default task-22) [49dfce30-0e2c-43f8-942d-67a2b56b3ef8] Lock Acquired to 
object 
'EngineLock:{exclusiveLocks='[5662588b-81d2-4da9-b942-8918004770fe=STORAGE]', 
sharedLocks=''}'
2017-12-11 15:25:10,999-05 INFO  
[org.ovirt.engine.core.bll.storage.domain.ActivateStorageDomainCommand] 
(org.ovirt.thread.pool-6-thread-50) [49dfce30-0e2c-43f8-942d-67a2b56b3ef8] 
Running command: ActivateStorageDomainCommand internal: false. Entities 
affected :  ID: 5662588b-81d2-4da9-b942-8918004770fe Type: StorageAction group 
MANIPULATE_STORAGE_DOMAIN with role type ADMIN
2017-12-11 15:25:11,002-05 INFO  
[org.ovirt.engine.core.bll.storage.domain.ActivateStorageDomainCommand] 
(org.ovirt.thread.pool-6-thread-50) [49dfce30-0e2c-43f8-942d-67a2b56b3ef8] Lock 
freed to object 
'EngineLock:{exclusiveLocks='[5662588b-81d2-4da9-b942-8918004770fe=STORAGE]', 
sharedLocks=''}'
2017-12-11 15:25:11,002-05 INFO  
[org.ovirt.engine.core.bll.storage.domain.ActivateStorageDomainCommand] 
(org.ovirt.thread.pool-6-thread-50) [49dfce30-0e2c-43f8-942d-67a2b56b3ef8] 
ActivateStorage Domain. Before Connect all hosts to pool. Time: Mon Dec 11 
15:25:11 EST 2017
2017-12-11 15:25:11,015-05 WARN  
[org.ovirt.engine.core.bll.storage.connection.BaseFsStorageHelper] 
(org.ovirt.thread.pool-6-thread-43) [49dfce30-0e2c-43f8-942d-67a2b56b3ef8] Did 
not connect host 'e6dbdeb8-4e0e-4589-9121-6d3408c6d7b0' to storage domain 
'ovirt-lg-1-lun1' because connection for connectionId 
'b3011e5b-552e-4393-a758-ac1e35648ab1' is null.
2017-12-11 15:25:11,016-05 ERROR 
[org.ovirt.engine.core.bll.storage.domain.ActivateStorageDomainCommand] 
(org.ovirt.thread.pool-6-thread-50) [49dfce30-0e2c-43f8-942d-67a2b56b3ef8] 
Cannot connect storage server, aborting Storage Domain activation.
2017-12-11 15:25:11,017-05 INFO  
[org.ovirt.engine.core.bll.storage.domain.ActivateStorageDomainCommand] 
(org.ovirt.thread.pool-6-thread-50) [49dfce30-0e2c-43f8-942d-67a2b56b3ef8] 
Command [id=0fb1fa9d-4002-4fe0-9af2-d30470d5f146]: Compensating 
CHANGED_STATUS_ONLY of 
org.ovirt.engine.core.common.businessentities.StoragePoolIsoMap; snapshot: 
EntityStatusSnapshot:{id='StoragePoolIsoMapId:{storagePoolId='a0cc9e2c-6bff-4d5b-803c-0cd62292c269',
 storageId='5662588b-81d2-4da9-b942-8918004770fe'}', status='Maintenance'}.
2017-12-11 15:25:11,023-05 ERROR 
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
(org.ovirt.thread.pool-6-thread-50) [49dfce30-0e2c-43f8-942d-67a2b56b3ef8] 
EVENT_ID: USER_ACTIVATE_STORAGE_DOMAIN_FAILED(967), Correlation ID: 
49dfce30-0e2c-43f8-942d-67a2b56b3ef8, Job ID: 
2d766e2a-4b35-4822-9580-4eb9df2d9c33, Call Stack: null, Custom ID: null, Custom 
Event ID: -1, Message: Failed to activate Storage Domain ovirt-lg-1-lun1 (Data 
Center ovirt-lg-1) by admin@internal-authz


select id,storage,storage_name,_update_date from storage_domain_static;
                  id                  |                storage                 |      storage_name      |         _update_date
--------------------------------------+----------------------------------------+------------------------+-------------------------------
 072fbaa1-08f3-4a40-9f34-a5ca22dd1d74 | ceab03af-7220-4d42-8f5c-9b557f5d29af   | ovirt-image-repository |
 b66443f4-d949-4fca-9825-8cb98eae9e14 | e462883f-0be1-4b31-acf1-154dd7accbc1   | export                 |
 45807651-8f10-408f-891b-64f1cc577a64 | 452235ea-d8b1-43d1-920f-a16a032e000b   | ovirt-lg-1-iso         | 2017-01-20 20:01:00.284097+00
 a1d803b9-fd0b-477f-a874-53d050d4347b | cd8939df-9cdd-49d3-8dc5-3c06d1836399   | ovirt-lg-1-export      | 2017-01-20 20:01:14.70549+00
 b67e7442-f032-4b5c-a4fe-10422650a90b | c6272852-3586-4584-8496-d97c4370c798   | ovirt-5-iso            | 2017-01-20 20:02:08.668034+00
 a561589c-8eb8-4823-9615-92ac4a1ea94e | cacb8801-6826-42cb-9bab-5552175e0329   | ovirt-lg-2-export      |
 646b331a-b68e-4894-807f-bdc8adae15c9 | a592b780-7bd6-4599-ab7e-12c43cb9279d   | ovirt-lg-2-iso         |
 3ba53a0b-30be-497c-b4df-880e6c6f7567 | 8e5236c9-cbc0-4256-abff-7e984abef65a   | master                 | 2017-12-11 02:12:16.499288+00
 3efa5ed5-4e17-4daf-9cc0-6819a9cd7aae | 8d19b66d-1c1c-4976-91d2-8d2565db367a   | hosted_storage         | 2017-12-11 02:12:16.499288+00
 18504f40-fb15-4b1e-8ac3-ec0eabdc5619 | SPA1ZX-n7rh-s14h-wuRC-071z-RShH-wmzmqj | storage-1-lun0         | 2017-12-11 02:12:16.499288+00
 730c14db-d4ef-4d0e-9670-419b803fe634 | 7Zxqu3-nVUD-HnN7-w4Xf-TFi7-BKrJ-OzMyRA | storage-3-lun0         | 2017-12-11 02:12:16.499288+00
 190bc841-e6b0-45b8-bfbb-7518735d8592 | y7oM6F-Tx0Y-QWxg-gk9R-Xlfm-Mz9A-fOx5RN | storage-2-lun0         |

Re: [ovirt-users] Error "Did not connect host to storage domain because connection for connectionId is null" in ovirt 4.1

2017-12-11 Thread Claude Durocher

Replying to myself here: I fixed it by running this insert statement in the 
postgresql db to restore the missing connection:

insert into storage_server_connections (id, connection, portal, storage_type) 
values ('b3011e5b-552e-4393-a758-ac1e35648ab1', '/var/ovirt-lun1', '1', '4');
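
A quick way to confirm the row the engine was complaining about now exists
(same table and columns as the insert above; 'engine' is the default database
name) is something like:

  sudo -u postgres psql engine -c "select id, connection, storage_type from storage_server_connections where id = 'b3011e5b-552e-4393-a758-ac1e35648ab1';"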



On Monday, December 11, 2017 16:29 EST, "Claude Durocher" wrote:
  I have a 4.1 ovirt environment. I cannot reactivate a storage domain 
(data-master) and I get an error message stating "connection for connectionId 
'b3011e5b-552e-4393-a758-ac1e35648ab1' is null". I also cannot delete this 
storage domain as it's a master domain.

2017-12-11 15:25:10,971-05 INFO  
[org.ovirt.engine.core.bll.storage.domain.ActivateStorageDomainCommand] 
(default task-22) [49dfce30-0e2c-43f8-942d-67a2b56b3ef8] Lock Acquired to 
object 
'EngineLock:{exclusiveLocks='[5662588b-81d2-4da9-b942-8918004770fe=STORAGE]', 
sharedLocks=''}'
2017-12-11 15:25:10,999-05 INFO  
[org.ovirt.engine.core.bll.storage.domain.ActivateStorageDomainCommand] 
(org.ovirt.thread.pool-6-thread-50) [49dfce30-0e2c-43f8-942d-67a2b56b3ef8] 
Running command: ActivateStorageDomainCommand internal: false. Entities 
affected :  ID: 5662588b-81d2-4da9-b942-8918004770fe Type: StorageAction group 
MANIPULATE_STORAGE_DOMAIN with role type ADMIN
2017-12-11 15:25:11,002-05 INFO  
[org.ovirt.engine.core.bll.storage.domain.ActivateStorageDomainCommand] 
(org.ovirt.thread.pool-6-thread-50) [49dfce30-0e2c-43f8-942d-67a2b56b3ef8] Lock 
freed to object 
'EngineLock:{exclusiveLocks='[5662588b-81d2-4da9-b942-8918004770fe=STORAGE]', 
sharedLocks=''}'
2017-12-11 15:25:11,002-05 INFO  
[org.ovirt.engine.core.bll.storage.domain.ActivateStorageDomainCommand] 
(org.ovirt.thread.pool-6-thread-50) [49dfce30-0e2c-43f8-942d-67a2b56b3ef8] 
ActivateStorage Domain. Before Connect all hosts to pool. Time: Mon Dec 11 
15:25:11 EST 2017
2017-12-11 15:25:11,015-05 WARN  
[org.ovirt.engine.core.bll.storage.connection.BaseFsStorageHelper] 
(org.ovirt.thread.pool-6-thread-43) [49dfce30-0e2c-43f8-942d-67a2b56b3ef8] Did 
not connect host 'e6dbdeb8-4e0e-4589-9121-6d3408c6d7b0' to storage domain 
'ovirt-lg-1-lun1' because connection for connectionId 
'b3011e5b-552e-4393-a758-ac1e35648ab1' is null.
2017-12-11 15:25:11,016-05 ERROR 
[org.ovirt.engine.core.bll.storage.domain.ActivateStorageDomainCommand] 
(org.ovirt.thread.pool-6-thread-50) [49dfce30-0e2c-43f8-942d-67a2b56b3ef8] 
Cannot connect storage server, aborting Storage Domain activation.
2017-12-11 15:25:11,017-05 INFO  
[org.ovirt.engine.core.bll.storage.domain.ActivateStorageDomainCommand] 
(org.ovirt.thread.pool-6-thread-50) [49dfce30-0e2c-43f8-942d-67a2b56b3ef8] 
Command [id=0fb1fa9d-4002-4fe0-9af2-d30470d5f146]: Compensating 
CHANGED_STATUS_ONLY of 
org.ovirt.engine.core.common.businessentities.StoragePoolIsoMap; snapshot: 
EntityStatusSnapshot:{id='StoragePoolIsoMapId:{storagePoolId='a0cc9e2c-6bff-4d5b-803c-0cd62292c269',
 storageId='5662588b-81d2-4da9-b942-8918004770fe'}', status='Maintenance'}.
2017-12-11 15:25:11,023-05 ERROR 
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
(org.ovirt.thread.pool-6-thread-50) [49dfce30-0e2c-43f8-942d-67a2b56b3ef8] 
EVENT_ID: USER_ACTIVATE_STORAGE_DOMAIN_FAILED(967), Correlation ID: 
49dfce30-0e2c-43f8-942d-67a2b56b3ef8, Job ID: 
2d766e2a-4b35-4822-9580-4eb9df2d9c33, Call Stack: null, Custom ID: null, Custom 
Event ID: -1, Message: Failed to activate Storage Domain ovirt-lg-1-lun1 (Data 
Center ovirt-lg-1) by admin@internal-authz


select id,storage,storage_name,_update_date from storage_domain_static;
                  id                  |                storage                 |      storage_name      |         _update_date
--------------------------------------+----------------------------------------+------------------------+-------------------------------
 072fbaa1-08f3-4a40-9f34-a5ca22dd1d74 | ceab03af-7220-4d42-8f5c-9b557f5d29af   | ovirt-image-repository |
 b66443f4-d949-4fca-9825-8cb98eae9e14 | e462883f-0be1-4b31-acf1-154dd7accbc1   | export                 |
 45807651-8f10-408f-891b-64f1cc577a64 | 452235ea-d8b1-43d1-920f-a16a032e000b   | ovirt-lg-1-iso         | 2017-01-20 20:01:00.284097+00
 a1d803b9-fd0b-477f-a874-53d050d4347b | cd8939df-9cdd-49d3-8dc5-3c06d1836399   | ovirt-lg-1-export      | 2017-01-20 20:01:14.70549+00
 b67e7442-f032-4b5c-a4fe-10422650a90b | c6272852-3586-4584-8496-d97c4370c798   | ovirt-5-iso            | 2017-01-20 20:02:08.668034+00
 a561589c-8eb8-4823-9615-92ac4a1ea94e | cacb8801-6826-42cb-9bab-5552175e0329   | ovirt-lg-2-export      |
 646b331a-b68e-4894-807f-bdc8adae15c9 | a592b780-7bd6-4599-ab7e-12c43cb9279d   | ovirt-lg-2-iso         |
 3ba53a0b-30be-497c-b4df-880e6c6f7567 | 8e5236c9-cbc0-4256-abff-7e984abef65a   | master                 | 2017-12-11 02:12:16.499288+00
 3efa5ed5-4e17-4daf-9cc0-6819a9cd7aae | 8d19b66d-1c1c-4976-91d2-8d2565db367a   | hosted_storage         | 2017-12-11

[ovirt-users] Export Domain no show

2017-12-11 Thread Rajat Patel
Hi Ovirt,

We are using oVirt 4.1 with a self-hosted engine and NFS storage attached as
data/ISO/export domains. We have one image we want to import
(manageiq-ovirt-fine-4.qc2). We copied it to our export location
(/export/3157c57b-8f6a-4709-862a-713bfa59899a) and changed the ownership
(chown -R 36:36 manageiq-ovirt-fine-4.qc2). The issue is that we cannot see it
in the oVirt UI under Storage -> export -> VM Import, nor under Template
Import. At the same time we see no errors in the logs.

Regards
Techieim
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
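
For what it's worth, an export domain only lists VMs and templates that have
OVF metadata under master/vms/ inside the domain directory; a bare qcow2 file
copied into the mount point will not appear in the UI. A minimal sketch of
what to check, assuming the export path quoted above (directory names are the
usual layout, not verified against this particular setup):

# images/ and master/vms/ normally sit under the storage-domain UUID directory
find /export/3157c57b-8f6a-4709-862a-713bfa59899a -maxdepth 3 -type d
# confirm the file itself is a valid qcow2 image
qemu-img info /export/3157c57b-8f6a-4709-862a-713bfa59899a/manageiq-ovirt-fine-4.qc2

For a standalone qcow2 like this, uploading it as a disk through the webadmin
upload function is usually simpler than hand-placing files in the export domain.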


Re: [ovirt-users] 10Gb Networking with OVN

2017-12-11 Thread Dominik Holler
Thanks for the hint, we are going to check this.
You are welcome to share if you have any hints about a possible
bottleneck in your scenario.

On Mon, 11 Dec 2017 11:21:58 -0700
Beau Sapach  wrote:

> Yes, we've done some testing.  With an oVirt VM running on a host
> using 10Gb ethernet, copying data to a physical machine also on
> 10Gb ethernet, we don't see network utilization exceed roughly 800 Mbit/s.
> A bit of research online turns up experimentation by others who used
> SR-IOV to achieve 10Gb from a VM.
> 
> I'm not sure where the bottleneck is, possibly in the VirtIO driver.
> 
> Beau
> 
> On Mon, Dec 11, 2017 at 1:23 AM, Dominik Holler 
> wrote:
> 
> > Is there an indication that the VMs will not take advantage of 10Gb?
> >
> > On Thu, 7 Dec 2017 15:27:25 -0700
> > Beau Sapach  wrote:
> >  
> > > Hello everyone,
> > >
> > > I see here:
> > > https://www.ovirt.org/blog/2017/09/introducing-ovirt-4.2.0/ that
> > > version 4.2 will have OVN support.  Does anyone know if this will
> > > allow VMs to take advantage of 10Gb networking without needing
> > > SR-IOV?
> > >
> > >  
> >
> >  
> 
> 
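
A minimal way to separate the network path from copy/disk overhead is a raw
TCP test between the VM and a physical host, assuming iperf3 is available on
both ends (the address below is a placeholder):

# on the physical 10Gb host
iperf3 -s
# inside the oVirt VM with the virtio NIC; 10.0.0.10 stands in for the server
iperf3 -c 10.0.0.10 -t 30 -P 4    # 30-second run, 4 parallel streams

If iperf3 gets close to line rate while the file copy does not, the limit is
more likely in the copy path (disk, protocol) than in the virtio network driver.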

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Move disk between domains

2017-12-11 Thread Maor Lipchuk
On Mon, Dec 11, 2017 at 6:30 PM, Matthew DeBoer 
wrote:

> The permissions are all ok. vdsm kvm.
>
> This shows up in vdsm.log when the snapshot is tried.
>
>
> 2017-12-08 13:03:13,031-0600 ERROR (jsonrpc/7) [virt.vm]
> (vmId='6a53d8a9-3b4d-4995-8b84-dc920badf0fc') Unable to take snapshot
> (vm:3699)
> Traceback (most recent call last):
>  File "/usr/share/vdsm/virt/vm.py", line 3696, in snapshot
>self._dom.snapshotCreateXML(snapxml, snapFlags)
>  File "/usr/lib/python2.7/site-packages/vdsm/virt/virdomain.py", line 69,
> in f
>ret = attr(*args, **kwargs)
>  File "/usr/lib/python2.7/site-packages/vdsm/libvirtconnection.py", line
> 123, in wrapper
>ret = f(*args, **kwargs)
>  File "/usr/lib/python2.7/site-packages/vdsm/utils.py", line 1006, in
> wrapper
>return func(inst, *args, **kwargs)
>  File "/usr/lib64/python2.7/site-packages/libvirt.py", line 2506, in
> snapshotCreateXML
>if ret is None:raise libvirtError('virDomainSnapshotCreateXML()
> failed', dom=self)
> libvirtError: internal error: unable to execute QEMU command
> 'transaction': Could not read L1 table: Input/output error
>


This looks like a qcow issue; I would try sending it to the qemu-discuss
list at https://lists.nongnu.org/mailman/listinfo/qemu-discuss
Are you using Gluster? I came across a discussion with a similar error,
although it was related to Gluster:
  https://lists.gnu.org/archive/html/qemu-devel/2016-11/msg04742.html
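
Before going further it may be worth checking the volume's qcow2 metadata
directly on the host while the VM is down; a minimal sketch, with the path
left as a placeholder for the real storage pool / domain / image / volume UUIDs:

qemu-img info  /rhev/data-center/<sp-uuid>/<sd-uuid>/images/<img-uuid>/<vol-uuid>
qemu-img check /rhev/data-center/<sp-uuid>/<sd-uuid>/images/<img-uuid>/<vol-uuid>

If qemu-img check already reports the L1 table error outside of libvirt, the
image itself (or the storage underneath it) is the problem rather than the
snapshot operation.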


>
>
> On Sun, Dec 10, 2017 at 6:34 AM, Maor Lipchuk  wrote:
>
>> On Fri, Dec 8, 2017 at 8:01 PM, Matthew DeBoer > > wrote:
>>
>>> When i try to move a specific disk between storage domains i get an
>>> error.
>>>
>>> 2017-12-08 11:26:05,257-06 ERROR [org.ovirt.engine.ui.frontend.
>>> server.gwt.OvirtRemoteLoggingService] (default task-41) [] Permutation
>>> name: 8C01181C3B121D0AAE1312275CC96415
>>> 2017-12-08 11:26:05,257-06 ERROR [org.ovirt.engine.ui.frontend.
>>> server.gwt.OvirtRemoteLoggingService] (default task-41) [] Uncaught
>>> exception: com.google.gwt.core.client.JavaScriptException: (TypeError)
>>> __gwt$exception: : Cannot read property 'F' of null
>>>at org.ovirt.engine.ui.uicommonweb.models.storage.DisksAllocati
>>> onModel$3.$onSuccess(DisksAllocationModel.java:120)
>>>at org.ovirt.engine.ui.uicommonweb.models.storage.DisksAllocati
>>> onModel$3.onSuccess(DisksAllocationModel.java:120)
>>>at 
>>> org.ovirt.engine.ui.frontend.Frontend$2.$onSuccess(Frontend.java:233)
>>> [frontend.jar:]
>>>at 
>>> org.ovirt.engine.ui.frontend.Frontend$2.onSuccess(Frontend.java:233)
>>> [frontend.jar:]
>>>at org.ovirt.engine.ui.frontend.communication.OperationProcesso
>>> r$2.$onSuccess(OperationProcessor.java:139) [frontend.jar:]
>>>at org.ovirt.engine.ui.frontend.communication.OperationProcesso
>>> r$2.onSuccess(OperationProcessor.java:139) [frontend.jar:]
>>>at org.ovirt.engine.ui.frontend.communication.GWTRPCCommunicati
>>> onProvider$5$1.$onSuccess(GWTRPCCommunicationProvider.java:269)
>>> [frontend.jar:]
>>>at org.ovirt.engine.ui.frontend.communication.GWTRPCCommunicati
>>> onProvider$5$1.onSuccess(GWTRPCCommunicationProvider.java:269)
>>> [frontend.jar:]
>>>at com.google.gwt.user.client.rpc.impl.RequestCallbackAdapter.o
>>> nResponseReceived(RequestCallbackAdapter.java:198) [gwt-servlet.jar:]
>>>at 
>>> com.google.gwt.http.client.Request.$fireOnResponseReceived(Request.java:237)
>>> [gwt-servlet.jar:]
>>>at 
>>> com.google.gwt.http.client.RequestBuilder$1.onReadyStateChange(RequestBuilder.java:409)
>>> [gwt-servlet.jar:]
>>>at Unknown.eval(webadmin-0.js@65)
>>>at com.google.gwt.core.client.impl.Impl.apply(Impl.java:296)
>>> [gwt-servlet.jar:]
>>>at com.google.gwt.core.client.impl.Impl.entry0(Impl.java:335)
>>> [gwt-servlet.jar:]
>>>at Unknown.eval(webadmin-0.js@54)
>>>
>>> All the other disks i can move.
>>>
>>> The issue here is how I got this storage domain into oVirt, I think.
>>>
>>> I set up a new cluster using 4.1 coming from 3.6.
>>>
>>> I imported a domain from the 3.6 cluster. I am trying to move this disk
>>> to one of the new storage domains on the 4.1 cluster.
>>>
>>
>>>
>>> Any help would be greatly appreciated
>>>
>>
>>
>> I would try to check the user permissions on that storage domain or the
>> disk
>>
>> Regards,
>> Maor
>>
>>
>>>
>>> ___
>>> Users mailing list
>>> Users@ovirt.org
>>> http://lists.ovirt.org/mailman/listinfo/users
>>>
>>>
>>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] ovirt-engine installation problem

2017-12-11 Thread David David
Hello.

CentOS Linux release 7.4.1708 (Core)



# yum install http://resources.ovirt.org/pub/yum-repo ... ease41.rpm


# yum install ovirt-engine

--> Finished Dependency Resolution
Error: Package: systemd-python-219-42.el7.x86_64 (base)
   Requires: systemd = 219-42.el7
   Installed: systemd-219-42.el7_4.4.x86_64 (@updates)
   systemd = 219-42.el7_4.4
   Available: systemd-219-42.el7.x86_64 (base)
   systemd = 219-42.el7
Error: Package: glibc-2.17-196.el7.i686 (base)
   Requires: glibc-common = 2.17-196.el7
   Installed: glibc-common-2.17-196.el7_4.2.x86_64 (@updates)
   glibc-common = 2.17-196.el7_4.2
   Available: glibc-common-2.17-196.el7.x86_64 (base)
   glibc-common = 2.17-196.el7
 You could try using --skip-broken to work around the problem
 You could try running: rpm -Va --nofiles --nodigest

# yum clean all
# yum update

Neither of these helps.

How can I fix this?
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
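
The mismatch above (systemd = 219-42.el7 wanted while 219-42.el7_4.4 is
installed) usually means the base and updates repositories are serving
inconsistent package sets, typically a stale mirror, which would also explain
why yum clean all did not change anything. A hedged sequence to try, assuming
a mirror or cache problem is the cause:

# drop all cached metadata, then rebuild it; if the same error returns,
# point base/updates at a different mirror in /etc/yum.repos.d and repeat
yum clean all
rm -rf /var/cache/yum
yum makecache fast
yum install ovirt-engine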