[ovirt-users] Re: oVirt on a Single Server

2020-01-20 Thread Tony Brian Albers
On Tue, 2020-01-21 at 07:35 +, webma...@hotmail.com wrote:
> Hello,
> 
> I can't seem to install the self-hosted engine onto local storage. It
> gives me GlusterFS, iSCSI, FC, and NFS as the available options. I'm
> using this in a home-lab scenario, and don't have budget/etc. for
> building out a dedicated NAS for it, or setting up multiple nodes. I
> like the look of oVirt, and wanted to try it with a couple disposable
> VMs (Plex, and a Docker instance I break often). My current best
> thought for how to make it work is to set up NFS on the server, and
> then point the self-hosted engine at the (local) NFS share. Is there
> a better way to do this that I might be overlooking?*
> 
> *Factoring that I don't have the funds to build out a proper storage
> environment, yet.
> 
> (and if anyone asks, I did search for a solution to this, but didn't
> find anything super helpful. Mostly I found 5+ year old articles on a
> similar but different scenario).
> 

Well, if you can live with a regular engine (not self-hosted), this
works:

https://www.ovirt.org/documentation/install-guide/chap-Installing_oVirt.html
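
Roughly, that path looks like the sketch below (the release RPM and package
names are the 4.3-era ones and may differ for your version; the local storage
part is then done in the Administration Portal rather than on the command
line):

yum install https://resources.ovirt.org/pub/yum-repo/ovirt-release43.rpm
yum install ovirt-engine
engine-setup
# then add the same machine as a host in the Administration Portal and create
# a data center whose storage type is "Local" (or point a storage domain at a
# local NFS export).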


HTH

/tony







___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/NT2D5DZWGFOM3MEZZNQ4K3QERITKGN2Y/


[ovirt-users] oVirt on a Single Server

2020-01-20 Thread webmattr
Hello,

I can't seem to install the self-hosted engine onto local storage. It gives me 
GlusterFS, iSCSI, FC, and NFS as the available options. I'm using this in a 
home-lab scenario, and don't have budget/etc. for building out a dedicated NAS 
for it, or setting up multiple nodes. I like the look of oVirt, and wanted to 
try it with a couple of disposable VMs (Plex, and a Docker instance I break 
often). My current best thought for how to make it work is to set up NFS on the 
server, and then point the self-hosted engine at the (local) NFS share. Is 
there a better way to do this that I might be overlooking?*

*Factoring that I don't have the funds to build out a proper storage 
environment, yet.

(and if anyone asks, I did search for a solution to this, but didn't find 
anything super helpful. Mostly I found 5+ year old articles on a similar but 
different scenario).
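
For what it's worth, the local-NFS idea described above would look roughly
like this (the path, subnet and export options are only examples; the 36:36
ownership is what vdsm expects on NFS storage domains):

mkdir -p /exports/he-storage
chown 36:36 /exports/he-storage        # vdsm:kvm
echo '/exports/he-storage 192.168.1.0/24(rw,anonuid=36,anongid=36,all_squash)' >> /etc/exports
systemctl enable --now nfs-server
exportfs -rav
showmount -e localhost                 # sanity check before pointing hosted-engine deploy at it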
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/OB5AWOMHJZVR5INCPE37YAOVPWMECT2Y/


[ovirt-users] Re: VM migrations stalling over migration-only network

2020-01-20 Thread Ben
The log from the start of the migration:

2020-01-20 20:27:11,027-0500 INFO  (jsonrpc/5) [api.virt] START
migrate(params={u'incomingLimit': 2, u'src': u'vhost2.my.domain.name',
u'dstqemu': u'10.0.20.100', u'autoConverge': u'true', u'tunneled':
u'false', u'enableGuestEvents': True, u'dst': u'vhost1.my.domain.name:54321',
u'convergenceSchedule': {u'init': [{u'params': [u'100'], u'name':
u'setDowntime'}], u'stalling': [{u'action': {u'params': [u'150'], u'name':
u'setDowntime'}, u'limit': 1}, {u'action': {u'params': [u'200'], u'name':
u'setDowntime'}, u'limit': 2}, {u'action': {u'params': [u'300'], u'name':
u'setDowntime'}, u'limit': 3}, {u'action': {u'params': [u'400'], u'name':
u'setDowntime'}, u'limit': 4}, {u'action': {u'params': [u'500'], u'name':
u'setDowntime'}, u'limit': 6}, {u'action': {u'params': [], u'name':
u'abort'}, u'limit': -1}]}, u'vmId':
u'a24fd7e3-161c-451e-8880-b3e7e1f7d86f', u'abortOnError': u'true',
u'outgoingLimit': 2, u'compressed': u'false', u'maxBandwidth': 125,
u'method': u'online'}) from=:::10.0.0.20,40308,
flow_id=fc4e0792-a3a0-425f-b0b6-bcaf5e0f4775,
vmId=a24fd7e3-161c-451e-8880-b3e7e1f7d86f (api:48)
2020-01-20 20:27:13,367-0500 INFO  (migsrc/a24fd7e3) [virt.vm]
(vmId='a24fd7e3-161c-451e-8880-b3e7e1f7d86f') Creation of destination VM
took: 2 seconds (migration:469)
2020-01-20 20:27:13,367-0500 INFO  (migsrc/a24fd7e3) [virt.vm]
(vmId='a24fd7e3-161c-451e-8880-b3e7e1f7d86f') starting migration to
qemu+tls://vhost1.my.domain.name/system with miguri tcp://10.0.20.100
(migration:498)

That appears to all be in order, as 10.0.20.100 is the correct IP address
of the migration interface on the destination host.


The netcat also looks good:

[root@vhost2 ~]# nc -vz 10.0.20.100 54321
Ncat: Version 7.50 ( https://nmap.org/ncat )
Ncat: Connected to 10.0.20.100:54321.
Ncat: 0 bytes sent, 0 bytes received in 0.02 seconds.


However, the final test is very telling:

[root@vhost2 ~]# ping -M do -s $((9000 - 28)) 10.0.20.100
PING 10.0.20.100 (10.0.20.100) 8972(9000) bytes of data.
^C
--- 10.0.20.100 ping statistics ---
14 packets transmitted, 0 received, 100% packet loss, time 12999ms

I don't think my switch is handling the MTU setting, even though it is
configured to do so. I will have to investigate further.
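
A couple of quick follow-up checks that may help narrow it down (interface and
IP names are the ones from this thread):

ping -M do -s $((1500 - 28)) 10.0.20.100         # does a standard-sized frame get through?
ip -d link show bond0.20 | grep -o 'mtu [0-9]*'  # is MTU 9000 really applied on the VLAN interface?
ip -d link show bond0 | grep -o 'mtu [0-9]*'     # ...and on the underlying bond?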

-Ben


On Mon, Jan 20, 2020 at 3:54 PM Dominik Holler  wrote:

>
>
> On Mon, Jan 20, 2020 at 2:34 PM Milan Zamazal  wrote:
>
>> Ben  writes:
>>
>> > Hi Milan,
>> >
>> > Thanks for your reply. I checked the firewall, and saw that both the
>> bond0
>> > interface and the VLAN interface bond0.20 had been added to the default
>> > zone, which I believe should provide the necessary firewall access
>> (output
>> > below)
>> >
>> > I double-checked the destination host's VDSM logs and wasn't able to
>> find
>> > any warning or error-level logs during the migration timeframe.
>> >
>> > I checked the migration_port_* and *_port settings in qemu.conf and
>> > libvirtd.conf and all lines are commented. I have not modified either
>> file.
>>
>> The commented out settings define the default port used for migrations,
>> so they are valid even when commented out.  I can see you have
>> libvirt-tls open below, not sure about the QEMU ports.  If migration
>> works when not using a separate migration network then it should work
>> with the same rules for the migration network, so I think your settings
>> are OK.
>>
>> The fact that you don't get any better explanation than "unexpectedly
>> failed" and that it fails before transferring any data indicates a
>> possible networking error, but I can't help with that, someone with
>> networking knowledge should.
>>
>>
> Can you please share the relevant lines which logs the start of the
> migration on the source host from vdsm.log?
> This line should contain the IP address on migration network of the
> destination host.
> Please note that there are two network connections: 

[ovirt-users] Re: VM migrations stalling over migration-only network

2020-01-20 Thread Ben
Hi Milan,

Thanks for your reply. I checked the firewall, and saw that both the bond0
interface and the VLAN interface bond0.20 had been added to the default
zone, which I believe should provide the necessary firewall access (output
below)

I double-checked the destination host's VDSM logs and wasn't able to find
any warning or error-level logs during the migration timeframe.

I checked the migration_port_* and *_port settings in qemu.conf and
libvirtd.conf and all lines are commented. I have not modified either file.

[root@vhost2 vdsm]# firewall-cmd --list-all
public (active)
  target: default
  icmp-block-inversion: no
  interfaces: bond0 bond0.20 em1 em2 migration ovirtmgmt p1p1
  sources:
  services: cockpit dhcpv6-client libvirt-tls ovirt-imageio ovirt-vmconsole
snmp ssh vdsm
  ports: 1311/tcp 22/tcp 6081/udp 5666/tcp
  protocols:
  masquerade: no
  forward-ports:
  source-ports:
  icmp-blocks:
  rich rules:

On Mon, Jan 20, 2020 at 6:29 AM Milan Zamazal  wrote:

> Ben  writes:
>
> > Hi, I'm pretty stuck at the moment so I hope someone can help me.
> >
> > I have an oVirt 4.3 data center with two hosts. Recently, I attempted to
> > segregate migration traffic from the standard ovirtmgmt network,
> where
> > the VM traffic and all other traffic resides.
> >
> > I set up the VLAN on my router and switch, and created LACP bonds on both
> > hosts, tagging them with the VLAN ID. I confirmed the routes work fine,
> and
> > traffic speeds are as expected. MTU is set to 9000.
> >
> > After configuring the migration network in the cluster and dragging and
> > dropping it onto the bonds on each host, VMs fail to migrate.
> >
> > oVirt is not reporting any issues with the network interfaces or sync
> with
> > the hosts. However, when I attempt to live-migrate a VM, progress gets to
> > 1% and stalls. The transfer rate is 0Mbps, and the operation eventually
> > fails.
> >
> > I have not been able to identify anything useful in the VDSM logs on the
> > source or destination hosts, or in the engine logs. It repeats the below
> > WARNING and INFO logs for the duration of the process, then logs the last
> > entries when it fails. I can provide more logs if it would help. I'm not
> > even sure where to start -- since I am a novice at networking, at best,
> my
> > suspicion the entire time was that something is misconfigured in my
> > network. However, the routes are good, speed tests are fine, and I can't
> > find anything else wrong with the connections. It's not impacting any
> other
> > traffic over the bond interfaces.
> >
> > Are there other requirements that must be met for VMs to migrate over a
> > separate interface/network?
>
> Hi, did you check your firewall settings?  Are the required ports open?
> See migration_port_* options in /etc/libvirt/qemu.conf and *_port
> options in /etc/libvirt/libvirtd.conf.
>
> Is there any error reported in the destination vdsm.log?
>
> Regards,
> Milan
>
> > 2020-01-12 03:18:28,245-0500 WARN  (migmon/a24fd7e3) [virt.vm]
> > (vmId='a24fd7e3-161c-451e-8880-b3e7e1f7d86f') Migration stalling:
> remaining
> > (4191MiB) > lowmark (4191MiB). (migration:854)
> > 2020-01-12 03:18:28,245-0500 INFO  (migmon/a24fd7e3) [virt.vm]
> > (vmId='a24fd7e3-161c-451e-8880-b3e7e1f7d86f') Migration Progress: 930.341
> > seconds elapsed, 1% of data processed, total data: 4192MB, processed
> data:
> > 0MB, remaining data: 4191MB, transfer speed 0MBps, zero pages: 149MB,
> > compressed: 0MB, dirty rate: 0, memory iteration: 1 (migration:881)
> > 2020-01-12 03:18:31,386-0500 ERROR (migsrc/a24fd7e3) [virt.vm]
> > (vmId='a24fd7e3-161c-451e-8880-b3e7e1f7d86f') operation failed: migration
> > out job: unexpectedly failed (migration:282)
> > 2020-01-12 03:18:32,695-0500 ERROR (migsrc/a24fd7e3) [virt.vm]
> > (vmId='a24fd7e3-161c-451e-8880-b3e7e1f7d86f') Failed to migrate
> > (migration:450)
> >   File "/usr/lib/python2.7/site-packages/vdsm/virt/migration.py", line
> 431,
> > in _regular_run
> > time.time(), migrationParams, machineParams
> >   File "/usr/lib/python2.7/site-packages/vdsm/virt/migration.py", line
> 505,
> > in _startUnderlyingMigration
> >   File "/usr/lib/python2.7/site-packages/vdsm/virt/migration.py", line
> 591,
> > in _perform_with_conv_schedule
> > self._perform_migration(duri, muri)
> >   File "/usr/lib/python2.7/site-packages/vdsm/virt/migration.py", line
> 525,
> > in _perform_migration
> > self._migration_flags)
> > libvirtError: operation failed: migration out job: unexpectedly failed
> > 2020-01-12 03:18:40,880-0500 INFO  (jsonrpc/6) [api.virt] FINISH
> > getMigrationStatus return={'status': {'message': 'Done', 'code': 0},
> > 'migrationStats': {'status': {'message': 'Fatal error during migration',
> > 'code': 12}, 'progress': 1L}} from=:::10.0.0.20,41462,
> > vmId=a24fd7e3-161c-451e-8880-b3e7e1f7d86f (api:54)
> > ___
> > Users mailing list -- users@ovirt.org
> > To unsubscribe send an email to users-le...@ovirt.org
> 

[ovirt-users] Re: Gluster: a lot of Number of entries in heal pending

2020-01-20 Thread Stefan Wolf
Hi Strahil,

Yes, it is a replica 4 set.
I've tried to stop and start every gluster server,
and I've rebooted every server.

Or should I remove the brick and add it again?

bye
stefan
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/LLZ7NIGVYB33PXA3ZLJBK4GDNSSFWJHU/


[ovirt-users] Re: Gluster: a lot of Number of entries in heal pending

2020-01-20 Thread Strahil Nikolov
On January 20, 2020 8:15:03 PM GMT+02:00, Stefan Wolf  wrote:
>Yes, I've already tried a full heal a week ago.
>
>How do I perform a manual heal?
>
>I only have this gfid:
>
>
>
>
>
>
>
>
>
>
>
>Status: Connected
>Number of entries: 868
>
>I ve tried to heal it with:
>[root@kvm10 ~]# gluster volume heal data split-brain latest-mtime
>gfid:c2b47c5c-89b6-49ac-bf10-1733dd8f0902
>Healing gfid:c2b47c5c-89b6-49ac-bf10-1733dd8f0902 failed: File not in
>split-brain.
>Volume heal failed.
>
>(the last entry)
>
>And if I understood it correctly, there is no split-brain:
>
>[root@kvm10 ~]# gluster volume heal data info split-brain
>Brick kvm10:/gluster_bricks/data
>Status: Connected
>Number of entries in split-brain: 0
>
>Brick kvm320.durchhalten.intern:/gluster_bricks/data
>Status: Connected
>Number of entries in split-brain: 0
>
>Brick kvm360.durchhalten.intern:/gluster_bricks/data
>Status: Connected
>Number of entries in split-brain: 0
>
>Brick kvm380.durchhalten.intern:/gluster_bricks/data
>Status: Connected
>Number of entries in split-brain: 0
>
>___
>Users mailing list -- users@ovirt.org
>To unsubscribe send an email to users-le...@ovirt.org
>Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>oVirt Code of Conduct:
>https://www.ovirt.org/community/about/community-guidelines/
>List Archives:
>https://lists.ovirt.org/archives/list/users@ovirt.org/message/AQMSKCYGKKKUZIRXSKBW2VWWPLYVEX7A/

Hi Stefan,

Did you recently reboot or stop gluster on kvm320.durchhalten.intern?
Also, what kind of volume is that - replica 4?

If kvm320 was indeed rebooted, my guess is that you can rsync the files
from a good brick to this one and then run a full heal, so gluster gets
notified. Don't rsync the whole brick, just the folders that contain the
'needs-healing' files.
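
Something along these lines, as a rough sketch only (brick path, volume name
and hostname are the ones from this thread; the directory is a placeholder,
and the exact rsync flags, e.g. for xattrs, are worth confirming on the
gluster list first):

# on a host with a healthy brick, copy only the affected directories
rsync -avH /gluster_bricks/data/<dir-with-pending-files>/ \
    kvm320.durchhalten.intern:/gluster_bricks/data/<dir-with-pending-files>/
# then ask gluster to re-check
gluster volume heal data full
gluster volume heal data info summary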


Of course, it's better to address it in the gluster users' forum, as many devs
are watching it and might help with the resolution (if it is a bug).

Best Regards,
Strahil Nikolov
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/SZRS6RKQET5QSSLPQWXZYLVOKLGNGDDK/


[ovirt-users] Re: VM migrations stalling over migration-only network

2020-01-20 Thread Dominik Holler
On Mon, Jan 20, 2020 at 2:34 PM Milan Zamazal  wrote:

> Ben  writes:
>
> > Hi Milan,
> >
> > Thanks for your reply. I checked the firewall, and saw that both the
> bond0
> > interface and the VLAN interface bond0.20 had been added to the default
> > zone, which I believe should provide the necessary firewall access
> (output
> > below)
> >
> > I double-checked the destination host's VDSM logs and wasn't able to find
> > any warning or error-level logs during the migration timeframe.
> >
> > I checked the migration_port_* and *_port settings in qemu.conf and
> > libvirtd.conf and all lines are commented. I have not modified either
> file.
>
> The commented out settings define the default port used for migrations,
> so they are valid even when commented out.  I can see you have
> libvirt-tls open below, not sure about the QEMU ports.  If migration
> works when not using a separate migration network then it should work
> with the same rules for the migration network, so I think your settings
> are OK.
>
> The fact that you don't get any better explanation than "unexpectedly
> failed" and that it fails before transferring any data indicates a
> possible networking error, but I can't help with that, someone with
> networking knowledge should.
>
>
Can you please share the relevant lines from vdsm.log which log the start of
the migration on the source host?
These lines should contain the IP address of the destination host on the
migration network.
Please note that there are two network connections: libvirt's control data is
transmitted encrypted on the management network, while qemu's data is
transmitted on the migration network.
On the source host, can you please run:
ping -M do -s $((9000 - 28)) dest_ip_address_on_migration_network_from_vdsm_log
nc -vz dest_ip_address_on_migration_network_from_vdsm_log dest_port_on_migration_network_from_vdsm_log




> You can also try to enable libvirt debugging on both the sides in
> /etc/libvirt/libvirtd.conf and restart libvirt (beware, those logs are
> huge).  libvirt logs should report some error.
>
> > [root@vhost2 vdsm]# firewall-cmd --list-all
> > public (active)
> >   target: default
> >   icmp-block-inversion: no
> >   interfaces: bond0 bond0.20 em1 em2 migration ovirtmgmt p1p1
> >   sources:
> >   services: cockpit dhcpv6-client libvirt-tls ovirt-imageio
> ovirt-vmconsole
> > snmp ssh vdsm
> >   ports: 1311/tcp 22/tcp 6081/udp 5666/tcp
> >   protocols:
> >   masquerade: no
> >   forward-ports:
> >   source-ports:
> >   icmp-blocks:
> >   rich rules:
> >
> > On Mon, Jan 20, 2020 at 6:29 AM Milan Zamazal 
> wrote:
> >
> >> Ben  writes:
> >>
> >> > Hi, I'm pretty stuck at the moment so I hope someone can help me.
> >> >
> >> > I have an oVirt 4.3 data center with two hosts. Recently, I attempted
> to
> >> > segregate migration traffic from the standard ovirtmgmt network,
> >> where
> >> > the VM traffic and all other traffic resides.
> >> >
> >> > I set up the VLAN on my router and switch, and created LACP bonds on
> both
> >> > hosts, tagging them with the VLAN ID. I confirmed the routes work
> fine,
> >> and
> >> > traffic speeds are as expected. MTU is set to 9000.
> >> >
> >> > After configuring the migration network in the cluster and dragging
> and
> >> > dropping it onto the bonds on each host, VMs fail to migrate.
> >> >
> >> > oVirt is not reporting any issues with the network interfaces or sync
> >> with
> >> > the hosts. However, when I attempt to live-migrate a VM, progress
> gets to
> >> > 1% and stalls. The transfer rate is 0Mbps, and the operation
> eventually
> >> > fails.
> >> >
> >> > I have not been able to identify anything useful in the VDSM logs on
> the
> >> > source or destination hosts, or in the engine logs. It repeats the
> below
> >> > WARNING and INFO logs for the duration of the process, then logs the
> last
> >> > entries when it fails. I can provide more logs if it would help. I'm
> not
> >> > even sure where to start -- since I am a novice at networking, at
> best,
> >> my
> >> > suspicion the entire time was that something is misconfigured in my
> >> > network. However, the routes are good, speed tests are fine, and I
> can't
> >> > find anything else wrong with the connections. It's not impacting any
> >> other
> >> > traffic over the bond interfaces.
> >> >
> >> > Are there other requirements that must be met for VMs to migrate over
> a
> >> > separate interface/network?
> >>
> >> Hi, did you check your firewall settings?  Are the required ports open?
> >> See migration_port_* options in /etc/libvirt/qemu.conf and *_port
> >> options in /etc/libvirt/libvirtd.conf.
> >>
> >> Is there any error reported in the destination vdsm.log?
> >>
> >> Regards,
> >> Milan
> >>
> >> > 2020-01-12 03:18:28,245-0500 WARN  (migmon/a24fd7e3) [virt.vm]
> >> > (vmId='a24fd7e3-161c-451e-8880-b3e7e1f7d86f') Migration stalling:
> >> remaining
> >> > (4191MiB) > lowmark (4191MiB). (migration:854)
> >> > 2020-01-12 03:18:28,245-0500 INFO  (migmon/a24fd7e3) 

[ovirt-users] Re: Ovirt backup

2020-01-20 Thread Jayme
Looking at oVirt ansible roles, I wonder if it would be easy to implement
VM backups using the ovirt-snapshot-module to create a VM snapshot and
download it.

On Mon, Jan 20, 2020 at 5:04 AM Nathanaël Blanchet  wrote:

>
> On 19/01/2020 at 18:38, Jayme wrote:
>
> The biggest problem with these tools is that they are very inefficient.
> To work they snapshot the VM then clone the snapshot into a new VM, back it
> up then delete.  This takes a lot of space and time.
>
> vProtect and some other enterprise backup software snapshot the VM and
> stream the snapshot from the API without needing to clone or using a proxy
> VM.
>
> At the same time, this workflow is the one recommended by the ovirt team (
> https://www.ovirt.org/develop/release-management/features/storage/backup-restore-api-integration.html).
> If it is not efficient enough, the oVirt team should update the process and
> advise users of a better practice for VM backup in current/future oVirt
> 4.3/4.4.
>
> The new version of vProtect even bypasses the API (because it's slow) and
> now supports streaming over SSH directly from the host.  This is the ideal
> solution for oVirt VM backups imo, but I don't know if any free tool exists
> that can offer the same functionality.
>
> On Sun, Jan 19, 2020 at 1:03 PM Torsten Stolpmann <
> torsten.stolpm...@verit.de> wrote:
>
>> I am still using https://github.com/wefixit-AT/oVirtBackup but since
>> support for the v3 API will be removed with oVirt 4.4 it will stop
>> working with this release. For this reason I can no longer recommend it
>> but it served me well the past few years.
>>
>> There is also https://github.com/jb-alvarado/ovirt-vm-backup which has
>> similar functionality but I have yet no first-hand experience with this.
>>
>> Hope this helps.
>>
>> Torsten
>>
>> On 19.01.2020 10:05, Nazan CENGİZ wrote:
>> > Hi all,
>> >
>> >
>> > I want to back up Ovirt for free. Is there a script, project or tool
>> > that you can recommend for this?
>> >
>> >
>> > Is there a project that you can test, both backup and restore process
>> > can work properly?
>> >
>> >
>> > Best Regards,
>> >
>> > Nazan.
>> >
>> >
>> >
>> > 
>> > Nazan CENGİZ
>> > R&D Engineer
>> > Mustafa Kemal Mahallesi 2120 Cad. No:39 06510 Çankaya Ankara TÜRKİYE
>> >   +90 312 219 57 87   +90 312 219 57 97
>> >
>> >
>> >
>> >
>> > ___
>> > Users mailing list -- users@ovirt.org
>> > To unsubscribe send an email to users-le...@ovirt.org
>> > Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>> > oVirt Code of Conduct:
>> https://www.ovirt.org/community/about/community-guidelines/
>> > List Archives:
>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/G56O76VB5WO3MV2URL4OH3KNZMQRSKU4/
>> >
>> ___
>> Users mailing list -- users@ovirt.org
>> To unsubscribe send an email to users-le...@ovirt.org
>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>> oVirt Code of Conduct:
>> https://www.ovirt.org/community/about/community-guidelines/
>> List Archives:
>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/2LGGH7UEC3RBNELT57YF7255FYORSMGZ/
>>
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/H6JDPEBGWJY3KDRIKV2MJSJB64ZPZ3FS/
>
> --
> Nathanaël Blanchet
>
> Supervision réseau
> SIRE
> 227 avenue Professeur-Jean-Louis-Viala
> 34193 MONTPELLIER CEDEX 5 
> Tél. 33 (0)4 67 54 84 55
> Fax  33 (0)4 67 54 84 14
> blanc...@abes.fr
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/ZSR6JQTJ5MYKC7AS4CZXNW33MQG6UOPW/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 

[ovirt-users] Re: Gluster: a lot of Number of entries in heal pending

2020-01-20 Thread Stefan Wolf
Yes, I've already tried a full heal a week ago.

How do I perform a manual heal?

I only have this gfid:











Status: Connected
Number of entries: 868

I ve tried to heal it with:
[root@kvm10 ~]# gluster volume heal data split-brain latest-mtime 
gfid:c2b47c5c-89b6-49ac-bf10-1733dd8f0902
Healing gfid:c2b47c5c-89b6-49ac-bf10-1733dd8f0902 failed: File not in 
split-brain.
Volume heal failed.

(the last entry)

And if I understood it correctly, there is no split-brain:

[root@kvm10 ~]# gluster volume heal data info split-brain
Brick kvm10:/gluster_bricks/data
Status: Connected
Number of entries in split-brain: 0

Brick kvm320.durchhalten.intern:/gluster_bricks/data
Status: Connected
Number of entries in split-brain: 0

Brick kvm360.durchhalten.intern:/gluster_bricks/data
Status: Connected
Number of entries in split-brain: 0

Brick kvm380.durchhalten.intern:/gluster_bricks/data
Status: Connected
Number of entries in split-brain: 0

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/AQMSKCYGKKKUZIRXSKBW2VWWPLYVEX7A/


[ovirt-users] Re: Gluster: a lot of Number of entries in heal pending

2020-01-20 Thread Jayme
I would try running a full heal first and give it some time to see if it
clears up.  I.e. gluster volume heal  full

If that doesn't work, you could try stat on every file to trigger healing
doing something like this: find /fuse-mountpoint -iname '*' -exec stat {} \;
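
In concrete terms, assuming the 'data' volume from this thread and the usual
oVirt fuse mountpoint (the path below is an example; adjust it to what 'mount'
shows on your host):

find /rhev/data-center/mnt/glusterSD/kvm10:_data -iname '*' -exec stat {} \; > /dev/null
gluster volume heal data info summary   # check whether the pending count drops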

On Mon, Jan 20, 2020 at 12:16 PM Stefan Wolf  wrote:

> Hello to all,
>
> I have a problem with gluster:
>
> [root@kvm10 ~]# gluster volume heal data info summary
> Brick kvm10:/gluster_bricks/data
> Status: Connected
> Total Number of entries: 868
> Number of entries in heal pending: 868
> Number of entries in split-brain: 0
> Number of entries possibly healing: 0
>
> Brick kvm320.durchhalten.intern:/gluster_bricks/data
> Status: Connected
> Total Number of entries: 1
> Number of entries in heal pending: 1
> Number of entries in split-brain: 0
> Number of entries possibly healing: 0
>
> Brick kvm360.durchhalten.intern:/gluster_bricks/data
> Status: Connected
> Total Number of entries: 867
> Number of entries in heal pending: 867
> Number of entries in split-brain: 0
> Number of entries possibly healing: 0
>
> Brick kvm380.durchhalten.intern:/gluster_bricks/data
> Status: Connected
> Total Number of entries: 868
> Number of entries in heal pending: 868
> Number of entries in split-brain: 0
> Number of entries possibly healing: 0
>
> [root@kvm10 ~]# gluster volume heal data info split-brain
> Brick kvm10:/gluster_bricks/data
> Status: Connected
> Number of entries in split-brain: 0
>
> Brick kvm320.durchhalten.intern:/gluster_bricks/data
> Status: Connected
> Number of entries in split-brain: 0
>
> Brick kvm360.durchhalten.intern:/gluster_bricks/data
> Status: Connected
> Number of entries in split-brain: 0
>
> Brick kvm380.durchhalten.intern:/gluster_bricks/data
> Status: Connected
> Number of entries in split-brain: 0
>
> As I understand it, there is no split-brain, but 868 files are in heal
> pending state.
> I've restarted every node.
>
> I've also tried:
> [root@kvm10 ~]# gluster volume heal data full
> Launching heal operation to perform full self heal on volume data has been
> successful
> Use heal info commands to check status.
>
> but even after a week there is no real change (I started with 912
> entries in heal pending).
>
> Can somebody tell me what exactly the problem is and how I can solve it?
>
> thank you very much
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/PN63LC3OBQOM7IQY763ZS5V6VZDUFPNP/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/UXHQHUV7HA3AQI7VZFF5W22DB2STT5VJ/


[ovirt-users] Gluster: a lot of Number of entries in heal pending

2020-01-20 Thread Stefan Wolf
Hello to all,

I have a problem with gluster:

[root@kvm10 ~]# gluster volume heal data info summary
Brick kvm10:/gluster_bricks/data
Status: Connected
Total Number of entries: 868
Number of entries in heal pending: 868
Number of entries in split-brain: 0
Number of entries possibly healing: 0

Brick kvm320.durchhalten.intern:/gluster_bricks/data
Status: Connected
Total Number of entries: 1
Number of entries in heal pending: 1
Number of entries in split-brain: 0
Number of entries possibly healing: 0

Brick kvm360.durchhalten.intern:/gluster_bricks/data
Status: Connected
Total Number of entries: 867
Number of entries in heal pending: 867
Number of entries in split-brain: 0
Number of entries possibly healing: 0

Brick kvm380.durchhalten.intern:/gluster_bricks/data
Status: Connected
Total Number of entries: 868
Number of entries in heal pending: 868
Number of entries in split-brain: 0
Number of entries possibly healing: 0

[root@kvm10 ~]# gluster volume heal data info split-brain
Brick kvm10:/gluster_bricks/data
Status: Connected
Number of entries in split-brain: 0

Brick kvm320.durchhalten.intern:/gluster_bricks/data
Status: Connected
Number of entries in split-brain: 0

Brick kvm360.durchhalten.intern:/gluster_bricks/data
Status: Connected
Number of entries in split-brain: 0

Brick kvm380.durchhalten.intern:/gluster_bricks/data
Status: Connected
Number of entries in split-brain: 0

As I understand it, there is no split-brain, but 868 files are in heal pending state.
I've restarted every node.

I've also tried:
[root@kvm10 ~]# gluster volume heal data full
Launching heal operation to perform full self heal on volume data has been 
successful
Use heal info commands to check status.

but even after a week there is no real change (I started with 912 entries
in heal pending).

Can somebody tell me what exactly the problem is and how I can solve it?

thank you very much
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/PN63LC3OBQOM7IQY763ZS5V6VZDUFPNP/


[ovirt-users] Re: VM migrations stalling over migration-only network

2020-01-20 Thread Milan Zamazal
Ben  writes:

> Hi Milan,
>
> Thanks for your reply. I checked the firewall, and saw that both the bond0
> interface and the VLAN interface bond0.20 had been added to the default
> zone, which I believe should provide the necessary firewall access (output
> below)
>
> I double-checked the destination host's VDSM logs and wasn't able to find
> any warning or error-level logs during the migration timeframe.
>
> I checked the migration_port_* and *_port settings in qemu.conf and
> libvirtd.conf and all lines are commented. I have not modified either file.

The commented out settings define the default port used for migrations,
so they are valid even when commented out.  I can see you have
libvirt-tls open below, not sure about the QEMU ports.  If migration
works when not using a separate migration network then it should work
with the same rules for the migration network, so I think your settings
are OK.
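
For reference, the commented-out defaults usually look like this (copied from
stock config files; the exact values can vary between libvirt versions, so
treat them as an example):

# /etc/libvirt/qemu.conf
#migration_port_min = 49152
#migration_port_max = 49215
# /etc/libvirt/libvirtd.conf
#tls_port = "16514"
#tcp_port = "16509"

If the QEMU port range turned out to be the problem, opening it explicitly on
both hosts (e.g. firewall-cmd --add-port=49152-49215/tcp --permanent, then
firewall-cmd --reload) would be one way to rule that out.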

The fact that you don't get any better explanation than "unexpectedly
failed" and that it fails before transferring any data indicates a
possible networking error, but I can't help with that, someone with
networking knowledge should.

You can also try to enable libvirt debugging on both the sides in
/etc/libvirt/libvirtd.conf and restart libvirt (beware, those logs are
huge).  libvirt logs should report some error.
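
For example, something like this in /etc/libvirt/libvirtd.conf on both hosts
(a sketch; the filter list can be trimmed, and it should be reverted afterwards
because these logs grow quickly):

log_filters="1:qemu 1:libvirt 1:security 1:util"
log_outputs="1:file:/var/log/libvirt/libvirtd.log"
# then: systemctl restart libvirtd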

> [root@vhost2 vdsm]# firewall-cmd --list-all
> public (active)
>   target: default
>   icmp-block-inversion: no
>   interfaces: bond0 bond0.20 em1 em2 migration ovirtmgmt p1p1
>   sources:
>   services: cockpit dhcpv6-client libvirt-tls ovirt-imageio ovirt-vmconsole
> snmp ssh vdsm
>   ports: 1311/tcp 22/tcp 6081/udp 5666/tcp
>   protocols:
>   masquerade: no
>   forward-ports:
>   source-ports:
>   icmp-blocks:
>   rich rules:
>
> On Mon, Jan 20, 2020 at 6:29 AM Milan Zamazal  wrote:
>
>> Ben  writes:
>>
>> > Hi, I'm pretty stuck at the moment so I hope someone can help me.
>> >
>> > I have an oVirt 4.3 data center with two hosts. Recently, I attempted to
>> > segregate migration traffic from the standard ovirtmgmt network,
>> where
>> > the VM traffic and all other traffic resides.
>> >
>> > I set up the VLAN on my router and switch, and created LACP bonds on both
>> > hosts, tagging them with the VLAN ID. I confirmed the routes work fine,
>> and
>> > traffic speeds are as expected. MTU is set to 9000.
>> >
>> > After configuring the migration network in the cluster and dragging and
>> > dropping it onto the bonds on each host, VMs fail to migrate.
>> >
>> > oVirt is not reporting any issues with the network interfaces or sync
>> with
>> > the hosts. However, when I attempt to live-migrate a VM, progress gets to
>> > 1% and stalls. The transfer rate is 0Mbps, and the operation eventually
>> > fails.
>> >
>> > I have not been able to identify anything useful in the VDSM logs on the
>> > source or destination hosts, or in the engine logs. It repeats the below
>> > WARNING and INFO logs for the duration of the process, then logs the last
>> > entries when it fails. I can provide more logs if it would help. I'm not
>> > even sure where to start -- since I am a novice at networking, at best,
>> my
>> > suspicion the entire time was that something is misconfigured in my
>> > network. However, the routes are good, speed tests are fine, and I can't
>> > find anything else wrong with the connections. It's not impacting any
>> other
>> > traffic over the bond interfaces.
>> >
>> > Are there other requirements that must be met for VMs to migrate over a
>> > separate interface/network?
>>
>> Hi, did you check your firewall settings?  Are the required ports open?
>> See migration_port_* options in /etc/libvirt/qemu.conf and *_port
>> options in /etc/libvirt/libvirtd.conf.
>>
>> Is there any error reported in the destination vdsm.log?
>>
>> Regards,
>> Milan
>>
>> > 2020-01-12 03:18:28,245-0500 WARN  (migmon/a24fd7e3) [virt.vm]
>> > (vmId='a24fd7e3-161c-451e-8880-b3e7e1f7d86f') Migration stalling:
>> remaining
>> > (4191MiB) > lowmark (4191MiB). (migration:854)
>> > 2020-01-12 03:18:28,245-0500 INFO  (migmon/a24fd7e3) [virt.vm]
>> > (vmId='a24fd7e3-161c-451e-8880-b3e7e1f7d86f') Migration Progress: 930.341
>> > seconds elapsed, 1% of data processed, total data: 4192MB, processed
>> data:
>> > 0MB, remaining data: 4191MB, transfer speed 0MBps, zero pages: 149MB,
>> > compressed: 0MB, dirty rate: 0, memory iteration: 1 (migration:881)
>> > 2020-01-12 03:18:31,386-0500 ERROR (migsrc/a24fd7e3) [virt.vm]
>> > (vmId='a24fd7e3-161c-451e-8880-b3e7e1f7d86f') operation failed: migration
>> > out job: unexpectedly failed (migration:282)
>> > 2020-01-12 03:18:32,695-0500 ERROR (migsrc/a24fd7e3) [virt.vm]
>> > (vmId='a24fd7e3-161c-451e-8880-b3e7e1f7d86f') Failed to migrate
>> > (migration:450)
>> >   File "/usr/lib/python2.7/site-packages/vdsm/virt/migration.py", line
>> 431,
>> > in _regular_run
>> > time.time(), migrationParams, machineParams
>> >   File 

[ovirt-users] Re: oVirt failing to make a template of Centos 7 VM using seal

2020-01-20 Thread Strahil Nikolov
On January 20, 2020 10:48:08 AM GMT+02:00, damien.alt...@gmail.com wrote:
>Hi there,
>
> 
>
>My oVirt engine is failing to create a template from a Centos 7 VM, the
>/var/log/vdsm/vdsm.log is as follows:
>
> 
>
>)
>
>2020-01-20 19:39:07,332+1100 ERROR (virt/875f8036) [root] Job
>u'875f8036-e28a-4741-b3a3-046cc711d252' failed (jobs:221)
>
>Traceback (most recent call last):
>
> File "/usr/lib/python2.7/site-packages/vdsm/jobs.py", line 157, in run
>
>self._run()
>
>File "/usr/lib/python2.7/site-packages/vdsm/virt/jobs/seal.py", line
>74,
>in _run
>
>virtsysprep.sysprep(vol_paths)
>
>File "/usr/lib/python2.7/site-packages/vdsm/virtsysprep.py", line 39,
>in
>sysprep
>
>commands.run(cmd)
>
>File "/usr/lib/python2.7/site-packages/vdsm/common/commands.py", line
>110,
>in run
>
>raise cmdutils.Error(args, p.returncode, out, err)
>
>Error: Command ['/usr/bin/virt-sysprep', '-a',
>u'/rhev/data-center/mnt/x.x.net:_storage_host__storage/eb8ba7f9-27d5
>-44c0-a744-9027be39a756/images/3ac44e69-ae82-4d79-8b58-0f3ef4cf60db/f4631212
>-4b1e-4c65-b19b-47215e9aca55'] failed with rc=1 out='[   0.0] Examining
>the
>guest ...\n' err="libvirt: XML-RPC error : Cannot create user runtime
>directory '//.cache/libvirt': Permission denied\nvirt-sysprep: error:
>libguestfs error: could not connect to libvirt (URI =
>\nqemu:///session):
>Cannot create user runtime directory '//.cache/libvirt': \nPermission
>denied
>[code=38 int1=13]\n\nIf reporting bugs, run virt-sysprep with debugging
>enabled and include the \ncomplete output:\n\n  virt-sysprep -v -x
>[...]\n"
>
> 
>
>The template saves fine if I do not have 'Seal Template' selected.
>
> 
>
> 

Check the folder's permissions and the groups of the vdsm user.
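
A few concrete checks along those lines (the error string suggests the process
ends up with an empty $HOME, so these are checks rather than a known fix; the
storage path is the one from your log, and LIBGUESTFS_BACKEND=direct simply
skips the qemu:///session connection that is failing):

id vdsm
ls -ld '/rhev/data-center/mnt/x.x.net:_storage_host__storage'
# reproduce with debugging, as the error message itself suggests, ideally
# against a scratch copy of the disk image:
sudo -u vdsm LIBGUESTFS_BACKEND=direct virt-sysprep -v -x -a /path/to/disk-image-copy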

Best Regards,
Strahil Nikolov
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/UTATXO2RB5DP5H2S2YPMJDDLWXIKOSOK/


[ovirt-users] Re: VM migrations stalling over migration-only network

2020-01-20 Thread Milan Zamazal
Ben  writes:

> Hi, I'm pretty stuck at the moment so I hope someone can help me.
>
> I have an oVirt 4.3 data center with two hosts. Recently, I attempted to
> segregate migration traffic from the standard ovirtmgmt network, where
> the VM traffic and all other traffic resides.
>
> I set up the VLAN on my router and switch, and created LACP bonds on both
> hosts, tagging them with the VLAN ID. I confirmed the routes work fine, and
> traffic speeds are as expected. MTU is set to 9000.
>
> After configuring the migration network in the cluster and dragging and
> dropping it onto the bonds on each host, VMs fail to migrate.
>
> oVirt is not reporting any issues with the network interfaces or sync with
> the hosts. However, when I attempt to live-migrate a VM, progress gets to
> 1% and stalls. The transfer rate is 0Mbps, and the operation eventually
> fails.
>
> I have not been able to identify anything useful in the VDSM logs on the
> source or destination hosts, or in the engine logs. It repeats the below
> WARNING and INFO logs for the duration of the process, then logs the last
> entries when it fails. I can provide more logs if it would help. I'm not
> even sure where to start -- since I am a novice at networking, at best, my
> suspicion the entire time was that something is misconfigured in my
> network. However, the routes are good, speed tests are fine, and I can't
> find anything else wrong with the connections. It's not impacting any other
> traffic over the bond interfaces.
>
> Are there other requirements that must be met for VMs to migrate over a
> separate interface/network?

Hi, did you check your firewall settings?  Are the required ports open?
See migration_port_* options in /etc/libvirt/qemu.conf and *_port
options in /etc/libvirt/libvirtd.conf.

Is there any error reported in the destination vdsm.log?

Regards,
Milan

> 2020-01-12 03:18:28,245-0500 WARN  (migmon/a24fd7e3) [virt.vm]
> (vmId='a24fd7e3-161c-451e-8880-b3e7e1f7d86f') Migration stalling: remaining
> (4191MiB) > lowmark (4191MiB). (migration:854)
> 2020-01-12 03:18:28,245-0500 INFO  (migmon/a24fd7e3) [virt.vm]
> (vmId='a24fd7e3-161c-451e-8880-b3e7e1f7d86f') Migration Progress: 930.341
> seconds elapsed, 1% of data processed, total data: 4192MB, processed data:
> 0MB, remaining data: 4191MB, transfer speed 0MBps, zero pages: 149MB,
> compressed: 0MB, dirty rate: 0, memory iteration: 1 (migration:881)
> 2020-01-12 03:18:31,386-0500 ERROR (migsrc/a24fd7e3) [virt.vm]
> (vmId='a24fd7e3-161c-451e-8880-b3e7e1f7d86f') operation failed: migration
> out job: unexpectedly failed (migration:282)
> 2020-01-12 03:18:32,695-0500 ERROR (migsrc/a24fd7e3) [virt.vm]
> (vmId='a24fd7e3-161c-451e-8880-b3e7e1f7d86f') Failed to migrate
> (migration:450)
>   File "/usr/lib/python2.7/site-packages/vdsm/virt/migration.py", line 431,
> in _regular_run
> time.time(), migrationParams, machineParams
>   File "/usr/lib/python2.7/site-packages/vdsm/virt/migration.py", line 505,
> in _startUnderlyingMigration
>   File "/usr/lib/python2.7/site-packages/vdsm/virt/migration.py", line 591,
> in _perform_with_conv_schedule
> self._perform_migration(duri, muri)
>   File "/usr/lib/python2.7/site-packages/vdsm/virt/migration.py", line 525,
> in _perform_migration
> self._migration_flags)
> libvirtError: operation failed: migration out job: unexpectedly failed
> 2020-01-12 03:18:40,880-0500 INFO  (jsonrpc/6) [api.virt] FINISH
> getMigrationStatus return={'status': {'message': 'Done', 'code': 0},
> 'migrationStats': {'status': {'message': 'Fatal error during migration',
> 'code': 12}, 'progress': 1L}} from=:::10.0.0.20,41462,
> vmId=a24fd7e3-161c-451e-8880-b3e7e1f7d86f (api:54)
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/PB3TQTFXWKAMNQBNH2OMH5J7R44TMZQF/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/WLLBLT632VYHKONHKL2W7V6VIKAPTLQF/


[ovirt-users] oVirt failing to make a template of Centos 7 VM using seal

2020-01-20 Thread damien.altman
Hi there,

 

My oVirt engine is failing to create a template from a Centos 7 VM, the
/var/log/vdsm/vdsm.log is as follows:

 

)

2020-01-20 19:39:07,332+1100 ERROR (virt/875f8036) [root] Job
u'875f8036-e28a-4741-b3a3-046cc711d252' failed (jobs:221)

Traceback (most recent call last):

  File "/usr/lib/python2.7/site-packages/vdsm/jobs.py", line 157, in run

self._run()

  File "/usr/lib/python2.7/site-packages/vdsm/virt/jobs/seal.py", line 74,
in _run

virtsysprep.sysprep(vol_paths)

  File "/usr/lib/python2.7/site-packages/vdsm/virtsysprep.py", line 39, in
sysprep

commands.run(cmd)

  File "/usr/lib/python2.7/site-packages/vdsm/common/commands.py", line 110,
in run

raise cmdutils.Error(args, p.returncode, out, err)

Error: Command ['/usr/bin/virt-sysprep', '-a',
u'/rhev/data-center/mnt/x.x.net:_storage_host__storage/eb8ba7f9-27d5
-44c0-a744-9027be39a756/images/3ac44e69-ae82-4d79-8b58-0f3ef4cf60db/f4631212
-4b1e-4c65-b19b-47215e9aca55'] failed with rc=1 out='[   0.0] Examining the
guest ...\n' err="libvirt: XML-RPC error : Cannot create user runtime
directory '//.cache/libvirt': Permission denied\nvirt-sysprep: error:
libguestfs error: could not connect to libvirt (URI = \nqemu:///session):
Cannot create user runtime directory '//.cache/libvirt': \nPermission denied
[code=38 int1=13]\n\nIf reporting bugs, run virt-sysprep with debugging
enabled and include the \ncomplete output:\n\n  virt-sysprep -v -x [...]\n"

 

The template saves fine if I do not have 'Seal Template' selected.

 

 

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/3OWPDGDFBGSHOOL5P56YEASRO6OAO42T/


[ovirt-users] Re: Ovirt backup

2020-01-20 Thread Nathanaël Blanchet


On 19/01/2020 at 18:38, Jayme wrote:
The biggest problem with these tools is that they are very 
inefficient.  To work they snapshot the VM then clone the snapshot 
into a new VM, back it up then delete.  This takes a lot of space and 
time.


vProtect and some other enterprise backup software snapshot the VM and 
stream the snapshot from the API without needing to clone or using a 
proxy VM.
At the same time, this workflow is the one recommended by the ovirt team 
(https://www.ovirt.org/develop/release-management/features/storage/backup-restore-api-integration.html). 
If it is not efficient enough, the oVirt team should update the process and 
advise users of a better practice for VM backup in current/future oVirt 
4.3/4.4.
The new version of vProtect even bypasses the API (because it's slow) 
and now supports streaming over SSH directly from the host.  This is 
the ideal solution for oVirt VM backups imo, but I don't know if any 
free tool exists that can offer the same functionality.


On Sun, Jan 19, 2020 at 1:03 PM Torsten Stolpmann
<torsten.stolpm...@verit.de> wrote:


I am still using https://github.com/wefixit-AT/oVirtBackup but since
support for the v3 API will be removed with oVirt 4.4 it will stop
working with this release. For this reason I can no longer
recommend it
but it served me well the past few years.

There is also https://github.com/jb-alvarado/ovirt-vm-backup which
has
similar functionality but I have yet no first-hand experience with
this.

Hope this helps.

Torsten

On 19.01.2020 10:05, Nazan CENGİZ wrote:
> Hi all,
>
>
> I want to back up Ovirt for free. Is there a script, project or
tool
> that you can recommend for this?
>
>
> Is there a project that you can test, both backup and restore
process
> can work properly?
>
>
> Best Regards,
>
> Nazan.
>
>
>
> 
> Nazan CENGİZ
> R&D Engineer
> Mustafa Kemal Mahallesi 2120 Cad. No:39 06510 Çankaya Ankara TÜRKİYE
>       +90 312 219 57 87               +90 312 219 57 97
>
>
>
>
> ___
> Users mailing list -- users@ovirt.org 
> To unsubscribe send an email to users-le...@ovirt.org

> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
https://www.ovirt.org/community/about/community-guidelines/
> List Archives:

https://lists.ovirt.org/archives/list/users@ovirt.org/message/G56O76VB5WO3MV2URL4OH3KNZMQRSKU4/
>
___
Users mailing list -- users@ovirt.org 
To unsubscribe send an email to users-le...@ovirt.org

Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct:
https://www.ovirt.org/community/about/community-guidelines/
List Archives:

https://lists.ovirt.org/archives/list/users@ovirt.org/message/2LGGH7UEC3RBNELT57YF7255FYORSMGZ/


___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/H6JDPEBGWJY3KDRIKV2MJSJB64ZPZ3FS/


--
Nathanaël Blanchet

Supervision réseau
SIRE
227 avenue Professeur-Jean-Louis-Viala
34193 MONTPELLIER CEDEX 5   
Tél. 33 (0)4 67 54 84 55
Fax  33 (0)4 67 54 84 14
blanc...@abes.fr

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/ZSR6JQTJ5MYKC7AS4CZXNW33MQG6UOPW/