[ovirt-users] Re: Migration not working
Hi,

It seems to be fixed now. I updated all the nodes to the latest version, as well as the hosted engine. I'll look at the VDSM logs if it happens again.

Regards

On 16/8/20 at 02:44, Strahil Nikolov wrote:
> What is the output of 'hosted-engine --vm-status' on all nodes that are supposed to host the HostedEngine? Issues there can usually be debugged by checking the ovirt-ha-broker and ovirt-ha-agent logs (in /var/log on the affected host).
>
> Best Regards,
> Strahil Nikolov
>
> On 15 August 2020 09:29:20 GMT+03:00, Alex K wrote:
>> On Fri, Aug 14, 2020, 16:16 Juan Pablo Lorier wrote:
>>> Hi,
>>>
>>> I'm having issues with migration to some hosts. I have a 4 node cluster and I tried updating to see if it fixes the problem, but it's still failing. The reason is not clear in the engine log. [...]
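A minimal sketch of the checks Strahil suggests, run from a machine with SSH access to the hosts. The host names are the ones from this thread and the log paths are the usual oVirt HA locations; both are illustrative, not confirmed for this cluster.

```shell
# check_he_status: run 'hosted-engine --vm-status' on each named host over SSH.
check_he_status() {
    for host in "$@"; do
        echo "== $host =="
        ssh root@"$host" 'hosted-engine --vm-status'
    done
}

# scan_ha_log: count error/failure lines in an HA daemon log.
scan_ha_log() { grep -icE 'error|fail' "$1"; }

# Usage (host names / log paths are examples -- adjust for your cluster):
#   check_he_status virt2.tnu.com.uy virt3.tnu.com.uy virt4.tnu.com.uy
#   scan_ha_log /var/log/ovirt-hosted-engine-ha/agent.log
#   scan_ha_log /var/log/ovirt-hosted-engine-ha/broker.log
```

A non-zero count from `scan_ha_log` tells you which daemon's log to read in detail.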
[ovirt-users] Migration not working
Hi,

I'm having issues with migration to some hosts. I have a 4 node cluster and I tried updating to see if it fixes the problem, but it's still failing. The reason is not clear in the engine log.

2020-08-14 09:45:32,322-03 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default task-429) [364e] EVENT_ID: VM_MIGRATION_START_SYSTEM_INITIATED(67), Migration initiated by system (VM: medialist2-videoteca, Source: virt2.tnu.com.uy, Destination: virt3.tnu.com.uy, Reason: ).
2020-08-14 09:45:32,336-03 INFO [org.ovirt.engine.core.bll.MigrateVmToServerCommand] (default task-429) [535a8bdb] Lock Acquired to object 'EngineLock:{exclusiveLocks='[aac74685-c969-4761-923f-043400176edf=VM]', sharedLocks=''}'
2020-08-14 09:45:32,355-03 INFO [org.ovirt.engine.core.bll.MigrateVmToServerCommand] (default task-429) [535a8bdb] Running command: MigrateVmToServerCommand internal: true. Entities affected : ID: aac74685-c969-4761-923f-043400176edf Type: VMAction group MIGRATE_VM with role type USER
2020-08-14 09:45:32,383-03 INFO [org.ovirt.engine.core.vdsbroker.MigrateVDSCommand] (default task-429) [535a8bdb] START, MigrateVDSCommand( MigrateVDSCommandParameters:{hostId='634f3f64-8945-470c-b31c-b8d4c73109e6', vmId='aac74685-c969-4761-923f-043400176edf', srcHost='virt2.tnu.com.uy', dstVdsId='f9e441a0-a4e6-4548-8c6f-83a958120f02', dstHost='virt4.tnu.com.uy:54321', migrationMethod='ONLINE', tunnelMigration='false', migrationDowntime='0', autoConverge='true', migrateCompressed='false', consoleAddress='null', maxBandwidth='125', enableGuestEvents='true', maxIncomingMigrations='2', maxOutgoingMigrations='2', convergenceSchedule='[init=[{name=setDowntime, params=[100]}], stalling=[{limit=1, action={name=setDowntime, params=[150]}}, {limit=2, action={name=setDowntime, params=[200]}}, {limit=3, action={name=setDowntime, params=[300]}}, {limit=4, action={name=setDowntime, params=[400]}}, {limit=6, action={name=setDowntime, params=[500]}}, {limit=-1, action={name=abort, params=[]}}]]', dstQemu='172.16.100.45'}), log id: 1db80717
2020-08-14 09:45:32,385-03 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand] (default task-429) [535a8bdb] START, MigrateBrokerVDSCommand(HostName = virt2.tnu.com.uy, MigrateVDSCommandParameters:{hostId='634f3f64-8945-470c-b31c-b8d4c73109e6', vmId='aac74685-c969-4761-923f-043400176edf', srcHost='virt2.tnu.com.uy', dstVdsId='f9e441a0-a4e6-4548-8c6f-83a958120f02', dstHost='virt4.tnu.com.uy:54321', migrationMethod='ONLINE', tunnelMigration='false', migrationDowntime='0', autoConverge='true', migrateCompressed='false', consoleAddress='null', maxBandwidth='125', enableGuestEvents='true', maxIncomingMigrations='2', maxOutgoingMigrations='2', convergenceSchedule='[init=[{name=setDowntime, params=[100]}], stalling=[{limit=1, action={name=setDowntime, params=[150]}}, {limit=2, action={name=setDowntime, params=[200]}}, {limit=3, action={name=setDowntime, params=[300]}}, {limit=4, action={name=setDowntime, params=[400]}}, {limit=6, action={name=setDowntime, params=[500]}}, {limit=-1, action={name=abort, params=[]}}]]', dstQemu='172.16.100.45'}), log id: 4085b503
2020-08-14 09:45:32,433-03 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand] (default task-429) [535a8bdb] FINISH, MigrateBrokerVDSCommand, return: , log id: 4085b503
2020-08-14 09:45:32,437-03 INFO [org.ovirt.engine.core.vdsbroker.MigrateVDSCommand] (default task-429) [535a8bdb] FINISH, MigrateVDSCommand, return: MigratingFrom, log id: 1db80717
2020-08-14 09:45:32,446-03 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default task-429) [535a8bdb] EVENT_ID: VM_MIGRATION_START_SYSTEM_INITIATED(67), Migration initiated by system (VM: reverse_proxy, Source: virt2.tnu.com.uy, Destination: virt4.tnu.com.uy, Reason: ).
2020-08-14 09:45:32,462-03 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default task-429) [535a8bdb] EVENT_ID: USER_VDS_MAINTENANCE_WITHOUT_REASON(620), Host virt2.tnu.com.uy was switched to Maintenance mode by jplor...@tnu.com.uy.
2020-08-14 09:45:32,681-03 INFO [org.ovirt.engine.core.bll.ConcurrentChildCommandsExecutionCallback] (EE-ManagedThreadFactory-engineScheduled-Thread-14) [7667d149-886e-406b-9cd6-9b7472fa9944] Command 'MaintenanceNumberOfVdss' id: 'db97a747-115b-4f01-aa3d-6277080e6a44' child commands '[]' executions were completed, status 'SUCCEEDED'
2020-08-14 09:45:33,024-03 INFO [org.ovirt.engine.core.vdsbroker.VdsManager] (EE-ManagedThreadFactory-engineScheduled-Thread-62) [] Received first domain report for host virt2.tnu.com.uy
2020-08-14 09:45:33,686-03 INFO [org.ovirt.engine.core.bll.MaintenanceNumberOfVdssCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-88) [7667d149-886e-406b-9cd6-9b7472fa9944] Ending command 'org.ovirt.engine.core.bll.MaintenanceNumberOfVdssCommand' successfully.
2020-08-14 09:45:35,137-03 INFO [org.ovirt.e
[ovirt-users] Re: OVA import fails always
So far neither a VirtualBox OVA in any format nor ESXi ones work :-(

Thanks

On 28/2/20 at 12:25, Edward Berger wrote:
> I haven't tried many, but for one I just untarred the thing to get the disk image file and created a new VM with that. Sometimes the ova file is compressed/assembled in some way that might not be compatible.
>
> On Fri, Feb 28, 2020 at 1:12 AM Jayme wrote:
>> If the problem is with the upload process specifically, it's likely that you do not have the oVirt engine certificate installed in your browser.
>>
>> On Thu, Feb 27, 2020 at 11:34 PM Juan Pablo Lorier wrote:
>>> Hi,
>>>
>>> I'm running 4.3.8.2-1.el7 (just updated the engine to see if it helps) and I haven't been able to import VMs in OVA format. I've tried many appliances downloaded from the web but couldn't get them to work. Any hints?
>>>
>>> Regards

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/XLTS63CJFGUA6T5XAKHWEKWEOWUCMBCV/
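Edward's workaround above can be sketched as follows. An OVA is a plain tar archive (an OVF descriptor plus disk images), so the disk can be extracted and imported directly; "appliance.ova" and "disk1.vmdk" are placeholder names.

```shell
# unpack_ova: extract an OVA (a plain tar archive) into a directory and
# list what it contains -- expect an .ovf descriptor and one or more disks.
unpack_ova() {
    local ova="$1" dest="$2"
    mkdir -p "$dest"
    tar -xf "$ova" -C "$dest"
    ls "$dest"
}

# Usage (file names are placeholders):
#   unpack_ova appliance.ova ova_extract
#   qemu-img info ova_extract/*.vmdk                 # is the disk image readable?
#   qemu-img convert -O qcow2 ova_extract/disk1.vmdk disk1.qcow2
# The converted disk can then be attached to a freshly created VM instead of
# going through the OVA import path.
```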
[ovirt-users] Re: OVA import fails always
This is not a disk upload, which works fine; this is an import. It loads locally from one hypervisor, and the same problem happens on the other nodes. I can load the OVA, select the VM to import and see all the params; then, in the import process, it fails.

Regards

On 28/2/20 at 03:11, Jayme wrote:
> If the problem is with the upload process specifically, it's likely that you do not have the oVirt engine certificate installed in your browser.
>
> On Thu, Feb 27, 2020 at 11:34 PM Juan Pablo Lorier wrote:
>> Hi,
>>
>> I'm running 4.3.8.2-1.el7 (just updated the engine to see if it helps) and I haven't been able to import VMs in OVA format. I've tried many appliances downloaded from the web but couldn't get them to work. Any hints?
>>
>> Regards

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/PZE5AX7ZFQZKGIV42UKOZ2OXL2DRV2CP/
[ovirt-users] OVA import fails always
Hi,

I'm running 4.3.8.2-1.el7 (just updated the engine to see if it helps) and I haven't been able to import VMs in OVA format. I've tried many appliances downloaded from the web but couldn't get them to work. Any hints?

Regards

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/KRCE36GYQIOCXYR6K3KWUJA6R4ODWU56/
[ovirt-users] Disk move succeeded but didn't move content
Hi,

I have a fresh new install of oVirt 4.3 and tried to import a Gluster vmstore. I managed to import the former data domain via NFS. The problem is that when I moved the disks of the VMs to the new iSCSI data domain, I got a warning that the sparse disk type would be converted to qcow2 disks, and after accepting, the disks were moved with no error. The problem is that the disks now figure as <1 GB in size instead of the original size, and thus the VMs fail to start. Is there any way to recover those disks? I have no backup of the VMs :-(

Regards

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/YKK2HIGPFJUZBS5KQHIIWCP5OGC3ZYVY/
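A hedged first step for a situation like this is to inspect what actually landed on the new domain with `qemu-img` before touching anything else. The UUID path below is purely illustrative; substitute your own storage-domain, image and volume UUIDs.

```shell
# check_image: show what a moved volume actually contains. A disk that
# "figures as <1 GB" but should be larger will show the wrong 'virtual size'
# here; a qcow2 volume whose data never arrived may also fail the check.
check_image() {
    qemu-img info "$1"
    qemu-img check "$1" 2>/dev/null || true   # raw images have no qcow2 metadata to check
}

# Usage -- path is illustrative, not from this thread:
#   check_image /rhev/data-center/mnt/blockSD/<sd_uuid>/images/<img_uuid>/<vol_uuid>
```

If `virtual size` is still correct and only `disk size` looks wrong, the data may be intact and only the metadata mis-reported, which changes the recovery strategy.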
[ovirt-users] Re: External networks issue after upgrading to 4.3.6
Hi everyone,

I'm having the same issue after upgrading from 4.3.4 to 4.3.6. I tried to do what you mention with no luck. Here are the outputs, from one host:

2019-11-13T00:49:18.233Z|3|reconnect|INFO|unix:/var/run/openvswitch/db.sock: connected
2019-11-13T00:49:18.235Z|4|reconnect|INFO|ssl:192.168.57.239:6642: connecting...
2019-11-13T00:49:18.248Z|5|reconnect|INFO|ssl:192.168.57.239:6642: connected
2019-11-13T00:49:18.253Z|6|ofctrl|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
2019-11-13T00:49:18.253Z|7|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
2019-11-13T00:49:18.253Z|8|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
2019-11-13T00:49:18.253Z|9|pinctrl|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
2019-11-13T00:49:18.253Z|00010|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
2019-11-13T00:49:18.255Z|00011|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
2019-11-13T00:49:18.264Z|00012|dpif_netlink|INFO|The kernel module does not support meters.

After running the playbook with the params:

localhost          : ok=1 changed=0 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0
multivac00.phy.nyx : ok=6 changed=1 unreachable=0 failed=0 skipped=3 rescued=0 ignored=0
multivac01.phy.nyx : ok=6 changed=1 unreachable=0 failed=0 skipped=3 rescued=0 ignored=0

From one host:

_uuid            : ddb5474b-f5be-4f06-ac8c-812b061821ae
bridges          : [3d5ee016-a77c-44d6-a559-82515170be4c]
cur_cfg          : 1
datapath_types   : [netdev, system]
db_version       : "7.16.1"
dpdk_initialized : false
dpdk_version     : "DPDK 18.11.0"
external_ids     : {hostname="multivac01.phy.nyx", ovn-bridge-mappings="", ovn-encap-ip="192.168.57.241", ovn-encap-type=geneve, ovn-remote="ssl:192.168.57.239:6642", rundir="/var/run/openvswitch", system-id="302680af-2677-41ef-9eaf-62eb75c75748"}
iface_types      : [erspan, geneve, gre, internal, "ip6erspan", "ip6gre", lisp, patch, stt, system, tap, vxlan]
manager_options  : []
next_cfg         : 1
other_config     : {}
ovs_version      : "2.11.0"
ssl              : []
statistics       : {}
system_type      : centos
system_version   : "7"

'ovn-sbctl list chassis' hangs there for a pretty long time with no output.

Thanks for the help in advance =)

JP

On Fri, Oct 11, 2019 at 5:17, ada per wrote:
> Thank you for all your help! Unfortunately the log is empty! I'm now
> reinstalling the environment from scratch
>
> On Fri, Oct 11, 2019 at 9:32 AM Dominik Holler wrote:
>> On Fri, Oct 11, 2019 at 8:24 AM ada per wrote:
>>> any input???
>>
>> What is written to the ovn-controller.log now?
>> I expect that there is a helpful error explaining the problem while
>> connecting to 192.161.21:6642.
>>
>>> On Wed, 9 Oct 2019, 16:50 ada per, wrote:
>>> ip a
>>> 1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
>>>     link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
>>>     inet 127.0.0.1/8 scope host lo
>>>        valid_lft forever preferred_lft forever
>>>     inet6 ::1/128 scope host
>>>        valid_lft forever preferred_lft forever
>>> 2: eth0: mtu 1500 qdisc mq state UP group default qlen 1000
>>>     link/ether 00:16:3e:32:c8:ff brd ff:ff:ff:ff:ff:ff
>>>     inet 192.168.1.21/24 brd 192.168.1.255 scope global eth0
>>>        valid_lft forever preferred_lft forever
>>>     inet6 fe80::216:3eff:fe32:c8ff/64 scope link
>>>        valid_lft forever preferred_lft forever
>>> 3: eth1: mtu 1500 qdisc mq state UP group default qlen 1000
>>>     link/ether 56:6f:74:a4:00:0d brd ff:ff:ff:ff:ff:ff
>>>     inet6 fe80::abc9:3c51:1a61:ab7a/64 scope link noprefixroute
>>>        valid_lft forever preferred_lft forever
>>>
>>> and
>>>
>>> [root@threatrealm3 ~]# ovs-vsctl list Open
>>> _uuid            : 637b29cb-3874-4356-9d16-f4e0e4ddf212
>>> bridges          : [f805bd1c-3d65-4f46-a1f4-2a337bfbfe60]
>>> cur_cfg          : 538
>>> datapath_types   : [netdev, system]
>>> db_version       : "7.16.1"
>>> dpdk_initialized : false
>>> dpdk_version     : "DPDK 18.11.0"
>>> external_ids     : {hostname="threatrealm3.sec.ouc.ac.cy", ovn-bridge-mappings="", ovn-encap-ip="192.168.1.26", ovn-encap-type=geneve, ovn-remote="ssl:192.168.1.21:6642", rundir="/var/run/openvswitch", system-id="d3240f0e-50ca-43e9-b7de-c1a36587f609"}
>>> iface_types      : [erspan, geneve, gre, internal, "ip6erspan", "ip6gre", lisp, patch, stt, system, tap, vxlan]
>>> manager_options  : []
>>> next_cfg         : 538
>>> other_config     : {}
>>> ovs_version      : "2.11.0"
>>> ssl              : [
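When `ovn-sbctl` hangs like this, a quick sanity check is whether the host can reach the OVN southbound database at all. A sketch, assuming the setup from this thread (the 192.168.57.239 address is the OVN central / engine IP shown above and is purely illustrative):

```shell
# check_port: TCP reachability probe using bash's built-in /dev/tcp
# (handy on minimal oVirt nodes where nc may not be installed).
check_port() { timeout 5 bash -c "cat < /dev/null > /dev/tcp/$1/$2" 2>/dev/null; }

# Checks discussed in this thread:
#   ovs-vsctl get Open_vSwitch . external_ids:ovn-remote   # e.g. "ssl:192.168.57.239:6642"
#   check_port 192.168.57.239 6642 && echo "southbound DB reachable"
#   ovn-sbctl show    # on the central side: each healthy host appears as a chassis
# If the port is open but ovn-sbctl still hangs, suspect mismatched SSL
# certificates between the hosts and the OVN central.
```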
[ovirt-users] Re: Inconsistent datacenter [urgent]
Thanks. Sorry for the late reply, but I had to travel.

I've tried your suggestion and now I'm not able to start the engine VM. It's trying to get access to a storage domain (I can't tell which, as vdsm only shows the ID and I don't have a way to know the label without the engine). I think that I might need to hack into the engine's database, or reset it and hack the backup so I can change some values by hand. Any other ideas?

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/WE2HVYI5HF3TUFQYO7VA4GJMAIDUP46M/
[ovirt-users] Re: Impossible to install VM Machine
Hi Abdoul,

What I don't get is what your setup is. oVirt has data domains, which are where you put the VM disks. Those domains can be an NFS share, Gluster bricks, an iSCSI SAN, or direct disk. When you say file-sharing folders, I can't think of a data domain attached there unless it's a local NFS share. Maybe I'm missing something, but I can't see that working. Do you have a data domain in Storage? I tend to think you do, because otherwise you couldn't create the disks for the VMs (which you add when you create the VM).

Please reply to the list too so others can see the thread.

Regards

On 10/7/19 16:51, Abdoul-Rahamane Issoufou Oumarou wrote:
> Thank you Juan Pablo,
>
> Yes indeed I installed everything on one machine. Before, I used
> ovirt-node and ovirt but it always gave me the same problem.
>
> I'm going to try again and I'll send you a capture of the errors I get.
>
> Otherwise, for storage, I created FILES sharing folders on the ovirt
> machine; it's them that I use.
>
> Thank you very much
>
> On Wed, Jul 10, 2019 at 20:24, Juan Pablo Lorier wrote:
>
>     Hi,
>
>     We need more information to try and find the problem. Did you look at
>     the logs in ovirt-engine? I assume by your description that you ran an
>     all-in-one install. What storage are you using?
>
>     Regards

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/UZNKPEF6RHKBSSZERE5YDZNP6UV3WYRD/
[ovirt-users] Impossible to install VM Machine
Hi,

We need more information to try and find the problem. Did you look at the logs in ovirt-engine? I assume by your description that you ran an all-in-one install. What storage are you using?

Regards

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/Y7TSCAO2QXU2EO24TFN6VWC6F6D2VWOM/
[ovirt-users] Re: [ovirt-devel] USB on host support?
I'm having a similar problem. The passthrough needs 2 steps; the first is to attach the host device to the VM, but if you don't enable the SPICE USB support, the VM fails to start. I don't know if the USB works straight from the host device or if it depends on the SPICE console. In my case, a device that works in a Windows 10 PC doesn't work in a Windows 10 VM. Maybe we can both get help in this matter.

Regards

On 1/7/19 at 13:02, users-requ...@ovirt.org wrote:
> Today's Topics:
> 1. Re: Failed on HE-Storage migration (Strahil Nikolov)
> 2. Adding a custom fence agent on 4.3.4 (Matteo)
> 3. USB host support? (Hetz Ben Hamo)
> 4. Re: [ovirt-devel] USB on host support? (Hetz Ben Hamo)
> 5. Re: Ovirt Hyperconverged Cluster (Stefanile Raffaele)

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/SKZEUBOZS5DRTYDBOVG7V4AKPUMH754M/
[ovirt-users] Re: USB Passthrough device problem
Hi,

I don't get what I should look at. I can get the module on the host and then the same in Linux; in Windows it uses the manufacturer's driver. This seems to be more something that I need to debug with the oVirt devs, to point out what I should trace; I don't know how to debug a USB stream. What I can do is test the audio on the host and then see if there is a difference through the passthrough.

Thanks!

On 29/6/19 06:46, Strahil wrote:
> You can attach the USB to an oVirt node and check the module in use there.
> Then you can try passing through the device and check the module in use on
> the VM (if it's Linux) or install the driver (if Windows).
>
> Best Regards,
> Strahil Nikolov
>
> On Jun 28, 2019 21:11, Juan Pablo Lorier wrote:
>> Thanks anyway. I'm trying to figure out how to deal with this.
>>
>> Regards
>>
>> On 26/6/19 at 16:45, Strahil wrote:
>>> Slow motion... This sounds like a bad driver or an unoptimized one.
>>> Sadly I have no more ideas how to overcome this.
>>>
>>> Best Regards,
>>> Strahil Nikolov
>>>
>>> On Jun 25, 2019 20:07, jplor...@gmail.com wrote:
>>>> Hi Strahil,
>>>>
>>>> Sorry for the late reply, but Google sent the oVirt mails to spam.
>>>> I have tried, and though I get audio, it sounds like it's played at an
>>>> incorrect bitrate (I recorded audio and it sounds like slow-motion audio).
>>>> Is there any other way to present the USB device than through the
>>>> console USB emulation?
>>>>
>>>> Regards

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/3NYUI3WMBVHT2KAOCS5YP7CG6VWINETC/
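Strahil's suggestion to compare the module in use can be sketched as below. The helper reads the standard sysfs layout; the `snd-usb-audio` name in the test is just the usual driver for class-compliant USB audio devices, not something confirmed for this particular interface.

```shell
# usb_driver: print the kernel module bound to a USB interface, given its
# sysfs directory -- a sketch for comparing host vs. guest bindings.
usb_driver() {
    if [ -e "$1/driver" ]; then
        basename "$(readlink -f "$1/driver")"
    else
        echo "(none)"
    fi
}

# On the host (and again inside a Linux guest after passthrough):
#   for i in /sys/bus/usb/devices/*:*; do echo "$i -> $(usb_driver "$i")"; done
#   lsusb -t    # shows the same Driver= binding per interface in one view
# A class-compliant USB audio device normally binds to snd-usb-audio on Linux;
# a different (or missing) binding in the guest is a useful data point.
```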
[ovirt-users] Re: USB Passthrough device problem
Thanks anyway. I'm trying to figure out how to deal with this.

Regards

On 26/6/19 at 16:45, Strahil wrote:
> Slow motion... This sounds like a bad driver or an unoptimized one.
> Sadly I have no more ideas how to overcome this.
>
> Best Regards,
> Strahil Nikolov
>
> On Jun 25, 2019 20:07, jplor...@gmail.com wrote:
>> Hi Strahil,
>>
>> Sorry for the late reply, but Google sent the oVirt mails to spam.
>> I have tried, and though I get audio, it sounds like it's played at an
>> incorrect bitrate (I recorded audio and it sounds like slow-motion audio).
>> Is there any other way to present the USB device than through the console
>> USB emulation?
>>
>> Regards

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/GM4BHQPGKYBNCDHY4C2Q4GHYAHZSESZC/
[ovirt-users] USB Passthrough device problem
Hi,

I've tried to use 2 different audio I/O interfaces connected to a VM via USB passthrough, and neither worked. I've tried with Win 10 and Win Server 2016 VMs, with different errors, but none worked. I've connected the devices to a host, then attached the host device to a VM, and in the VM properties I've enabled USB support in the console properties. The devices are installed and don't show any error in the Device Manager within the VM, but they don't work. Should I do something else?

Regards

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/DVFDXQGSVCYBSJWQHE7GUNKDIQZH3OED/
[ovirt-users] [Urgent] USB passthrough
Hi,

I'm trying to attach a USB device to a VM, but although the host detects the USB device (it is shown in lsusb), the host device tab in the VM is not showing that device. It shows all the others, but not that one. Is there something I should do to update the oVirt host devices?

Regards

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/QICZRAQNL76LWF543DHNHGZCQN4YQCJJ/
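A sketch for narrowing down where the device disappears. oVirt's host-device list comes from libvirt, so checking each layer in turn tells you whether the gap is kernel → libvirt or libvirt → portal; the vendor:product ID below is a placeholder.

```shell
# find_usb: filter an `lsusb` listing (read from stdin) for a vendor:product
# ID, to confirm the kernel sees the device at all.
find_usb() { grep -i "ID $1"; }

# Usage on the host (0d8c:0014 is a placeholder ID):
#   lsusb | find_usb 0d8c:0014 || echo "device not visible to the kernel"
#   virsh -r nodedev-list --cap usb_device   # what libvirt -- and thus oVirt -- sees
# If libvirt lists the device but the portal does not, try "Refresh
# Capabilities" on the host in the Administration Portal so the engine
# re-reads the host device list.
```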
[ovirt-users] Re: Storage IO
Hi Thomas,

So you are seeing high load on your storage and you are asking 'why'? An answer with the facts you give would be: you are using your storage, so you have storage IO. If you want to dive deeper:

- Which storage are you using? Specs would be nice.
- Which host model are you using?
- Network specs? Card model, switch model, etc.
- How is your setup made? iSCSI? NFS? Gluster?

Based on the former we might get a better idea, and after this some tests could be made, if needed, to find whether there's a bottleneck or the environment is working as expected.

Regards,

2018-06-04 14:29 GMT-03:00 Thomas Fecke:
> Hey Guys,
>
> Sorry, I need to ask again.
>
> We got 2 hypervisors with about 50 running VMs and a single storage with a 10 Gig connection.
>
> Device:  rrqm/s  wrqm/s     r/s     w/s     rkB/s     wkB/s avgrq-sz avgqu-sz  await r_await w_await  svctm  %util
> sda        3,00  694,00 1627,00  947,00 103812,00  61208,00   128,22     6,78   2,63    2,13    3,49   0,39  99,70
>
> avg-cpu:  %user   %nice %system %iowait  %steal   %idle
>            0,00    0,00    3,70   31,37    0,00   64,93
>
> Device:  rrqm/s  wrqm/s     r/s     w/s     rkB/s     wkB/s avgrq-sz avgqu-sz  await r_await w_await  svctm  %util
> sda        1,00  805,00  836,00  997,00  43916,00  57900,00   111,09     6,00   3,27    1,87    4,44   0,54  99,30
>
> avg-cpu:  %user   %nice %system %iowait  %steal   %idle
>            0,00    0,00    3,54   29,96    0,00   66,50
>
> Device:  rrqm/s  wrqm/s     r/s     w/s     rkB/s     wkB/s avgrq-sz avgqu-sz  await r_await w_await  svctm  %util
> sda        2,00  822,00 1160,00 1170,00  46700,00  52176,00    84,87     5,68   2,44    1,57    3,30   0,43  99,50
>
> avg-cpu:  %user   %nice %system %iowait  %steal   %idle
>            0,00    0,00    5,05   31,46    0,00   63,50
>
> Device:  rrqm/s  wrqm/s     r/s     w/s     rkB/s     wkB/s avgrq-sz avgqu-sz  await r_await w_await  svctm  %util
> sda        3,00 1248,00 2337,00 1502,00 134932,00  48536,00    95,58     6,59   1,72    1,53    2,01   0,26  99,30
>
> avg-cpu:  %user   %nice %system %iowait  %steal   %idle
>            0,00    0,00    3,95   31,79    0,00   64,26
>
> Device:  rrqm/s  wrqm/s     r/s     w/s     rkB/s     wkB/s avgrq-sz avgqu-sz  await r_await w_await  svctm  %util
> sda        0,00  704,00  556,00 1292,00  19908,00  72600,00   100,12     5,50   2,99    1,83    3,48   0,54  99,50
>
> avg-cpu:  %user   %nice %system %iowait  %steal   %idle
>            0,00    0,00    3,03   28,90    0,00   68,07
>
> Device:  rrqm/s  wrqm/s     r/s     w/s     rkB/s     wkB/s avgrq-sz avgqu-sz  await r_await w_await  svctm  %util
> sda        0,00  544,00  278,00 1095,00   7848,00  66124,00   107,75     5,31   3,87    1,49    4,47   0,72  99,10
>
> avg-cpu:  %user   %nice %system %iowait  %steal   %idle
>            0,00    0,00    3,03   29,32    0,00   67,65
>
> Device:  rrqm/s  wrqm/s     r/s     w/s     rkB/s     wkB/s avgrq-sz avgqu-sz  await r_await w_await  svctm  %util
> sda        0,00  464,00  229,00 1172,00   6588,00  72384,00   112,74     5,44   3,88    1,67    4,31   0,71  99,50
>
> And this is our problem. Does anyone know why our storage receives that many requests?
>
> Thanks in advance

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/P2YH5SLSFSHA6BNZHSIIJUUTZLUOOMGK/
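The samples above can be summarized programmatically. A small sketch that scans `iostat -x` output for saturated devices (the `%util` column is the last field; in every sample above, sda sits near 99%, i.e. the disk itself is the bottleneck):

```shell
# flag_busy: scan `iostat -x` extended output (read from stdin) and print
# block devices whose %util (last column) exceeds a threshold. Locale-style
# comma decimals like "99,70" still compare correctly, since awk's numeric
# coercion reads the integer part.
flag_busy() {
    awk -v thr="$1" '$1 ~ /^(sd|vd|nvme|dm-)/ && $NF+0 > thr { print $1, $NF }'
}

# Usage: sample extended stats once a second, five times, and flag devices
# above 90% utilization:
#   iostat -x 1 5 | flag_busy 90
```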
[ovirt-users] Re: strange issue: vm lost info on disk
== New findings / it's working now ==

I cloned the faulty system in order to play with it, and the cloned VM *boots with no problem at all*. So there's clearly an issue with moving between NFS and iSCSI SDs with a snapshot. I have both VMs now, and if anyone is interested I can keep troubleshooting. If not, I would like to give a very special ***THANK YOU*** to the whole Red Hat + oVirt team for their daily outstanding work and their excellent products.

Thanks for your time!
JP

qemu-img info --backing-chain /rhev/data-center/mnt/blockSD/cec63cf0-9311-488d-b1fa-99c4405e8379/images/36678881-9686-48b5-b39c-16fafece5c5a/1bbf1375-7469-426a-b68d-adbc3446d51e
image: /rhev/data-center/mnt/blockSD/cec63cf0-9311-488d-b1fa-99c4405e8379/images/36678881-9686-48b5-b39c-16fafece5c5a/1bbf1375-7469-426a-b68d-adbc3446d51e
file format: raw
virtual size: 1.1T (1181116006400 bytes)
disk size: 0

2018-05-14 14:58 GMT-03:00 Juan Pablo:
> I'm still wondering why oVirt moved an image and left it with errors, but
> reported the move as successful in the GUI. If you think it's related to
> the snapshot, it's really strange, as it's the first time I see this odd
> behavior; I never got an issue like this when moving images + snapshots.
>
> On the other side, if it moved the 700G image (and not the extra 400), why
> is it inconsistent? Shouldn't it be old, but nevertheless in good shape? Is
> there anything else to try?
>
> Thanks in advance!
> Regards,
>
> 2018-05-14 12:01 GMT-03:00 Juan Pablo:
>> Hi Nir, thanks for the reply. Here's the output:
>>
>> (BASE)
>> [root@node02 ~]# qemu-img info --backing-chain /rhev/data-center/mnt/blockSD/cec63cf0-9311-488d-b1fa-99c4405e8379/images/65ec515e-0aae-4fe6-a561-387929c7fb4d/52532d05-970e-4643-9774-96c31796062c
>> image: /rhev/data-center/mnt/blockSD/cec63cf0-9311-488d-b1fa-99c4405e8379/images/65ec515e-0aae-4fe6-a561-387929c7fb4d/52532d05-970e-4643-9774-96c31796062c
>> file format: raw
>> virtual size: 700G (751619276800 bytes)
>> disk size: 0
>>
>> (with snapshot)
>> [root@node02 ~]# qemu-img info --backing-chain /rhev/data-center/mnt/blockSD/cec63cf0-9311-488d-b1fa-99c4405e8379/images/65ec515e-0aae-4fe6-a561-387929c7fb4d/86a6fdb9-b9a4-4b78-8e3d-940f83cedc5a
>> image: /rhev/data-center/mnt/blockSD/cec63cf0-9311-488d-b1fa-99c4405e8379/images/65ec515e-0aae-4fe6-a561-387929c7fb4d/86a6fdb9-b9a4-4b78-8e3d-940f83cedc5a
>> file format: qcow2
>> virtual size: 1.1T (1181116006400 bytes)
>> disk size: 0
>> cluster_size: 65536
>> backing file: 52532d05-970e-4643-9774-96c31796062c (actual path: /rhev/data-center/mnt/blockSD/cec63cf0-9311-488d-b1fa-99c4405e8379/images/65ec515e-0aae-4fe6-a561-387929c7fb4d/52532d05-970e-4643-9774-96c31796062c)
>> backing file format: raw
>> Format specific information:
>>     compat: 1.1
>>     lazy refcounts: false
>>     refcount bits: 16
>>     corrupt: false
>>
>> image: /rhev/data-center/mnt/blockSD/cec63cf0-9311-488d-b1fa-99c4405e8379/images/65ec515e-0aae-4fe6-a561-387929c7fb4d/52532d05-970e-4643-9774-96c31796062c
>> file format: raw
>> virtual size: 700G (751619276800 bytes)
>> disk size: 0
>>
>> I really appreciate your time helping.
>> Regards,
>>
>> 2018-05-14 11:36 GMT-03:00 Nir Soffer:
>>> On Mon, May 14, 2018 at 5:19 PM Juan Pablo wrote:
>>>> ok, so I'm confirming that the image is wrong somehow:
>>>> with no snapshot, from inside the vm disk size is reporting
750G. >>>> with a snapshot, from inside the vm disk size is reporting 1100G. >>>> both have no partitions on it, so I guess ovirt migrated the structure >>>> of the 750G disk on a 1100 disk, any ideas to troubleshoot this and see if >>>> there's data to recover? >>>> >>> >>> Maybe you resized the disk after making a snapshot? >>> >>> If the base is raw, the size seen by the guest is the size of the image. >>> >>> The snapshot is qcow2, the size seen by the guest is the size saved in >>> the qcow2 header. >>> >>> Can you share the output of: >>> >>> qemu-img info --backing-chain /path/to/snapshot >>> >>> And: >>> >>> qemu-img info --backing-chain /path/to/base >>> >>> You can see the path in the vm xml, either in vdsm.log, or using virsh: >>> >>> virsh -r list >>&g
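The check Nir describes above can be sketched as a small script: list the virtual size reported at each layer of the disk's backing chain and compare them. The qemu-img and virsh commands are the ones quoted in the thread; `chain_info` is a hypothetical helper name, and the sample sizes below are the ones captured earlier in this thread:

```shell
# Sketch: print one "virtual size" per image in a disk's backing chain.
chain_info() {
  # qemu-img prints one "virtual size:" line per image in the chain
  qemu-img info --backing-chain "$1" | awk -F': ' '/^virtual size/ {print $2}'
}
# Find the active image path first (as quoted above):
#   virsh -r list
#   virsh -r dumpxml <vm-id>
# The parsing step alone, fed with output captured in this thread:
printf 'virtual size: 700G (751619276800 bytes)\nvirtual size: 1.1T (1181116006400 bytes)\n' |
  awk -F': ' '/^virtual size/ {print $2}'
```

If the raw base and the qcow2 top report different virtual sizes, the guest will see different disk sizes depending on which layer it boots from, which matches the 750G-vs-1100G symptom described above.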
[ovirt-users] Re: VM interface bonding (LACP)
So you have LACP on your host, and you want LACP also on your VM... somehow that doesn't sound right. There are several LACP modes; which one are you using on the host?

2018-05-14 16:20 GMT-03:00 Doug Ingham : > On 14 May 2018 at 15:03, Vinícius Ferrão wrote: > >> You should use better hashing algorithms for LACP. >> >> Take a look at this explanation: https://www.ibm.com/developerworks/community/blogs/storageneers/entry/Enhancing_IP_Network_Performance_with_LACP?lang=en >> >> In general only L2 hashing is done; you can achieve better throughput >> with L3 and multiple IPs, or with L4 (ports). >> >> Your switch should support those features too, if you’re using one. >> >> V. >> > > The problem isn't the LACP connection between the host & the switch, but > setting up LACP between the VM & the host. For reasons of stability, my 4.1 > cluster's switch type is currently "Linux Bridge", not "OVS". Ergo my > question: is LACP on the VM possible with that, or will I have to use ALB? > > Regards, > Doug > > >> >> >> On 14 May 2018, at 15:16, Doug Ingham wrote: >> >> Hi All, >> My hosts have all of their interfaces bonded via LACP to maximise >> throughput, however the VMs are still limited to Gbit virtual interfaces. >> Is there a way to configure my VMs to take full advantage of the bonded >> physical interfaces? >> >> One way might be adding several VIFs to each VM & using ALB bonding, >> however I'd rather use LACP if possible... >> >> Cheers, >> -- >> Doug >> >> > -- > Doug
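On the hashing point raised above: with a Linux bond in 802.3ad mode, the transmit hash policy decides how flows are spread across the slaves. A minimal sketch for checking it; `bond0` is an assumed device name, and the sample below is a hand-captured fragment of what `/proc/net/bonding/bond0` typically contains:

```shell
# On a host you would read the live value with:
#   grep -i 'hash policy' /proc/net/bonding/bond0
# and switch to L3+L4 hashing via the bond option:
#   xmit_hash_policy=layer3+4
# Parsing a captured sample of that file:
sample='Bonding Mode: IEEE 802.3ad Dynamic link aggregation
Transmit Hash Policy: layer2 (0)'
printf '%s\n' "$sample" | grep -i 'hash policy'
# layer2 hashes on MACs only, so any one src/dst pair sticks to a single
# slave; layer3+4 (IPs+ports) spreads concurrent flows across slaves.
```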
[ovirt-users] Re: VM interface bonding (LACP)
It doesn't work that way... Anyway, can you share your detailed network config? Which LACP mode are you using? Have you run an iperf test or something to see where the problem is? Can you share the results?

2018-05-14 16:08 GMT-03:00 Doug Ingham : > On 14 May 2018 at 15:01, Juan Pablo wrote: > >> LACP is not intended for maximizing throughput. >> If you are using iSCSI, you should use multipathd instead. >> >> regards, >> > > Umm, maximising the total throughput for multiple concurrent connections > is most definitely one of the uses of LACP. In this case, the VM is our > main reverse proxy, and its single Gbit VIF has become a bottleneck.
[ovirt-users] Re: VM interface bonding (LACP)
LACP is not intended for maximizing throughput. If you are using iSCSI, you should use multipathd instead.

regards,

2018-05-14 15:16 GMT-03:00 Doug Ingham : > Hi All, > My hosts have all of their interfaces bonded via LACP to maximise > throughput, however the VMs are still limited to Gbit virtual interfaces. > Is there a way to configure my VMs to take full advantage of the bonded > physical interfaces? > > One way might be adding several VIFs to each VM & using ALB bonding, > however I'd rather use LACP if possible... > > Cheers, > -- > Doug
[ovirt-users] Re: strange issue: vm lost info on disk
I'm still wondering why oVirt moved an image and left it with errors, but reported the move as successful in the GUI. If you think it's related to the snapshot, it's really strange, as it's the first time I've seen this odd behavior; I never got an issue like this when moving images+snaps.

On the other side, if it moved the 700G image (and not the extra 400), why is it inconsistent? Shouldn't it be old, but nevertheless in good shape? Is there anything else to try?

thanks in advance! regards,

2018-05-14 12:01 GMT-03:00 Juan Pablo : > Hi Nir, thanks for the reply, here's the output: > > *(BASE)* > *[root@node02 ~]# * qemu-img info --backing-chain > /rhev/data-center/mnt/blockSD/cec63cf0-9311-488d-b1fa- > 99c4405e8379/images/65ec515e-0aae-4fe6-a561-387929c7fb4d/ > 52532d05-970e-4643-9774-96c31796062c > image: /rhev/data-center/mnt/blockSD/cec63cf0-9311-488d-b1fa- > 99c4405e8379/images/65ec515e-0aae-4fe6-a561-387929c7fb4d/ > 52532d05-970e-4643-9774-96c31796062c > file format: raw > virtual size: 700G (751619276800 bytes) > disk size: 0 > > > > *(with Snapshot)[root@node02 ~]# * qemu-img info --backing-chain > /rhev/data-center/mnt/blockSD/cec63cf0-9311-488d-b1fa- > 99c4405e8379/images/65ec515e-0aae-4fe6-a561-387929c7fb4d/ > 86a6fdb9-b9a4-4b78-8e3d-940f83cedc5a > image: /rhev/data-center/mnt/blockSD/cec63cf0-9311-488d-b1fa- > 99c4405e8379/images/65ec515e-0aae-4fe6-a561-387929c7fb4d/ > 86a6fdb9-b9a4-4b78-8e3d-940f83cedc5a > file format: qcow2 > virtual size: 1.1T (1181116006400 bytes) > disk size: 0 > cluster_size: 65536 > backing file: 52532d05-970e-4643-9774-96c31796062c (actual path: > /rhev/data-center/mnt/blockSD/cec63cf0-9311-488d-b1fa- > 99c4405e8379/images/65ec515e-0aae-4fe6-a561-387929c7fb4d/ > 52532d05-970e-4643-9774-96c31796062c) > backing file format: raw > Format specific information: > compat: 1.1 > lazy refcounts: false > refcount bits: 16 > corrupt: false > > image: /rhev/data-center/mnt/blockSD/cec63cf0-9311-488d-b1fa- >
99c4405e8379/images/65ec515e-0aae-4fe6-a561-387929c7fb4d/ > 52532d05-970e-4643-9774-96c31796062c > file format: raw > virtual size: 700G (751619276800 bytes) > disk size: 0 > > I really appreciate your time helping, > regards, > > > 2018-05-14 11:36 GMT-03:00 Nir Soffer : > >> On Mon, May 14, 2018 at 5:19 PM Juan Pablo >> wrote: >> >>> ok, so Im confirming that the image is wrong somehow: >>> with no snapshot, from inside the vm disk size is reporting 750G. >>> with a snapshot, from inside the vm disk size is reporting 1100G. >>> both have no partitions on it, so I guess ovirt migrated the structure >>> of the 750G disk on a 1100 disk, any ideas to troubleshoot this and see if >>> there's data to recover? >>> >> >> Maybe you resized the disk after making a snapshot? >> >> If the base is raw, the size seen by the guest is the size of the image. >> >> The snapshot is qcow2, the size seen by the guest is the size saved in >> the qcow2 header. >> >> Can you share the output of: >> >> qemu-img info --backing-chain /path/to/snapshot >> >> And: >> >> qemu-img info --backing-chain /path/to/base >> >> You can see the path in the vm xml, either in vdsm.log, or using virsh: >> >> virsh -r list >> virtsh -r dumpxml vm-id >> >> Nir >> >> >>> >>> regards, >>> >>> >>> 2018-05-13 15:25 GMT-03:00 Juan Pablo : >>> >>>> 2 clues: >>>> -the original size of the disk was 750G and was extended a month ago to >>>> 1100G. The System rebooted fine several times, and took the new size with >>>> no problems. >>>> >>>> -I run fdisk from a centos 7 rescue cd and '/dev/vda' reported 750G. >>>> then, I took a snapshot of the disk to play with recovery tools and now >>>> fdisk reports 1100G... ¬¬ >>>> >>>> so my guess is on the extend and later migration to a different storage >>>> domain caused the issue. >>>> Im currently running testdisk to see if theres any partition to >>>> recover. 
>>>> >>>> regards, >>>> >>>> 2018-05-13 12:31 GMT-03:00 Juan Pablo : >>>> >>>>> I removed the auto-snapshot and still no lucky. no bootable disk >>>>> found. =( >>>>> ideas? >>>>> >>>>> >>>>> 2018-05-13 12:26 GMT-03:00 Juan Pablo : >>>>> >>>>>> benny, thanks for your reply: >>
[ovirt-users] Re: strange issue: vm lost info on disk
Hi Nir, thanks for the reply, here's the output: *(BASE)* *[root@node02 ~]# * qemu-img info --backing-chain /rhev/data-center/mnt/blockSD/cec63cf0-9311-488d-b1fa-99c4405e8379/images/65ec515e-0aae-4fe6-a561-387929c7fb4d/52532d05-970e-4643-9774-96c31796062c image: /rhev/data-center/mnt/blockSD/cec63cf0-9311-488d-b1fa-99c4405e8379/images/65ec515e-0aae-4fe6-a561-387929c7fb4d/52532d05-970e-4643-9774-96c31796062c file format: raw virtual size: 700G (751619276800 bytes) disk size: 0 *(with Snapshot)[root@node02 ~]# * qemu-img info --backing-chain /rhev/data-center/mnt/blockSD/cec63cf0-9311-488d-b1fa-99c4405e8379/images/65ec515e-0aae-4fe6-a561-387929c7fb4d/86a6fdb9-b9a4-4b78-8e3d-940f83cedc5a image: /rhev/data-center/mnt/blockSD/cec63cf0-9311-488d-b1fa-99c4405e8379/images/65ec515e-0aae-4fe6-a561-387929c7fb4d/86a6fdb9-b9a4-4b78-8e3d-940f83cedc5a file format: qcow2 virtual size: 1.1T (1181116006400 bytes) disk size: 0 cluster_size: 65536 backing file: 52532d05-970e-4643-9774-96c31796062c (actual path: /rhev/data-center/mnt/blockSD/cec63cf0-9311-488d-b1fa-99c4405e8379/images/65ec515e-0aae-4fe6-a561-387929c7fb4d/52532d05-970e-4643-9774-96c31796062c) backing file format: raw Format specific information: compat: 1.1 lazy refcounts: false refcount bits: 16 corrupt: false image: /rhev/data-center/mnt/blockSD/cec63cf0-9311-488d-b1fa-99c4405e8379/images/65ec515e-0aae-4fe6-a561-387929c7fb4d/52532d05-970e-4643-9774-96c31796062c file format: raw virtual size: 700G (751619276800 bytes) disk size: 0 I really appreciate your time helping, regards, 2018-05-14 11:36 GMT-03:00 Nir Soffer : > On Mon, May 14, 2018 at 5:19 PM Juan Pablo > wrote: > >> ok, so Im confirming that the image is wrong somehow: >> with no snapshot, from inside the vm disk size is reporting 750G. >> with a snapshot, from inside the vm disk size is reporting 1100G. 
>> both have no partitions on it, so I guess ovirt migrated the structure of >> the 750G disk on a 1100 disk, any ideas to troubleshoot this and see if >> there's data to recover? >> > > Maybe you resized the disk after making a snapshot? > > If the base is raw, the size seen by the guest is the size of the image. > > The snapshot is qcow2, the size seen by the guest is the size saved in the > qcow2 header. > > Can you share the output of: > > qemu-img info --backing-chain /path/to/snapshot > > And: > > qemu-img info --backing-chain /path/to/base > > You can see the path in the vm xml, either in vdsm.log, or using virsh: > > virsh -r list > virtsh -r dumpxml vm-id > > Nir > > >> >> regards, >> >> >> 2018-05-13 15:25 GMT-03:00 Juan Pablo : >> >>> 2 clues: >>> -the original size of the disk was 750G and was extended a month ago to >>> 1100G. The System rebooted fine several times, and took the new size with >>> no problems. >>> >>> -I run fdisk from a centos 7 rescue cd and '/dev/vda' reported 750G. >>> then, I took a snapshot of the disk to play with recovery tools and now >>> fdisk reports 1100G... ¬¬ >>> >>> so my guess is on the extend and later migration to a different storage >>> domain caused the issue. >>> Im currently running testdisk to see if theres any partition to recover. >>> >>> regards, >>> >>> 2018-05-13 12:31 GMT-03:00 Juan Pablo : >>> >>>> I removed the auto-snapshot and still no lucky. no bootable disk found. >>>> =( >>>> ideas? >>>> >>>> >>>> 2018-05-13 12:26 GMT-03:00 Juan Pablo : >>>> >>>>> benny, thanks for your reply: >>>>> ok, so the steps are : removing the snapshot on the first place. then >>>>> what do you suggest? >>>>> >>>>> >>>>> 2018-05-12 15:23 GMT-03:00 Nir Soffer : >>>>> >>>>>> On Sat, 12 May 2018, 11:32 Benny Zlotnik, >>>>>> wrote: >>>>>> >>>>>>> Using the auto-generated snapshot is generally a bad idea as it's >>>>>>> inconsistent, >>>>>>> >>>>>> >>>>>> What do you mean by inconsistant? 
>>>>>> >>>>>> >>>>>> you should remove it before moving further >>>>>>> >>>>>>> On Fri, May 11, 2018 at 7:25 PM, Juan Pablo < >>>>>>> pablo.localh...@gmail.com> wrote: >>>>>>> >>>>>>>> I rebooted it with no luck, them I used the auto-gen snapshot , >>>>>>>> same luck. >>>>
[ovirt-users] Re: strange issue: vm lost info on disk
OK, so I'm confirming that the image is wrong somehow: with no snapshot, the disk size reported from inside the VM is 750G; with a snapshot, it is 1100G. Both have no partitions on them, so I guess oVirt migrated the structure of the 750G disk onto a 1100G disk. Any ideas to troubleshoot this and see if there's data to recover?

regards,

2018-05-13 15:25 GMT-03:00 Juan Pablo : > 2 clues: > -the original size of the disk was 750G and was extended a month ago to > 1100G. The System rebooted fine several times, and took the new size with > no problems. > > -I run fdisk from a centos 7 rescue cd and '/dev/vda' reported 750G. then, > I took a snapshot of the disk to play with recovery tools and now fdisk > reports 1100G... ¬¬ > > so my guess is on the extend and later migration to a different storage > domain caused the issue. > Im currently running testdisk to see if theres any partition to recover. > > regards, > > 2018-05-13 12:31 GMT-03:00 Juan Pablo : > >> I removed the auto-snapshot and still no lucky. no bootable disk found. =( >> ideas? >> >> >> 2018-05-13 12:26 GMT-03:00 Juan Pablo : >> >>> benny, thanks for your reply: >>> ok, so the steps are : removing the snapshot on the first place. then >>> what do you suggest? >>> >>> >>> 2018-05-12 15:23 GMT-03:00 Nir Soffer : >>> >>>> On Sat, 12 May 2018, 11:32 Benny Zlotnik, wrote: >>>> >>>>> Using the auto-generated snapshot is generally a bad idea as it's >>>>> inconsistent, >>>>> >>>> >>>> What do you mean by inconsistant? >>>> >>>> >>>> you should remove it before moving further >>>>> >>>>> On Fri, May 11, 2018 at 7:25 PM, Juan Pablo >>>> > wrote: >>>>> >>>>>> I rebooted it with no luck, them I used the auto-gen snapshot , same >>>>>> luck.
>>>>>> attaching the logs in gdrive >>>>>> >>>>>> thanks in advance >>>>>> >>>>>> 2018-05-11 12:50 GMT-03:00 Benny Zlotnik : >>>>>> >>>>>>> I see here a failed attempt: >>>>>>> 2018-05-09 16:00:20,129-03 ERROR [org.ovirt.engine.core.dal.dbb >>>>>>> roker.auditloghandling.AuditLogDirector] >>>>>>> (EE-ManagedThreadFactory-engineScheduled-Thread-67) >>>>>>> [bd8eeb1d-f49a-4f91-a521-e0f31b4a7cbd] EVENT_ID: >>>>>>> USER_MOVED_DISK_FINISHED_FAILURE(2,011), User admin@internal-authz >>>>>>> have failed to move disk mail02-int_Disk1 to domain 2penLA. >>>>>>> >>>>>>> Then another: >>>>>>> 2018-05-09 16:15:06,998-03 ERROR [org.ovirt.engine.core.dal.dbb >>>>>>> roker.auditloghandling.AuditLogDirector] >>>>>>> (EE-ManagedThreadFactory-engineScheduled-Thread-34) [] EVENT_ID: >>>>>>> USER_MOVED_DISK_FINISHED_FAILURE(2,011), User admin@internal-authz >>>>>>> have failed to move disk mail02-int_Disk1 to domain 2penLA. >>>>>>> >>>>>>> Here I see a successful attempt: >>>>>>> 2018-05-09 21:58:42,628-03 INFO [org.ovirt.engine.core.dal.dbb >>>>>>> roker.auditloghandling.AuditLogDirector] (default task-50) >>>>>>> [940b051c-8c63-4711-baf9-f3520bb2b825] EVENT_ID: >>>>>>> USER_MOVED_DISK(2,008), User admin@internal-authz moving disk >>>>>>> mail02-int_Disk1 to domain 2penLA. >>>>>>> >>>>>>> >>>>>>> Then, in the last attempt I see the attempt was successful but live >>>>>>> merge failed: >>>>>>> 2018-05-11 03:37:59,509-03 ERROR >>>>>>> [org.ovirt.engine.core.bll.MergeStatusCommand] >>>>>>> (EE-ManagedThreadFactory-commandCoordinator-Thread-2) >>>>>>> [d5b7fdf5-9c37-4c1f-8543-a7bc75c993a5] Failed to live merge, still >>>>>>> in volume chain: [5d9d2958-96bc-49fa-9100-2f33a3ba737f, >>>>>>> 52532d05-970e-4643-9774-96c31796062c] >>>>>>> 2018-05-11 03:38:01,495-03 INFO [org.ovirt.engine.core.bll.Ser >>>>>>> ialChildCommandsExecutionCallback] >>>>>>> (EE-ManagedThreadFactory-engineScheduled-Thread-51) >>>>>>> [d5b7fdf5-9c37-4c1f-8543-a7bc75c993a5] Command 'LiveMigrateDisk'
[ovirt-users] Re: strange issue: vm lost info on disk
Two clues:

- The original size of the disk was 750G and it was extended a month ago to 1100G. The system rebooted fine several times and took the new size with no problems.

- I ran fdisk from a CentOS 7 rescue CD and '/dev/vda' reported 750G. Then I took a snapshot of the disk to play with recovery tools, and now fdisk reports 1100G... ¬¬

So my guess is that the extend and the later migration to a different storage domain caused the issue. I'm currently running testdisk to see if there's any partition to recover.

regards,

2018-05-13 12:31 GMT-03:00 Juan Pablo : > I removed the auto-snapshot and still no lucky. no bootable disk found. =( > ideas? > > > 2018-05-13 12:26 GMT-03:00 Juan Pablo : > >> benny, thanks for your reply: >> ok, so the steps are : removing the snapshot on the first place. then what >> do you suggest? >> >> >> 2018-05-12 15:23 GMT-03:00 Nir Soffer : >> >>> On Sat, 12 May 2018, 11:32 Benny Zlotnik, wrote: >>> >>>> Using the auto-generated snapshot is generally a bad idea as it's >>>> inconsistent, >>>> >>> >>> What do you mean by inconsistant? >>> >>> >>> you should remove it before moving further >>>> >>>> On Fri, May 11, 2018 at 7:25 PM, Juan Pablo >>>> wrote: >>>> >>>>> I rebooted it with no luck, them I used the auto-gen snapshot , same >>>>> luck. >>>>> attaching the logs in gdrive >>>>> >>>>> thanks in advance >>>>> >>>>> 2018-05-11 12:50 GMT-03:00 Benny Zlotnik : >>>>> >>>>>> I see here a failed attempt: >>>>>> 2018-05-09 16:00:20,129-03 ERROR [org.ovirt.engine.core.dal.dbb >>>>>> roker.auditloghandling.AuditLogDirector] >>>>>> (EE-ManagedThreadFactory-engineScheduled-Thread-67) >>>>>> [bd8eeb1d-f49a-4f91-a521-e0f31b4a7cbd] EVENT_ID: >>>>>> USER_MOVED_DISK_FINISHED_FAILURE(2,011), User admin@internal-authz >>>>>> have failed to move disk mail02-int_Disk1 to domain 2penLA.
>>>>>> >>>>>> Then another: >>>>>> 2018-05-09 16:15:06,998-03 ERROR [org.ovirt.engine.core.dal.dbb >>>>>> roker.auditloghandling.AuditLogDirector] >>>>>> (EE-ManagedThreadFactory-engineScheduled-Thread-34) [] EVENT_ID: >>>>>> USER_MOVED_DISK_FINISHED_FAILURE(2,011), User admin@internal-authz >>>>>> have failed to move disk mail02-int_Disk1 to domain 2penLA. >>>>>> >>>>>> Here I see a successful attempt: >>>>>> 2018-05-09 21:58:42,628-03 INFO [org.ovirt.engine.core.dal.dbb >>>>>> roker.auditloghandling.AuditLogDirector] (default task-50) >>>>>> [940b051c-8c63-4711-baf9-f3520bb2b825] EVENT_ID: >>>>>> USER_MOVED_DISK(2,008), User admin@internal-authz moving disk >>>>>> mail02-int_Disk1 to domain 2penLA. >>>>>> >>>>>> >>>>>> Then, in the last attempt I see the attempt was successful but live >>>>>> merge failed: >>>>>> 2018-05-11 03:37:59,509-03 ERROR >>>>>> [org.ovirt.engine.core.bll.MergeStatusCommand] >>>>>> (EE-ManagedThreadFactory-commandCoordinator-Thread-2) >>>>>> [d5b7fdf5-9c37-4c1f-8543-a7bc75c993a5] Failed to live merge, still >>>>>> in volume chain: [5d9d2958-96bc-49fa-9100-2f33a3ba737f, >>>>>> 52532d05-970e-4643-9774-96c31796062c] >>>>>> 2018-05-11 03:38:01,495-03 INFO [org.ovirt.engine.core.bll.Ser >>>>>> ialChildCommandsExecutionCallback] >>>>>> (EE-ManagedThreadFactory-engineScheduled-Thread-51) >>>>>> [d5b7fdf5-9c37-4c1f-8543-a7bc75c993a5] Command 'LiveMigrateDisk' >>>>>> (id: '115fc375-6018-4d59-b9f2-51ee05ca49f8') waiting on child >>>>>> command id: '26bc52a4-4509-4577-b342-44a679bc628f' >>>>>> type:'RemoveSnapshot' to complete >>>>>> 2018-05-11 03:38:01,501-03 ERROR [org.ovirt.engine.core.bll.sna >>>>>> pshots.RemoveSnapshotSingleDiskLiveCommand] >>>>>> (EE-ManagedThreadFactory-engineScheduled-Thread-51) >>>>>> [d5b7fdf5-9c37-4c1f-8543-a7bc75c993a5] Command id: >>>>>> '4936d196-a891-4484-9cf5-fceaafbf3364 failed child command status >>>>>> for step 'MERGE_STATUS' >>>>>> 2018-05-11 03:38:01,501-03 INFO [org.ovirt
[ovirt-users] Re: strange issue: vm lost info on disk
I removed the auto-snapshot and still no luck: no bootable disk found. =( Ideas?

2018-05-13 12:26 GMT-03:00 Juan Pablo : > benny, thanks for your reply: > ok, so the steps are : removing the snapshot on the first place. then what > do you suggest? > > > 2018-05-12 15:23 GMT-03:00 Nir Soffer : > >> On Sat, 12 May 2018, 11:32 Benny Zlotnik, wrote: >> >>> Using the auto-generated snapshot is generally a bad idea as it's >>> inconsistent, >>> >> >> What do you mean by inconsistant? >> >> >> you should remove it before moving further >>> >>> On Fri, May 11, 2018 at 7:25 PM, Juan Pablo >>> wrote: >>> >>>> I rebooted it with no luck, them I used the auto-gen snapshot , same >>>> luck. >>>> attaching the logs in gdrive >>>> >>>> thanks in advance >>>> >>>> 2018-05-11 12:50 GMT-03:00 Benny Zlotnik : >>>> >>>>> I see here a failed attempt: >>>>> 2018-05-09 16:00:20,129-03 ERROR [org.ovirt.engine.core.dal.dbb >>>>> roker.auditloghandling.AuditLogDirector] >>>>> (EE-ManagedThreadFactory-engineScheduled-Thread-67) >>>>> [bd8eeb1d-f49a-4f91-a521-e0f31b4a7cbd] EVENT_ID: >>>>> USER_MOVED_DISK_FINISHED_FAILURE(2,011), User admin@internal-authz >>>>> have failed to move disk mail02-int_Disk1 to domain 2penLA. >>>>> >>>>> Then another: >>>>> 2018-05-09 16:15:06,998-03 ERROR [org.ovirt.engine.core.dal.dbb >>>>> roker.auditloghandling.AuditLogDirector] >>>>> (EE-ManagedThreadFactory-engineScheduled-Thread-34) [] EVENT_ID: >>>>> USER_MOVED_DISK_FINISHED_FAILURE(2,011), User admin@internal-authz >>>>> have failed to move disk mail02-int_Disk1 to domain 2penLA. >>>>> >>>>> Here I see a successful attempt: >>>>> 2018-05-09 21:58:42,628-03 INFO [org.ovirt.engine.core.dal.dbb >>>>> roker.auditloghandling.AuditLogDirector] (default task-50) >>>>> [940b051c-8c63-4711-baf9-f3520bb2b825] EVENT_ID: >>>>> USER_MOVED_DISK(2,008), User admin@internal-authz moving disk >>>>> mail02-int_Disk1 to domain 2penLA.
>>>>> >>>>> >>>>> Then, in the last attempt I see the attempt was successful but live >>>>> merge failed: >>>>> 2018-05-11 03:37:59,509-03 ERROR >>>>> [org.ovirt.engine.core.bll.MergeStatusCommand] >>>>> (EE-ManagedThreadFactory-commandCoordinator-Thread-2) >>>>> [d5b7fdf5-9c37-4c1f-8543-a7bc75c993a5] Failed to live merge, still in >>>>> volume chain: [5d9d2958-96bc-49fa-9100-2f33a3ba737f, >>>>> 52532d05-970e-4643-9774-96c31796062c] >>>>> 2018-05-11 03:38:01,495-03 INFO [org.ovirt.engine.core.bll.Ser >>>>> ialChildCommandsExecutionCallback] >>>>> (EE-ManagedThreadFactory-engineScheduled-Thread-51) >>>>> [d5b7fdf5-9c37-4c1f-8543-a7bc75c993a5] Command 'LiveMigrateDisk' (id: >>>>> '115fc375-6018-4d59-b9f2-51ee05ca49f8') waiting on child command id: >>>>> '26bc52a4-4509-4577-b342-44a679bc628f' type:'RemoveSnapshot' to >>>>> complete >>>>> 2018-05-11 03:38:01,501-03 ERROR [org.ovirt.engine.core.bll.sna >>>>> pshots.RemoveSnapshotSingleDiskLiveCommand] >>>>> (EE-ManagedThreadFactory-engineScheduled-Thread-51) >>>>> [d5b7fdf5-9c37-4c1f-8543-a7bc75c993a5] Command id: >>>>> '4936d196-a891-4484-9cf5-fceaafbf3364 failed child command status for >>>>> step 'MERGE_STATUS' >>>>> 2018-05-11 03:38:01,501-03 INFO [org.ovirt.engine.core.bll.sna >>>>> pshots.RemoveSnapshotSingleDiskLiveCommandCallback] >>>>> (EE-ManagedThreadFactory-engineScheduled-Thread-51) >>>>> [d5b7fdf5-9c37-4c1f-8543-a7bc75c993a5] Command >>>>> 'RemoveSnapshotSingleDiskLive' id: '4936d196-a891-4484-9cf5-fceaafbf3364' >>>>> child commands '[8da5f261-7edd-4930-8d9d-d34f232d84b3, >>>>> 1c320f4b-7296-43c4-a3e6-8a868e23fc35, >>>>> a0e9e70c-cd65-4dfb-bd00-076c4e99556a]' >>>>> executions were completed, status 'FAILED' >>>>> 2018-05-11 03:38:02,513-03 ERROR [org.ovirt.engine.core.bll.sna >>>>> pshots.RemoveSnapshotSingleDiskLiveCommand] >>>>> (EE-ManagedThreadFactory-engineScheduled-T
[ovirt-users] Re: strange issue: vm lost info on disk
Benny, thanks for your reply. OK, so the first step is removing the snapshot; then what do you suggest?

2018-05-12 15:23 GMT-03:00 Nir Soffer : > On Sat, 12 May 2018, 11:32 Benny Zlotnik, wrote: > >> Using the auto-generated snapshot is generally a bad idea as it's >> inconsistent, >> > > What do you mean by inconsistant? > > > you should remove it before moving further >> >> On Fri, May 11, 2018 at 7:25 PM, Juan Pablo >> wrote: >> >>> I rebooted it with no luck, them I used the auto-gen snapshot , same >>> luck. >>> attaching the logs in gdrive >>> >>> thanks in advance >>> >>> 2018-05-11 12:50 GMT-03:00 Benny Zlotnik : >>> >>>> I see here a failed attempt: >>>> 2018-05-09 16:00:20,129-03 ERROR [org.ovirt.engine.core.dal. >>>> dbbroker.auditloghandling.AuditLogDirector] >>>> (EE-ManagedThreadFactory-engineScheduled-Thread-67) >>>> [bd8eeb1d-f49a-4f91-a521-e0f31b4a7cbd] EVENT_ID: >>>> USER_MOVED_DISK_FINISHED_FAILURE(2,011), User admin@internal-authz >>>> have failed to move disk mail02-int_Disk1 to domain 2penLA. >>>> >>>> Then another: >>>> 2018-05-09 16:15:06,998-03 ERROR [org.ovirt.engine.core.dal. >>>> dbbroker.auditloghandling.AuditLogDirector] >>>> (EE-ManagedThreadFactory-engineScheduled-Thread-34) >>>> [] EVENT_ID: USER_MOVED_DISK_FINISHED_FAILURE(2,011), User >>>> admin@internal-authz have failed to move disk mail02-int_Disk1 to >>>> domain 2penLA. >>>> >>>> Here I see a successful attempt: >>>> 2018-05-09 21:58:42,628-03 INFO [org.ovirt.engine.core.dal. >>>> dbbroker.auditloghandling.AuditLogDirector] (default task-50) >>>> [940b051c-8c63-4711-baf9-f3520bb2b825] EVENT_ID: >>>> USER_MOVED_DISK(2,008), User admin@internal-authz moving disk >>>> mail02-int_Disk1 to domain 2penLA.
>>>> >>>> >>>> Then, in the last attempt I see the attempt was successful but live >>>> merge failed: >>>> 2018-05-11 03:37:59,509-03 ERROR >>>> [org.ovirt.engine.core.bll.MergeStatusCommand] >>>> (EE-ManagedThreadFactory-commandCoordinator-Thread-2) >>>> [d5b7fdf5-9c37-4c1f-8543-a7bc75c993a5] Failed to live merge, still in >>>> volume chain: [5d9d2958-96bc-49fa-9100-2f33a3ba737f, >>>> 52532d05-970e-4643-9774-96c31796062c] >>>> 2018-05-11 03:38:01,495-03 INFO [org.ovirt.engine.core.bll. >>>> SerialChildCommandsExecutionCallback] >>>> (EE-ManagedThreadFactory-engineScheduled-Thread-51) >>>> [d5b7fdf5-9c37-4c1f-8543-a7bc75c993a5] Command 'LiveMigrateDisk' (id: >>>> '115fc375-6018-4d59-b9f2-51ee05ca49f8') waiting on child command id: >>>> '26bc52a4-4509-4577-b342-44a679bc628f' type:'RemoveSnapshot' to >>>> complete >>>> 2018-05-11 03:38:01,501-03 ERROR [org.ovirt.engine.core.bll.snapshots. >>>> RemoveSnapshotSingleDiskLiveCommand] >>>> (EE-ManagedThreadFactory-engineScheduled-Thread-51) >>>> [d5b7fdf5-9c37-4c1f-8543-a7bc75c993a5] Command id: >>>> '4936d196-a891-4484-9cf5-fceaafbf3364 failed child command status for >>>> step 'MERGE_STATUS' >>>> 2018-05-11 03:38:01,501-03 INFO [org.ovirt.engine.core.bll.snapshots. >>>> RemoveSnapshotSingleDiskLiveCommandCallback] >>>> (EE-ManagedThreadFactory-engineScheduled-Thread-51) >>>> [d5b7fdf5-9c37-4c1f-8543-a7bc75c993a5] Command >>>> 'RemoveSnapshotSingleDiskLive' id: '4936d196-a891-4484-9cf5-fceaafbf3364' >>>> child commands '[8da5f261-7edd-4930-8d9d-d34f232d84b3, >>>> 1c320f4b-7296-43c4-a3e6-8a868e23fc35, >>>> a0e9e70c-cd65-4dfb-bd00-076c4e99556a]' >>>> executions were completed, status 'FAILED' >>>> 2018-05-11 03:38:02,513-03 ERROR [org.ovirt.engine.core.bll.snapshots. 
>>>> RemoveSnapshotSingleDiskLiveCommand] >>>> (EE-ManagedThreadFactory-engineScheduled-Thread-2) >>>> [d5b7fdf5-9c37-4c1f-8543-a7bc75c993a5] Merging of snapshot >>>> '319e8bbb-9efe-4de4-a9a6-862e3deb891f' images '52532d05-970e-4643-9774- >>>> 96c31796062c'..'5d9d2958-96bc-49fa-9100-2f33a3ba737f' failed. Images >>>> have been marked illegal and can no longer be previewed or reverted to. >>>> Please retry Live Merge on the snapshot to complete the operation. >>>> 2018-05-11 03:38:02
[ovirt-users] Re: strange issue: vm lost info on disk
I rebooted it with no luck, then I used the auto-gen snapshot, same luck. Attaching the logs in Google Drive.

Thanks in advance

2018-05-11 12:50 GMT-03:00 Benny Zlotnik : > I see here a failed attempt: > 2018-05-09 16:00:20,129-03 ERROR [org.ovirt.engine.core.dal.dbb > roker.auditloghandling.AuditLogDirector] > (EE-ManagedThreadFactory-engineScheduled-Thread-67) > [bd8eeb1d-f49a-4f91-a521-e0f31b4a7cbd] EVENT_ID: > USER_MOVED_DISK_FINISHED_FAILURE(2,011), User admin@internal-authz have > failed to move disk mail02-int_Disk1 to domain 2penLA. > > Then another: > 2018-05-09 16:15:06,998-03 ERROR [org.ovirt.engine.core.dal.dbb > roker.auditloghandling.AuditLogDirector] > (EE-ManagedThreadFactory-engineScheduled-Thread-34) > [] EVENT_ID: USER_MOVED_DISK_FINISHED_FAILURE(2,011), User > admin@internal-authz have failed to move disk mail02-int_Disk1 to domain > 2penLA. > > Here I see a successful attempt: > 2018-05-09 21:58:42,628-03 INFO [org.ovirt.engine.core.dal.dbb > roker.auditloghandling.AuditLogDirector] (default task-50) > [940b051c-8c63-4711-baf9-f3520bb2b825] EVENT_ID: USER_MOVED_DISK(2,008), > User admin@internal-authz moving disk mail02-int_Disk1 to domain 2penLA. > > > Then, in the last attempt I see the attempt was successful but live merge > failed: > 2018-05-11 03:37:59,509-03 ERROR > [org.ovirt.engine.core.bll.MergeStatusCommand] > (EE-ManagedThreadFactory-commandCoordinator-Thread-2) > [d5b7fdf5-9c37-4c1f-8543-a7bc75c993a5] Failed to live merge, still in > volume chain: [5d9d2958-96bc-49fa-9100-2f33a3ba737f, > 52532d05-970e-4643-9774-96c31796062c] > 2018-05-11 03:38:01,495-03 INFO [org.ovirt.engine.core.bll.
> SerialChildCommandsExecutionCallback] > (EE-ManagedThreadFactory-engineScheduled-Thread-51) > [d5b7fdf5-9c37-4c1f-8543-a7bc75c993a5] Command 'LiveMigrateDisk' (id: > '115fc375-6018-4d59-b9f2-51ee05ca49f8') waiting on child command id: > '26bc52a4-4509-4577-b342-44a679bc628f' type:'RemoveSnapshot' to complete > 2018-05-11 03:38:01,501-03 ERROR [org.ovirt.engine.core.bll.snapshots. > RemoveSnapshotSingleDiskLiveCommand] > (EE-ManagedThreadFactory-engineScheduled-Thread-51) > [d5b7fdf5-9c37-4c1f-8543-a7bc75c993a5] Command id: > '4936d196-a891-4484-9cf5-fceaafbf3364 failed child command status for > step 'MERGE_STATUS' > 2018-05-11 03:38:01,501-03 INFO [org.ovirt.engine.core.bll.snapshots. > RemoveSnapshotSingleDiskLiveCommandCallback] > (EE-ManagedThreadFactory-engineScheduled-Thread-51) > [d5b7fdf5-9c37-4c1f-8543-a7bc75c993a5] Command > 'RemoveSnapshotSingleDiskLive' id: '4936d196-a891-4484-9cf5-fceaafbf3364' > child commands '[8da5f261-7edd-4930-8d9d-d34f232d84b3, > 1c320f4b-7296-43c4-a3e6-8a868e23fc35, a0e9e70c-cd65-4dfb-bd00-076c4e99556a]' > executions were completed, status 'FAILED' > 2018-05-11 03:38:02,513-03 ERROR [org.ovirt.engine.core.bll.snapshots. > RemoveSnapshotSingleDiskLiveCommand] > (EE-ManagedThreadFactory-engineScheduled-Thread-2) > [d5b7fdf5-9c37-4c1f-8543-a7bc75c993a5] Merging of snapshot > '319e8bbb-9efe-4de4-a9a6-862e3deb891f' images '52532d05-970e-4643-9774- > 96c31796062c'..'5d9d2958-96bc-49fa-9100-2f33a3ba737f' failed. Images have > been marked illegal and can no longer be previewed or reverted to. Please > retry Live Merge on the snapshot to complete the operation. > 2018-05-11 03:38:02,519-03 ERROR [org.ovirt.engine.core.bll.snapshots. > RemoveSnapshotSingleDiskLiveCommand] > (EE-ManagedThreadFactory-engineScheduled-Thread-2) > [d5b7fdf5-9c37-4c1f-8543-a7bc75c993a5] Ending command > 'org.ovirt.engine.core.bll.snapshots.RemoveSnapshotSingleDiskLiveCommand' > with failure. > 2018-05-11 03:38:03,530-03 INFO [org.ovirt.engine.core.bll. 
> ConcurrentChildCommandsExecutionCallback] > (EE-ManagedThreadFactory-engineScheduled-Thread-37) > [d5b7fdf5-9c37-4c1f-8543-a7bc75c993a5] Command 'RemoveSnapshot' id: > '26bc52a4-4509-4577-b342-44a679bc628f' child commands > '[4936d196-a891-4484-9cf5-fceaafbf3364]' executions were completed, > status 'FAILED' > 2018-05-11 03:38:04,548-03 ERROR > [org.ovirt.engine.core.bll.snapshots.RemoveSnapshotCommand] > (EE-ManagedThreadFactory-engineScheduled-Thread-66) > [d5b7fdf5-9c37-4c1f-8543-a7bc75c993a5] Ending command > 'org.ovirt.engine.core.bll.snapshots.RemoveSnapshotCommand' with failure. > 2018-05-11 03:38:04,557-03 INFO > [org.ovirt.engine.core.bll.snapshots.RemoveSnapshotCommand] > (EE-ManagedThreadFactory-engineScheduled-Thread-66) > [d5b7fdf5-9c37-4c1f-8543-a7bc75c993a5] Lock freed to object > 'EngineLock:{exclusiveLocks='[4808bb70-c9cc-4286-aa39- > 16b5798213ac=LIVE_STORAGE_MIGRATI
[ovirt-users] strange issue: vm lost info on disk
Hi! I'm struggling with an ongoing problem: after migrating a VM's disk from an iSCSI domain to an NFS one, with oVirt reporting that the migration was successful, I see there's no data 'inside' the VM's disk. We've never had this issue with oVirt, so I'm puzzled about the root cause and whether there's a chance of recovering the information. Can you please help me troubleshoot this one? I would really appreciate it =) Running oVirt 4.2.1 here! Thanks in advance, JP ___ Users mailing list -- users@ovirt.org To unsubscribe send an email to users-le...@ovirt.org
Re: [ovirt-users] Which hardware are you using for oVirt
Andy, I'm using a 2 node cluster: 2x Supermicro 6017 (2x Intel 2420, 12C/24T each node), 384 GB RAM total, 10 GbE, all hosted engine via NFS. Storage side: 2x SC836BE16-R1K28B (192 GB ARC cache) with RAID 10 ZFS + Intel SLOG, serving iSCSI at 10 GbE. 80 VMs more or less. Regards, 2018-03-25 4:36 GMT-03:00 Andy Michielsen : > Hello Alex, > > Thanks for sharing. Much appreciated. > > I believe my setup would need 96 GB of RAM in each host, and would need > about at least 3 TB of storage. Probably 4 TB would be better if I want to > work with snapshots. (Will be running mostly Windows 2016 servers or > Windows 10 desktops with 6 GB of RAM and 100 GB of disk) > > I agree that a 10 Gb network for storage would be very beneficial. > > Now if I can figure out how to set up a glusterfs on a 3 node cluster in > oVirt 4.2 just for the data storage, I'm golden to get started. :-) > > Kind regards. > > On 24 Mar 2018, at 20:08, Alex K wrote: > > I have 2 or 3 node clusters with the following hardware (all with self-hosted > engine): > > 2 node cluster: > RAM: 64 GB per host > CPU: 8 cores per host > Storage: 4x 1TB SAS in RAID10 > NIC: 2x Gbit > VMs: 20 > > The above, although I would like to have had a third NIC for gluster > storage redundancy, has been running smoothly for quite some time and without > performance issues. > The VMs it is running are not high on IO (mostly small Linux servers). > > 3 node clusters: > RAM: 32 GB per host > CPU: 16 cores per host > Storage: 5x 600GB in RAID5 (not ideal but I had to gain some storage space > without purchasing extra disks) > NIC: 6x Gbit > VMs: less than 10 large Windows VMs (Windows 2016 server and Windows 10) > > For your setup (30 VMs) I would rather go with RAID10 SAS disks and at > least a dual 10Gbit NIC dedicated to the gluster traffic only. > > Alex > > > On Sat, Mar 24, 2018 at 1:24 PM, Andy Michielsen < > andy.michiel...@gmail.com> wrote: > >> Hello Andrei, >> >> Thank you very much for sharing info on your hardware setup. 
Very >> informative. >> >> At this moment I have my oVirt engine on our VMware environment, which is >> fine for good backup and restore. >> >> I have 4 nodes running now, all different in make and model, with local >> storage, and it works but lacks performance a bit. >> >> But I can get my hands on some old Dell R415s with 96 GB of RAM and 2 >> quadcores and 6x 1 Gb NICs. They all come with 2x 146 GB 15,000 rpm >> hard disks. This isn't bad but I will add more RAM for starters. Also I >> would like to have some good redundant storage for this too, and the servers >> have limited space to add that. >> >> Hopefully others will also share their setups and experience like you did. >> >> Kind regards. >> >> On 24 Mar 2018, at 10:35, Andrei Verovski wrote: >> >> Hi, >> >> HP ProLiant DL380, dual Xeon >> 120 GB RAID L1 for system >> 2 TB RAID L10 for VM disks >> 5 VMs, 3 Linux, 2 Windows >> Total CPU load most of the time is low, with a high level of activity related >> to disk. >> Host engine under a KVM appliance on SuSE, can be easily moved, backed up, >> copied, experimented with, etc. >> >> You'll have to use servers with more RAM and storage than main. >> More than one NIC is required if some of your VMs are on different subnets, >> e.g. 1 in the internal zone and a 2nd on the DMZ. >> For your setup, 10 GB NICs + L3 switch for ovirtmgmt. >> >> BTW, I would suggest having several separate hardware RAIDs unless you >> have SSD, otherwise the limit of the disk system I/O will be a bottleneck. >> Consider an SSD L1 RAID for heavy-loaded databases. >> >> *Please note many cheap SSDs do NOT work reliably with SAS controllers >> even in SATA mode*. >> >> For example, I intended to use 2x WD Green SSDs configured as RAID L1 for >> the OS. >> It was possible to install the system, yet under heavy load simulated with >> iozone the disk system froze, rendering the OS unbootable. >> The same crash was experienced with a 512GB KingFast SSD connected to a >> Broadcom/AMCC SAS RAID card. 
>> >> >> On 03/24/2018 10:33 AM, Andy Michielsen wrote: >> >> Hi all, >> >> Not sure if this is the place to be asking this, but I was wondering which >> hardware you all are using and why, in order for me to see what I would be >> needing. >> >> I would like to set up a HA cluster consisting of 3 hosts to be able to run >> 30 VMs. >> The engine I can run on another server. The hosts can be fitted with the >> storage and share the space through glusterfs. I would think I will be >> needing at least 3 NICs but would be able to install OVN. (Are 1 Gb NICs >> sufficient?) >> >> Any input you guys would like to share would be greatly appreciated. >> >> Thanks, >> ___ >> Users mailing list >> Users@ovirt.org >> http://lists.ovirt.org/mailman/listinfo/users
Re: [ovirt-users] oVirt 4.2 Fresh Install - Storage Domain
As soon as you install oVirt, when you add a second storage domain it should switch to it as the master, not hosted_storage as you have. Are both NFS? JP 2018-03-17 15:45 GMT-03:00 Maton, Brett : > I have added a new domain, but it's not playing ball. > > hosted_storage Data (master) Active > vm_storage Data Active > > On 17 March 2018 at 17:24, Juan Pablo wrote: > >> You first need to configure a domain (it can be temporary), then import the >> storage domain. >> >> >> >> 2018-03-17 7:13 GMT-03:00 Maton, Brett : >> >>> I've just reinstalled ovirt 4.2 (ovirt-release42.rpm) on a clean CentOS >>> 7.4 host, all went fine however >>> >>> Now that the hosted engine is up, I've added a data storage domain but >>> it looks like the next step that detects / promotes the new data domain to >>> master isn't being triggered. >>> >>> Hosted engine and data domain are on NFS storage, /var/log/messages is >>> filling with these messages >>> >>> journal: ovirt-ha-agent ovirt_hosted_engine_ha.agent.h >>> osted_engine.HostedEngine.config ERROR Unable to identify the OVF_STORE >>> volume, falling back to initial vm.conf. Please ensure you already added >>> your first data domain for regular VMs >>> >>> >>> I ran into the same issue when I imported an existing domain. >>> >>> ___ >>> Users mailing list >>> Users@ovirt.org >>> http://lists.ovirt.org/mailman/listinfo/users >>> >>> >> > ___ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
Re: [ovirt-users] oVirt 4.2 Fresh Install - Storage Domain
You first need to configure a domain (it can be temporary), then import the storage domain. 2018-03-17 7:13 GMT-03:00 Maton, Brett : > I've just reinstalled ovirt 4.2 (ovirt-release42.rpm) on a clean CentOS > 7.4 host, all went fine however > > Now that the hosted engine is up, I've added a data storage domain but it > looks like the next step that detects / promotes the new data domain to > master isn't being triggered. > > Hosted engine and data domain are on NFS storage, /var/log/messages is > filling with these messages > > journal: ovirt-ha-agent ovirt_hosted_engine_ha.agent. > hosted_engine.HostedEngine.config ERROR Unable to identify the OVF_STORE > volume, falling back to initial vm.conf. Please ensure you already added > your first data domain for regular VMs > > > I ran into the same issue when I imported an existing domain. > > ___ > Users mailing list > Users@ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > ___ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
Re: [ovirt-users] Storage Performance
Hi, can you check IOPS and state the number of VMs? Run "iostat -x 1" for a while =) Isn't RAID discouraged? AFAIK Gluster likes JBOD, am I wrong? Regards, JP 2017-10-25 12:05 GMT-03:00 Bryan Sockel : > I have a question in regards to storage performance. I have a gluster > replica 3 volume that we are testing for performance. In my current > configuration, 1 server has 16x 1.2TB (10K 2.5 inch) drives configured in > RAID 10 with a 256k stripe. My 2nd server is configured with 4x 6TB (3.5 > inch drives) configured in RAID 10 with a 256k stripe. Each server is > configured with an 802.3ad bond (4x 1GB) of network links. Each server is > configured with write-back on the RAID controller. > > I am seeing a lot of network usage (a solid 3 Gbps) when I perform file > copies on the VM attached to that gluster volume, but I see spikes on the > disk IO when watching the dashboard through the cockpit interface. The > spikes are up to 1.5 Gbps, but I would say the average throughput is maybe > 256 Mbps. > > Is this to be expected, or should there be solid activity in the graphs for > disk IO? Is it better to use a 256K stripe or a 512K stripe on the hardware > RAID configuration? > > Eventually I plan on having the hardware match up for better performance. > > > Thanks > > Bryan > > ___ > Users mailing list > Users@ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > ___ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
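To make the iostat suggestion above concrete, here is a minimal sketch that filters extended iostat output down to per-device read/write IOPS. The field indexes ($4/$5 for r/s and w/s) and the device-name pattern are assumptions that vary between sysstat versions, so check them against your own `iostat -x` header line first.

```shell
# Summarize per-device IOPS from `iostat -x` output.
# ASSUMPTION: r/s and w/s are the 4th and 5th columns, which differs
# across sysstat versions -- verify against your iostat header first.
iostat_iops() {
  awk '/^(sd|dm-|nvme)/ { printf "%s reads/s=%s writes/s=%s\n", $1, $4, $5 }'
}

# Typical use while reproducing the load (run it "for a while", as suggested):
#   iostat -x 1 | iostat_iops
```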
Re: [ovirt-users] Problems to log into portal
Thanks Yedidyah for your reply. The FQDN is correct: /etc/ovirt-engine/engine.conf.d/10-setup-protocols.conf:ENGINE_FQDN=ovirt01.tecnica.tnu.com.uy I managed to connect to the admin portal using a ssh tunnel, but I can't do it via a regular connection. Any other hint? Regards, El 23/10/17 a las 02:40, Yedidyah Bar David escribió: > On Mon, Oct 23, 2017 at 2:04 AM, Juan Pablo Lorier wrote: >> Hi, >> >> I'm trying to log into a fresh install (4.1 on centos 7) but though I've >> set the hostname-ip mapping in /etc/hosts of the server and my desktop, >> it keep complaining with the error: >> >> The client is not authorized to request an authorization. It's required >> to access the system using FQDN. >> >> Engine log shows the same error: >> >> ERROR [org.ovirt.engine.core.sso.utils.SsoUtils] (default task-9) [] The >> client is not authorized to request an authorization. It's required to >> access the system using FQDN >> >> What else should I do to get access? > Do you use the exact same name you provided when prompted by engine-setup? > > Check this: > > grep ^ENGINE_FQDN /etc/ovirt-engine/engine.conf.d/*.conf > > If you want/need to use a different name, search the list archives > for "SSO_ALTERNATE_ENGINE_FQDNS". > > Regards, ___ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
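For reference, the SSO_ALTERNATE_ENGINE_FQDNS setting Yedidyah points to is typically applied as a small override file under /etc/ovirt-engine/engine.conf.d/. This is only a sketch: the file name 99-custom-sso.conf and the short alias are illustrative, while the FQDN is the one from the grep output above.

```shell
# /etc/ovirt-engine/engine.conf.d/99-custom-sso.conf
# (illustrative file name; list every extra name you connect with)
SSO_ALTERNATE_ENGINE_FQDNS="ovirt01 ovirt01.tecnica.tnu.com.uy"

# Then restart the engine so it picks up the override:
#   systemctl restart ovirt-engine
```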
[ovirt-users] Problems to log into portal
Hi, I'm trying to log into a fresh install (4.1 on CentOS 7), but though I've set the hostname-IP mapping in /etc/hosts of the server and my desktop, it keeps complaining with the error: The client is not authorized to request an authorization. It's required to access the system using FQDN. The engine log shows the same error: ERROR [org.ovirt.engine.core.sso.utils.SsoUtils] (default task-9) [] The client is not authorized to request an authorization. It's required to access the system using FQDN What else should I do to get access? Regards ___ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
Re: [ovirt-users] oVirt and FreeNAS
Using it here in production for the last 3 years, no problems so far. iSCSI and NFS shares. 2 servers on Supermicro X10SRL with Xeon E5-2603 v3 @ 1.60 GHz, 128 GB RAM each, 2 Intel SSDs for ZIL, 2 SSDs for L2ARC, 16 SATA disks, using IBM 1015 HBAs flashed in 'IT' mode. I had to tune it a lot to get the best performance, both on the nodes and on the storage side (sysctl); there is no rule of thumb. Also many changes by hand to get full multipath, failover, queue depth, request sizes, I/O schedulers, etc. Network cards: all Intel. Regards, 2017-08-15 5:50 GMT-03:00 Latchezar Filtchev : > Dear oVirt-ers, > > > > Just curious – does anyone use FreeNAS as storage for oVirt? My staging > environment is: two virtualization nodes, hosted engine, FreeNAS as > storage (iSCSI hosted storage, iSCSI Data (Master) domain and NFS shares as > ISO and export domains) > > > > Thank you! > > > > Best, > > Latcho > > > > ___ > Users mailing list > Users@ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > ___ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
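The sysctl tuning mentioned above usually means network buffer sizing for the 10 GbE iSCSI/NFS paths. Since the poster stresses there is no rule of thumb, the values below are only illustrative starting points to benchmark on your own hardware, not recommendations taken from this thread; apply with `sysctl --system` and measure each change.

```
# /etc/sysctl.d/90-storage-net.conf -- illustrative starting points only
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
```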
Re: [ovirt-users] production DC non responsive
Fred, based on the issue and the time constraints, I had to fail over to another DC and recovered the primary site from a backup. I appreciate your help, JP 2017-08-09 4:21 GMT-03:00 Fred Rolland : > Can you provide engine and VDSM logs ? > > On Tue, Aug 8, 2017 at 8:17 AM, Juan Pablo > wrote: > >> Hi guys, good morning/night. >> First of all, thanks to all the community and the whole team; you are >> doing a great effort. >> >> Today I'm having an issue with ovirt 4.1.2 running as hosted engine over >> NFS, and data storage over iSCSI. >> I found out about the problem when I tried to migrate one VM to another host >> and I got an error, so I powered off the VMs on that host and started them >> on the backup host with no problem. Then I set the first host into >> maintenance mode and after some minutes I restarted it (as usual). When it >> came back online, the whole DC turned RED as unavailable. >> >> Can anyone please help me out? >> >> Thanks, JP >> >> ___ >> Users mailing list >> Users@ovirt.org >> http://lists.ovirt.org/mailman/listinfo/users >> >> > ___ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
[ovirt-users] production DC non responsive
Hi guys, good morning/night. First of all, thanks to all the community and the whole team; you are doing a great effort. Today I'm having an issue with ovirt 4.1.2 running as hosted engine over NFS, and data storage over iSCSI. I found out about the problem when I tried to migrate one VM to another host and I got an error, so I powered off the VMs on that host and started them on the backup host with no problem. Then I set the first host into maintenance mode and after some minutes I restarted it (as usual). When it came back online, the whole DC turned RED as unavailable. Can anyone please help me out? Thanks, JP ___ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
Re: [ovirt-users] problems migrating to hosted engine from bare metal
hosted engine lives on the hosted engine share you defined at the begining . it does not migrate later to another storage, if you want that you need to re-deploy. hows your hardware and logical setup? will it change ? 2017-06-13 10:59 GMT-03:00 cmc : > Hi, > > I created a new host to deploy a hosted engine, and then used a backup > from the bare metal engine and restored this, as per the procedure in: > > http://www.ovirt.org/documentation/self-hosted/chap-Migrating_from_Bare_Metal_to_an_EL-Based_Self-Hosted_Environment/ > > Everything worked fine up until step 15 ('Continue setup') as the > script said the engine was not responding. I tried the reboot option > (option 3), but still it would not connect. So I could not do the > final step involving the internal CA, adding the host to an existing > cluster (of which there were two other hosts). I was able to connect > via vnc and ssh fine to the engine, and from here I could see that the > ovirt-engine service was up. I had to install the aaa-ldap extension > package to enable ldap auth separately however, but once done I was > able to log in, and it showed the old cluster as it was on the bare > metal engine. I added the host that I created the hosted engine on, > and it installed various packages and then I configured the network > and it looked fine, apart from the fact that I could not see a VM > named 'HostedEngine' in the list of VMs. I think however that this was > not a properly working setup, as the NFS storage I used to setup the > hosted engine became unavailable and I think this killed the hosted > engine, which caused it to reboot the host it was on. The hosted > engine has not come back since then, so I'm guessing it either isn't > properly set up for HA or it needs the NFS storage or something else > was not properly done by me in the setup. I've restarted the bare > metal engine for now as I needed it running for now. > > My questions are: > > 1. 
My understanding is that the NFS storage is initially used to > create the hosted engine disk image, and is temporary, and that the > hosted engine later gets migrated to the storage used by the rest of > the cluster (which in my case is directly attached to the hosts via > fibre channel). I suspect that this did not happen. The bare metal > engine had some local ISO storage (on a hard disk local to it), which > will not be replicated to the hosted engine VM - will this cause a > problem for the deployment? I can create some new ISO storage later if > not. > > 2. What is the recommended way to recover from this situation? Should > I just run 'hosted-engine --deploy' again and try and find out what is > going wrong at step 15? > > I can probably get the disk image that was on NFS and mount it to find > out what went wrong on the initial deployment, or I can run the > deployment again and then get the log when it fails at step 15. > > Ovirt version was 4.1.2.2 > > Thanks for any help, > > Cam > ___ > Users mailing list > Users@ovirt.org > http://lists.ovirt.org/mailman/listinfo/users ___ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
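When a hosted-engine deploy stalls at the "engine not responding" step as described above, the usual first checks are the HA daemons' view of the engine VM and their logs. These are standard oVirt commands and log locations, shown here only as a troubleshooting sketch (run them on the host you deployed on):

```
hosted-engine --vm-status
systemctl status ovirt-ha-agent ovirt-ha-broker
tail -n 100 /var/log/ovirt-hosted-engine-ha/agent.log
tail -n 100 /var/log/ovirt-hosted-engine-ha/broker.log
tail -n 100 /var/log/ovirt-hosted-engine-setup/*.log
```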
Re: [ovirt-users] default login info for repository images
have you followed : https://www.ovirt.org/documentation/install-guide/Installation_Guide directions? 2017-06-13 14:31 GMT-03:00 Dan Sullivan : > I've created a VM using a template imported from a Fedora 24 image in the > repository. Now that it is created, what credentials do I use to log into > it from the console? > ___ > Users mailing list > Users@ovirt.org > http://lists.ovirt.org/mailman/listinfo/users ___ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
Re: [ovirt-users] New installation
David, if you are not centralizing on a storage[nfs/iscsi], you should deploy gluster to make the disks available to all the virt hosts. regards, 2017-06-13 14:43 GMT-03:00 david caughey : > Hi Folks, > > I just got the go ahead to install oVirt to use as a lab. > The servers are: > 1xdl360 4x30010k in raid 10 for OS (Manager) should this be clustered with > another server for resiliency > 5xdl380 2x30010k for OS 6x1TB7.2k RAID5 on each for a data store > It will all be behind a proxy and be connected with 1GB links, (2x4 bonds) > > My plan is to have 1 manager as this is a lab scenario and not production, > (yet), or is it better to have a cluster. > > Is it ok to have a data store on each host, (they won't let me have > dedicated storage yet)? > Is it wise to share these data_stores between hosts or should 1 store be > dedicated to each host individually? > > There is no need for performance, basically as long as it runs it will work. > The final intention is to have templates for OpenStack and CEPH plus lots of > Linux examples set up with SDN etc. > We have a 90% Windows shop but it is all rapidly changing to OpenStack et al > and I want to try and give people the opportunity to use Linux in the lab > before they are let lose in production. > Plus it gives me the chance to show management exactly what ovirt can do. > > Any help, advice or links would be greatly appreciated, > > BR/David > > ___ > Users mailing list > Users@ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > ___ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
Re: [ovirt-users] Seamless SAN HA failovers with oVirt?
Chris, if you have active-active with multipath: you upgrade one system, reboot it, check that it came back active, then upgrade the other. Seamless; no service interruption; not locked to any storage solution. Multipath was designed exactly for that. 2017-06-06 11:03 GMT-03:00 Chris Adams : > Once upon a time, Juan Pablo said: > > I'm saying you can do it with multipath and not rely on TrueNAS/FreeNAS, > > with an active/active configuration on the virt side instead of > > active/passive on the storage side. > > But there's still only one active system (the active TrueNAS node) > connected to the hard drives, and the only way to upgrade is to reboot > it. Multipath doesn't bypass that. > > -- > Chris Adams > ___ > Users mailing list > Users@ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > ___ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
Re: [ovirt-users] Seamless SAN HA failovers with oVirt?
I'm saying you can do it with multipath and not rely on TrueNAS/FreeNAS, with an active/active configuration on the virt side instead of active/passive on the storage side. 2017-06-06 10:44 GMT-03:00 Chris Adams : > Once upon a time, Juan Pablo said: > > I think it's not related to something on the TrueNAS side. if you are > using > > iscsi multipath you should be using round-robin > > TrueNAS HA is active/standby, so multipath has nothing to do with > rebooting/upgrading a TrueNAS. > > -- > Chris Adams > ___ > Users mailing list > Users@ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > ___ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
Re: [ovirt-users] Seamless SAN HA failovers with oVirt?
I think it's not related to something on the TrueNAS side. If you are using iSCSI multipath you should be using round-robin; if one of the paths goes down you still have the other path with your information, so no sanlock errors. Unfortunately, if you want iSCSI mpath on oVirt, it's preferred to edit the config by hand and test. Also, with multipath, you can tell the OS to 'stop using' one of the paths (represented as a disk). So, for example, multipath -ll should* be looking like this: 36589cfc0034968eacf965e3c dm-17 FreeNAS ,iSCSI Disk size=50G features='0' hwhandler='0' wp=rw `-+- policy='round-robin 0' prio=0 status=enabled |- 31:0:0:5 sdl 8:176 failed faulty running |- 32:0:0:5 sdm 8:192 failed faulty running |- 35:0:0:5 sdo 8:224 failed faulty running `- 34:0:0:5 sdn 8:208 failed faulty running and working correctly like this: 36589cfc00ee205ed6757fa724bac dm-2 FreeNAS ,iSCSI Disk size=5.5T features='0' hwhandler='0' wp=rw `-+- policy='round-robin 0' prio=1 status=active |- 13:0:0:10 sdi 8:128 active ready running |- 15:0:0:10 sdk 8:160 active ready running |- 28:0:0:10 sdg 8:96 active ready running `- 29:0:0:10 sdj 8:144 active ready running (yes, they are different ones, I won't disconnect one path just to show an example =) ) Hope I clarified a bit. Can't tell how it would work on NFS or if it works at all. 2017-06-06 10:06 GMT-03:00 Chris Adams : > Once upon a time, Sven Achtelik said: > > I was failing over by rebooting one of the TrueNas nodes and this took > some time for the other node to take over. I was thinking about asking the > TN guys if there is a command or procedure to speed up the failover. > > That's the way TrueNAS failover works; there is no "graceful" failover, > you just reboot the active node. > > -- > Chris Adams > ___ > Users mailing list > Users@ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > ___ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
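For reference, pinning FreeNAS iSCSI LUNs to the round-robin policy shown in the output above is done with a device stanza in /etc/multipath.conf. This is only a sketch matching the vendor/product strings in that output; the no_path_retry value is an assumption, and any change should be verified afterwards with `multipath -ll` (and `multipathd reconfigure`).

```
devices {
    device {
        vendor                 "FreeNAS"
        product                "iSCSI Disk"
        path_grouping_policy   multibus
        path_selector          "round-robin 0"
        no_path_retry          12
    }
}
```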
Re: [ovirt-users] High latency on storage domains and sanlock renewal error
can you please give the output of: multipath -ll and iscsiadm -m session -P3 JP 2017-05-13 6:48 GMT-03:00 Stefano Bovina : > Hi, > > 2.6.32-696.1.1.el6.x86_64 > 3.10.0-514.10.2.el7.x86_64 > > I tried ioping test from different group of servers using multipath, > members of different storage group (different lun, different raid ecc), and > everyone report latency. > I tried the same test (ioping) on a server with powerpath instead of > multipath, with a dedicated raid group and ioping don't report latency. > > > 2017-05-13 2:00 GMT+02:00 Juan Pablo : > >> sorry to jump in, but what kernel version are you using? had similar >> issue with kernel 4.10/4.11 >> >> >> 2017-05-12 16:36 GMT-03:00 Stefano Bovina : >> >>> Hi, >>> a little update: >>> >>> The command multipath -ll hung when executed on the host while the >>> problem occur (nothing logged in /var/log/messages or dmesg). >>> >>> I tested latency with ioping: >>> ioping /dev/6a386652-629d-4045-835b-21d2f5c104aa/metadata >>> >>> Usually it return "time=15.6 ms", sometimes return "time=19 s" (yes, >>> seconds) >>> >>> Systems are up to date and I tried both path_checker (emc_clariion and >>> directio), without results. >>> (https://access.redhat.com/solutions/139193, it refers to the Rev A31 >>> of EMC document; last is A42 and suggest emc_clariion). >>> >>> Any idea or suggestion? >>> >>> Thanks, >>> >>> Stefano >>> >>> 2017-05-08 11:56 GMT+02:00 Yaniv Kaul : >>> >>>> >>>> >>>> On Mon, May 8, 2017 at 11:50 AM, Stefano Bovina >>>> wrote: >>>> >>>>> Yes, >>>>> this configuration is the one suggested by EMC for EL7. >>>>> >>>> >>>> https://access.redhat.com/solutions/139193 suggest that for alua, the >>>> patch checker needs to be different. >>>> >>>> Anyway, it is very likely that you have storage issues - they need to >>>> be resolved first and I believe they have little to do with oVirt at the >>>> moment. >>>> Y. >>>> >>>> >>>>> >>>>> By the way, >>>>> "The parameters rr_min_io vs. 
rr_min_io_rq mean the same thing but are >>>>> used for device-mapper-multipath on differing kernel versions." and >>>>> rr_min_io_rq default value is 1, rr_min_io default value is 1000, so it >>>>> should be fine. >>>>> >>>>> >>>>> 2017-05-08 9:39 GMT+02:00 Yaniv Kaul : >>>>> >>>>>> >>>>>> On Sun, May 7, 2017 at 1:27 PM, Stefano Bovina >>>>>> wrote: >>>>>> >>>>>>> Sense data are 0x0/0x0/0x0 >>>>>> >>>>>> >>>>>> Interesting - first time I'm seeing 0/0/0. The 1st is usually 0x2 >>>>>> (see [1]), and then the rest [2], [3] make sense. >>>>>> >>>>>> A google search found another user with Clarion with the exact same >>>>>> error[4], so I'm leaning toward misconfiguration of multipathing/clarion >>>>>> here. >>>>>> >>>>>> Is your multipathing configuration working well for you? >>>>>> Are you sure it's a EL7 configuration? For example, I believe you >>>>>> should have rr_min_io_rq and not rr_min_io . >>>>>> Y. >>>>>> >>>>>> [1] http://www.t10.org/lists/2status.htm >>>>>> [2] http://www.t10.org/lists/2sensekey.htm >>>>>> [3] http://www.t10.org/lists/asc-num.htm >>>>>> [4] http://www.linuxquestions.org/questions/centos-111/multi >>>>>> path-problems-4175544908/ >>>>>> >>>>> >>>>> >>>> >>> >>> ___ >>> Users mailing list >>> Users@ovirt.org >>> http://lists.ovirt.org/mailman/listinfo/users >>> >>> >> > ___ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
Re: [ovirt-users] High latency on storage domains and sanlock renewal error
sorry to jump in, but what kernel version are you using? had similar issue with kernel 4.10/4.11 2017-05-12 16:36 GMT-03:00 Stefano Bovina : > Hi, > a little update: > > The command multipath -ll hung when executed on the host while the problem > occur (nothing logged in /var/log/messages or dmesg). > > I tested latency with ioping: > ioping /dev/6a386652-629d-4045-835b-21d2f5c104aa/metadata > > Usually it return "time=15.6 ms", sometimes return "time=19 s" (yes, > seconds) > > Systems are up to date and I tried both path_checker (emc_clariion and > directio), without results. > (https://access.redhat.com/solutions/139193, it refers to the Rev A31 of > EMC document; last is A42 and suggest emc_clariion). > > Any idea or suggestion? > > Thanks, > > Stefano > > 2017-05-08 11:56 GMT+02:00 Yaniv Kaul : > >> >> >> On Mon, May 8, 2017 at 11:50 AM, Stefano Bovina wrote: >> >>> Yes, >>> this configuration is the one suggested by EMC for EL7. >>> >> >> https://access.redhat.com/solutions/139193 suggest that for alua, the >> patch checker needs to be different. >> >> Anyway, it is very likely that you have storage issues - they need to be >> resolved first and I believe they have little to do with oVirt at the >> moment. >> Y. >> >> >>> >>> By the way, >>> "The parameters rr_min_io vs. rr_min_io_rq mean the same thing but are >>> used for device-mapper-multipath on differing kernel versions." and >>> rr_min_io_rq default value is 1, rr_min_io default value is 1000, so it >>> should be fine. >>> >>> >>> 2017-05-08 9:39 GMT+02:00 Yaniv Kaul : >>> On Sun, May 7, 2017 at 1:27 PM, Stefano Bovina wrote: > Sense data are 0x0/0x0/0x0 Interesting - first time I'm seeing 0/0/0. The 1st is usually 0x2 (see [1]), and then the rest [2], [3] make sense. A google search found another user with Clarion with the exact same error[4], so I'm leaning toward misconfiguration of multipathing/clarion here. Is your multipathing configuration working well for you? 
Are you sure it's a EL7 configuration? For example, I believe you should have rr_min_io_rq and not rr_min_io . Y. [1] http://www.t10.org/lists/2status.htm [2] http://www.t10.org/lists/2sensekey.htm [3] http://www.t10.org/lists/asc-num.htm [4] http://www.linuxquestions.org/questions/centos-111/multi path-problems-4175544908/ >>> >>> >> > > ___ > Users mailing list > Users@ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > ___ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
Re: [ovirt-users] Network Performance
ok, so those numbers are not bad. just to check, can you please verify same test from x server to vm? (server different than host vm please ). regards, JP 2017-05-10 16:21 GMT-03:00 Bryan Sockel : > Hi Juan, > > Currently we are seeing the lag/delay in the VM's. The slowness comes from > when we access applications from the network vs. locally. For instance > Putty opens quickly when run locally, but when run from the network it may > take a minute or so to launch. > > I am watching the Nload graph on the physical server and not seeing > any/minimal traffic go out on the vlan the vm is running on. > > Currently my bonding options are setup as follows: > > BONDING_OPTS='mode=4 miimon=1' > > I will be changing the options to: > mode=4 miimon=100 xmit_hash_policy=2 > > > > Iperf Host to host before Bonding Options change > [ 4] local 10.20.101.181 port 5001 connected with 10.20.101.183 port 54892 > [ ID] Interval Transfer Bandwidth > [ 4] 0.0-10.0 sec 1.09 GBytes 935 Mbits/sec > > Host to vm Before Bonding Options Change > > [ 3] local 10.20.101.207 port 33142 connected with 10.20.101.181 port 5001 > [ ID] Interval Transfer Bandwidth > [ 3] 0.0-10.0 sec 23.6 GBytes 20.2 Gbits/sec > > Host to Gluster Servers > > [ 5] local 10.20.101.181 port 5001 connected with 10.20.101.185 port 51588 > [ 5] 0.0-10.0 sec 1.07 GBytes 915 Mbits/sec > [ 4] local 10.20.101.181 port 5001 connected with 10.20.101.187 port 45548 > [ 4] 0.0-10.0 sec 946 MBytes 790 Mbits/sec > > > After Bonding Changes: > > Host to Host > [ 3] local 10.20.101.183 port 57656 connected with 10.20.101.181 port 5001 > [ ID] Interval Transfer Bandwidth > [ 3] 0.0-10.0 sec 1.04 GBytes 897 Mbits/sec > > Host to VM > > [ 3] local 10.20.101.207 port 43686 connected with 10.20.101.181 port 5001 > [ ID] Interval Transfer Bandwidth > [ 3] 0.0-10.0 sec 26.2 GBytes 22.5 Gbits/sec > > Host to Storage > [ 3] local 10.20.101.185 port 51590 connected with 10.20.101.181 port 5001 > [ ID] Interval Transfer Bandwidth > [ 3] 
0.0-10.0 sec 1.02 GBytes 876 Mbits/sec > > > VM was running on the same host i was testing the performance against. > > > After further testing and investigation i have noticed that Kaspersky AV > maybe the main factor. > > Thanks > > > > -Original Message- > From: Juan Pablo > To: Bryan Sockel > Cc: "users@ovirt.org" > Date: Wed, 10 May 2017 14:14:13 -0300 > Subject: Re: [ovirt-users] Network Performance > > Bryan, could you please elaborate your setup? do you see lag on your > virt-host or in your VM's? what have you tried so far to test? can you > please run: > "iperf -s" on your server (or where do you see lagg) > and "iperf -c $serveripaddress" replace $serveripaddress with your > interface IP. > and paste your output? > can you also descrie if this is a layer2 or layer 3 network? > > regards, > JP > > 2017-05-10 13:05 GMT-03:00 Bryan Sockel : >> >> I am doing some testing with our current ovirt setup and i am seeing some >> lagging going on when i attempt to launch or access files from a network >> share, or even run windows updates. >> >> My current setup is 4 X 1 GB Nic bond with multiple Vlan's attached. >> Server usage is currently low. I have also not setup any additional Network >> QoS and everything else is set to default. >> >> >> >> >> ___ >> Users mailing list >> Users@ovirt.org >> http://lists.ovirt.org/mailman/listinfo/users >> > > ___ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
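For reference, the bonding change described in this thread can be sketched as an ifcfg fragment. This is a minimal sketch: the BONDING_OPTS string follows the values quoted above (numeric xmit_hash_policy=2 maps to layer2+3 hashing), while the bond name and the other keys are assumptions to adapt to your host.

```shell
# Sketch of the bonding-option change discussed above (mode=4 is LACP/802.3ad).
# Device name is hypothetical; adapt to your host. Written to /tmp here so the
# fragment can be reviewed before copying to /etc/sysconfig/network-scripts/.
cat > /tmp/ifcfg-bond0.example <<'EOF'
DEVICE=bond0
TYPE=Bond
BONDING_MASTER=yes
ONBOOT=yes
BOOTPROTO=none
# miimon=100 polls link state every 100 ms; xmit_hash_policy=2 (layer2+3)
# spreads flows across slaves by MAC+IP instead of MAC only.
BONDING_OPTS="mode=4 miimon=100 xmit_hash_policy=2"
EOF
echo "wrote /tmp/ifcfg-bond0.example"
```

Each slave NIC then needs MASTER=bond0 and SLAVE=yes in its own ifcfg file, and the switch side must be configured for LACP for mode=4 to work.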
Re: [ovirt-users] Network Performance
Bryan, could you please elaborate on your setup? Do you see lag on your virt host or in your VMs? What have you tried so far to test? Can you please run "iperf -s" on your server (or wherever you see the lag) and "iperf -c $serveripaddress" (replace $serveripaddress with your interface IP), and paste your output? Can you also describe whether this is a layer 2 or layer 3 network? regards, JP 2017-05-10 13:05 GMT-03:00 Bryan Sockel : > I am doing some testing with our current ovirt setup and i am seeing some > lagging going on when i attempt to launch or access files from a network > share, or even run windows updates. > > My current setup is 4 X 1 GB Nic bond with multiple Vlan's attached. > Server usage is currently low. I have also not setup any additional Network > QoS and everything else is set to default. > > > > > ___ > Users mailing list > Users@ovirt.org > http://lists.ovirt.org/mailman/listinfo/users
Re: [ovirt-users] do we need some documentation maintainers?
I am! =) But I'm a bit unsure how/where to start. Suggestions/ideas are welcome. JP El 10/5/2017 12:20, "Barak Korren" wrote: > On 10 May 2017 at 14:18, Juan Pablo wrote: > > this was not the intention of the thread. so please stop throwing bananas > > each other. > > > > Lets focus on the subject here:do we need some documentation mainteiners? > > yes/no > > My answer would be definitely. > If you are volunteering, you would be very welcome. > > > -- > Barak Korren > RHV DevOps team , RHCE, RHCi > Red Hat EMEA > redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted >
Re: [ovirt-users] do we need some documentation maintainers?
this was not the intention of the thread. so please stop throwing bananas each other. Lets focus on the subject here:do we need some documentation mainteiners? yes/no regards, JP 2017-05-10 5:27 GMT-03:00 Fabrice Bacchella : > I'm might be saying things in a harsh way, but there is a real big problem > with public documentation at ovirt, especially the latest python sdk. > > I'm coding a python wrapper around the sdk4 and fighting a lot with > documentation. > > First I tried to google: > ovirt python sdk4 > ovirt python sdk 4 > ovirt rest api 4 > > I was finally given http://ovirt.github.io/ovirt-engine-sdk/master, but > http://ovirt.github.io returns a 404, the sdk is hidden under > http://ovirt.github.io/ovirt-engine-sdk/, a mini-site that is barely > usable, as it very hard to navigate between page. For example, when you're > at : http://ovirt.github.io/ovirt-engine-sdk/master/version.m.html, how > do you go to the first page ? > > If I try to gogle the title of this page, I found http://www.ovirt.org/ > develop/release-management/features/infra/python-sdk/ as first result and > it's even not in the first page of results . > > Or you can have a look at https://access.redhat.com/ > documentation/en-us/red_hat_virtualization/4.1/html- > single/python_sdk_guide/, that talks only about v3 sdk, which is > deprecated. > > The python sdk change log: > http://www.ovirt.org/develop/sdk/python-sdk-changelog/ > > At pypi, no links to the sdk documentation > https://pypi.python.org/pypi/ovirt-engine-sdk-python > > I requested the events codes, I was given that: > https://github.com/oVirt/ovirt-engine/blob/master/backend/manager/modules/ > common/src/main/java/org/ovirt/engine/core/common/AuditLogType.java#L726 > > Is there a formal list somewhere ? 
> > Le 10 mai 2017 à 08:21, Oved Ourfali a écrit : > > > > On Tue, May 9, 2017 at 11:46 PM, Fabrice Bacchella orange.fr> wrote: > >> I'm whining because ovirt is a wonderful product, peoples behind are >> nice, but ho boy, what about the execution ! >> >> > I couldn't agree more with James here. > It is good to whine, complain, and seek for improvements. We all want a > better project! > However, there are pleasant ways to express your thoughts and concerns. > You seem to choose unpleasant ones, which are also unproductive and > insulting to others. > That can result in people not taking your thoughts and concerns in a > serious way, as it is just unpleasant for them to read and reply to. > I remember someone already sent you the community code of conduct [1]. I > suggest you read that again before sharing your next post with us. > > Oved > > [1] https://www.ovirt.org/community/about/community-guidelines/ > > And no I have done much more than empty complains as I have open my share >> a bug report, written a blog entry about using ovirt+kerberos+SSO, written >> a full fledged CLI and trying to finished the SDK. So I'm not just >> complaining. >> >> Le 9 mai 2017 à 22:28, James Michels a >> écrit : >> >> Seriously, what's wrong with you? I've being reading your comments for >> some time and the only things I see are whining, unproductive complaining >> and disrespectful comments. >> >> Please, stop it. There are tons of ways to say something, and the way you >> use is insulting for dozens of people developing this project. Most people >> here are seeking help and/or trying to be useful aiding others in what they >> can. If you don't like oVirt or you have that much complains maybe you >> should start your project yourself and do things like you consider they >> should be done. >> >> James >> >> 2017-05-09 15:59 GMT+01:00 Fabrice Bacchella > >: >> >>> The documentation is alway a good laugh at ovirt. Look for RHEL instead. 
>>> >>> Le 9 mai 2017 à 16:13, Juan Pablo a écrit : >>> >>> Team, Is just me or the documentation pages are not being updated ? many >>> are outdated.. how can we collaborate? >>> >>> whats up with http://www.ovirt.org/documentation/admin-guide/ ? >>> >>> regards, >>> JP >>> ___ >>> Users mailing list >>> Users@ovirt.org >>> http://lists.ovirt.org/mailman/listinfo/users >>> >>> >>> >>> ___ >>> Users mailing list >>> Users@ovirt.org >>> http://lists.ovirt.org/mailman/listinfo/users >>> >>> >> _
Re: [ovirt-users] do we need some documentation maintainers?
what do you guys think on having a pdf/ html in the main upper canvas "guide" button instead of pointing to http://www.ovirt.org/documentation/? so that way everyone has the correct information for the correct version regards, 2017-05-09 14:04 GMT-03:00 Jeff Burns : > > I've actually got the page up in Github and was about to make the quick > edit to "fix it", but the proper administration guide link is already under > "Primary Documentation". Then I noticed the link under "Additional > Documentation" appears to be a catch all for all directories/pages under > the admin guide directory, with the other links under "Additional > Documentation" being similar. (and yes, that isn't working right as Juan > pointed out). It looks like the Additional Documentation links are a > "Works as designed" more or less, but could definitely use some cleaning up > in general? I'd proceed as it looks like a quick cleanup, but I'm > uncertain on design considerations here and the intent of the structure. :) > > Cheers, > Jeff > > On Tue, May 9, 2017 at 10:20 AM, Barak Korren wrote: > >> On 9 May 2017 at 17:13, Juan Pablo wrote: >> > Team, Is just me or the documentation pages are not being updated ? >> many are >> > outdated.. how can we collaborate? >> >> Head over to GitHub: >> https://github.com/oVirt/ovirt-site >> >> > whats up with http://www.ovirt.org/documentation/admin-guide/ ? >> >> Looks like a bad link on the site. There seems to be an administration >> guide here: >> http://www.ovirt.org/documentation/admin-guide/administration-guide/ >> >> -- >> Barak Korren >> RHV DevOps team , RHCE, RHCi >> Red Hat EMEA >> redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted >> ___ >> Users mailing list >> Users@ovirt.org >> http://lists.ovirt.org/mailman/listinfo/users >> > > ___ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
[ovirt-users] do we need some documentation maintainers?
Team, is it just me or are the documentation pages not being updated? Many are outdated. How can we collaborate? What's up with http://www.ovirt.org/documentation/admin-guide/ ? regards, JP
Re: [ovirt-users] Lost our HostedEngineVM
What's the output of /var/log/ovirt-hosted-engine-ha/agent.log? regards 2017-03-21 18:58 GMT-03:00 Matt Emma : > We’re in a bit of a panic mode, so excuse any shortness. > > > > We had a storage failure. We rebooted a VMHost that had the hostedengine > VM - The HostedEngine did not try to move to the other hosts. We’ve since > restored storage and we are able to successfully restart the paused VMs. We > know the HostedEngine’s VM ID; is there a way we can force load it from the > mounted storage? > > > > -Matt > > ___ > Users mailing list > Users@ovirt.org > http://lists.ovirt.org/mailman/listinfo/users
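In this situation the usual first checks are the HA agent's view of the hosted engine and, once storage is back, a manual start. A minimal sketch using the standard hosted-engine tooling, to be run on a host in the hosted-engine cluster:

```shell
# Check the HA agent's view of the hosted-engine VM on each HE host.
hosted-engine --vm-status

# Inspect the HA agent and broker logs mentioned above for errors.
tail -n 50 /var/log/ovirt-hosted-engine-ha/agent.log
tail -n 50 /var/log/ovirt-hosted-engine-ha/broker.log

# If storage is restored and the agent looks healthy, try starting manually.
hosted-engine --vm-start
```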
Re: [ovirt-users] iscsi data domain when engine is down
Hi, what kind of setup do you have? The hosted engine only runs on NFS or Gluster, AFAIK. regards, 2017-03-10 15:22 GMT-03:00 Devin A. Bougie : > We have an ovirt 4.1 cluster with an iSCSI data domain. If I shut down > the entire cluster and just boot the hosts, none of the hosts login to > their iSCSI sessions until the engine comes up. Without logging into the > sessions, sanlock doesn't obtain any leases and obviously none of the VMs > start. > > I'm sure there's something I'm missing, as it looks like it should be > possible to run a hosted engine on a cluster using an iSCSI data domain. > > Any pointers or suggestions would be greatly appreciated. > > Many thanks, > Devin > ___ > Users mailing list > Users@ovirt.org > http://lists.ovirt.org/mailman/listinfo/users
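If the hosts need their iSCSI sessions before the engine is up, the logins can be forced by hand with iscsiadm. A hedged sketch: the portal address below is a placeholder, and note that oVirt normally manages these sessions itself once the engine is running.

```shell
# Discover targets on the storage portal (address is a placeholder).
iscsiadm -m discovery -t sendtargets -p 192.0.2.10:3260

# Log in to all discovered nodes so sanlock can obtain its leases.
iscsiadm -m node -l

# Verify the active sessions.
iscsiadm -m session
```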
Re: [ovirt-users] Replicated Glusterfs on top of ZFS
cd to inside the pool path then dd if=/dev/zero of=test.tt bs=1M leave it runing 5/10 minutes. do ctrl+c paste result here. etc. 2017-03-03 11:30 GMT-03:00 Arman Khalatyan : > No, I have one pool made of the one disk and ssd as a cache and log device. > I have 3 Glusterfs bricks- separate 3 hosts:Volume type Replicate > (Arbiter)= replica 2+1! > That how much you can push into compute nodes(they have only 3 disk slots). > > > On Fri, Mar 3, 2017 at 3:19 PM, Juan Pablo > wrote: > >> ok, you have 3 pools, zclei22, logs and cache, thats wrong. you should >> have 1 pool, with zlog+cache if you are looking for performance. >> also, dont mix drives. >> whats the performance issue you are facing? >> >> >> regards, >> >> 2017-03-03 11:00 GMT-03:00 Arman Khalatyan : >> >>> This is CentOS 7.3 ZoL version 0.6.5.9-1 >>> >>> [root@clei22 ~]# lsscsi >>> >>> [2:0:0:0]diskATA INTEL SSDSC2CW24 400i /dev/sda >>> >>> [3:0:0:0]diskATA HGST HUS724040AL AA70 /dev/sdb >>> >>> [4:0:0:0]diskATA WDC WD2002FYPS-0 1G01 /dev/sdc >>> >>> >>> [root@clei22 ~]# pvs ;vgs;lvs >>> >>> PV VGFmt >>> Attr PSize PFree >>> >>> /dev/mapper/INTEL_SSDSC2CW240A3_CVCV306302RP240CGN vg_cache lvm2 >>> a-- 223.57g 0 >>> >>> /dev/sdc2 centos_clei22 lvm2 >>> a--1.82t 64.00m >>> >>> VG#PV #LV #SN Attr VSize VFree >>> >>> centos_clei22 1 3 0 wz--n- 1.82t 64.00m >>> >>> vg_cache1 2 0 wz--n- 223.57g 0 >>> >>> LV VGAttr LSize Pool Origin Data% Meta% >>> Move Log Cpy%Sync Convert >>> >>> home centos_clei22 -wi-ao 1.74t >>> >>> >>> root centos_clei22 -wi-ao 50.00g >>> >>> >>> swap centos_clei22 -wi-ao 31.44g >>> >>> >>> lv_cache vg_cache -wi-ao 213.57g >>> >>> >>> lv_slog vg_cache -wi-ao 10.00g >>> >>> >>> [root@clei22 ~]# zpool status -v >>> >>> pool: zclei22 >>> >>> state: ONLINE >>> >>> scan: scrub repaired 0 in 0h0m with 0 errors on Tue Feb 28 14:16:07 >>> 2017 >>> >>> config: >>> >>> >>> NAMESTATE READ WRITE CKSUM >>> >>> zclei22 ONLINE 0 0 0 >>> >>> HGST_HUS724040ALA640_PN2334PBJ4SV6T1 ONLINE 0 0 0 >>> >>> logs >>> 
>>> lv_slog ONLINE 0 0 0 >>> >>> cache >>> >>> lv_cache ONLINE 0 0 0 >>> >>> >>> errors: No known data errors >>> >>> >>> *ZFS config:* >>> >>> [root@clei22 ~]# zfs get all zclei22/01 >>> >>> NAMEPROPERTY VALUE SOURCE >>> >>> zclei22/01 type filesystem - >>> >>> zclei22/01 creation Tue Feb 28 14:06 2017 - >>> >>> zclei22/01 used 389G - >>> >>> zclei22/01 available 3.13T - >>> >>> zclei22/01 referenced389G - >>> >>> zclei22/01 compressratio 1.01x - >>> >>> zclei22/01 mounted yes- >>> >>> zclei22/01 quota none default >>> >>> zclei22/01 reservation none default >>> >>> zclei22/01 recordsize128K local >>> >>> zclei22/01 mountpoint/zclei22/01default >>> >>> zclei22/01 sharenfs offdefault >>> >>> zclei22/01 checksum on default >>> >>> zclei22/01 compression offlocal >>> >>> zclei22/01 atime on default >>> >>> zclei22/01 devices on
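The raw-throughput check suggested earlier in this thread ("cd into the pool path, dd from /dev/zero") can be sketched as follows. The target directory is an assumption (point it inside your zpool); conv=fdatasync is added so the reported rate includes the flush to disk, and note that with compression enabled zeros compress away and inflate the number.

```shell
# Sequential-write smoke test, per the advice earlier in the thread.
# TEST_DIR should point inside the pool (e.g. /zclei22/01); /tmp is only
# a stand-in so the sketch runs anywhere. On a real pool, raise count
# (or let dd run for several minutes) to get past caches.
TEST_DIR="${TEST_DIR:-/tmp}"
dd if=/dev/zero of="$TEST_DIR/test.tt" bs=1M count=64 conv=fdatasync
# dd prints bytes copied and the MB/s rate on completion.
rm -f "$TEST_DIR/test.tt"
```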
Re: [ovirt-users] Replicated Glusterfs on top of ZFS
sedbysnapshots 0 - > > zclei22/01 usedbydataset 389G - > > zclei22/01 usedbychildren0 - > > zclei22/01 usedbyrefreservation 0 - > > zclei22/01 logbias latencydefault > > zclei22/01 dedup offdefault > > zclei22/01 mlslabel none default > > zclei22/01 sync disabled local > > zclei22/01 refcompressratio 1.01x - > > zclei22/01 written 389G - > > zclei22/01 logicalused 396G - > > zclei22/01 logicalreferenced 396G - > > zclei22/01 filesystem_limit none default > > zclei22/01 snapshot_limitnone default > > zclei22/01 filesystem_count none default > > zclei22/01 snapshot_countnone default > > zclei22/01 snapdev hidden default > > zclei22/01 acltype off default > > zclei22/01 context none default > > zclei22/01 fscontext none default > > zclei22/01 defcontextnone default > > zclei22/01 rootcontext none default > > zclei22/01 relatime offdefault > > zclei22/01 redundant_metadataalldefault > > zclei22/01 overlay offdefault > > > > > > On Fri, Mar 3, 2017 at 2:52 PM, Juan Pablo > wrote: > >> Which operating system version are you using for your zfs storage? >> do: >> zfs get all your-pool-name >> use arc_summary.py from freenas git repo if you wish. 
>> >> >> 2017-03-03 10:33 GMT-03:00 Arman Khalatyan : >> >>> Pool load: >>> [root@clei21 ~]# zpool iostat -v 1 >>>capacity operations >>> bandwidth >>> poolalloc free read write >>> read write >>> -- - - - - >>> - - >>> zclei21 10.1G 3.62T 0112 >>> 823 8.82M >>> HGST_HUS724040ALA640_PN2334PBJ52XWT1 10.1G 3.62T 0 46 >>> 626 4.40M >>> logs- - - - >>> - - >>> lv_slog225M 9.72G 0 66 >>> 198 4.45M >>> cache - - - - >>> - - >>> lv_cache 9.81G 204G 0 46 >>> 56 4.13M >>> -- - - - - >>> - - >>> >>>capacity operations >>> bandwidth >>> poolalloc free read write >>> read write >>> -- - - - - >>> - - >>> zclei21 10.1G 3.62T 0191 >>> 0 12.8M >>> HGST_HUS724040ALA640_PN2334PBJ52XWT1 10.1G 3.62T 0 0 >>> 0 0 >>> logs- - - - >>> - - >>> lv_slog225M 9.72G 0191 >>> 0 12.8M >>> cache - - - - >>> - - >>> lv_cache 9.83G 204G 0218 >>> 0 20.0M >>> -- - - - - >>> - - >>> >>>capacity operations >>> bandwidth >>> poolalloc free read write >>> read write >>> -- - - - - >>> - - >>> zclei21 10.1G 3.62T 0191 >>> 0 12.7M >>> HGST_HUS724040ALA640_PN2334PBJ52XWT1 10.1G 3.62T 0 0 >>> 0 0 >>> logs- - - - >>> - - >>> lv_slog
Re: [ovirt-users] Replicated Glusterfs on top of ZFS
Which operating system version are you using for your zfs storage? do: zfs get all your-pool-name use arc_summary.py from freenas git repo if you wish. 2017-03-03 10:33 GMT-03:00 Arman Khalatyan : > Pool load: > [root@clei21 ~]# zpool iostat -v 1 >capacity operations > bandwidth > poolalloc free read write > read write > -- - - - - > - - > zclei21 10.1G 3.62T 0112 > 823 8.82M > HGST_HUS724040ALA640_PN2334PBJ52XWT1 10.1G 3.62T 0 46 > 626 4.40M > logs- - - - > - - > lv_slog225M 9.72G 0 66 > 198 4.45M > cache - - - - > - - > lv_cache 9.81G 204G 0 46 > 56 4.13M > -- - - - - > - - > >capacity operations > bandwidth > poolalloc free read write > read write > -- - - - - > - - > zclei21 10.1G 3.62T 0191 > 0 12.8M > HGST_HUS724040ALA640_PN2334PBJ52XWT1 10.1G 3.62T 0 0 > 0 0 > logs- - - - > - - > lv_slog225M 9.72G 0191 > 0 12.8M > cache - - - - > - - > lv_cache 9.83G 204G 0218 > 0 20.0M > -- - - - - > - - > >capacity operations > bandwidth > poolalloc free read write > read write > -- - - - - > - - > zclei21 10.1G 3.62T 0191 > 0 12.7M > HGST_HUS724040ALA640_PN2334PBJ52XWT1 10.1G 3.62T 0 0 > 0 0 > logs- - - - > - - > lv_slog225M 9.72G 0191 > 0 12.7M > cache - - - - > - - > lv_cache 9.83G 204G 0 72 > 0 7.68M > -- - - - - > - - > > > On Fri, Mar 3, 2017 at 2:32 PM, Arman Khalatyan wrote: > >> Glusterfs now in healing mode: >> Receiver: >> [root@clei21 ~]# arcstat.py 1 >> time read miss miss% dmis dm% pmis pm% mmis mm% arcsz >> c >> 13:24:49 0 0 0 00 00 00 4.6G >> 31G >> 13:24:50 15480 5180 51 0080 51 4.6G >> 31G >> 13:24:51 17962 3462 34 0062 42 4.6G >> 31G >> 13:24:52 14868 4568 45 0068 45 4.6G >> 31G >> 13:24:53 14064 4564 45 0064 45 4.6G >> 31G >> 13:24:54 12448 3848 38 0048 38 4.6G >> 31G >> 13:24:55 15780 5080 50 0080 50 4.7G >> 31G >> 13:24:56 20268 3368 33 0068 41 4.7G >> 31G >> 13:24:57 12754 4254 42 0054 42 4.7G >> 31G >> 13:24:58 12650 3950 39 0050 39 4.7G >> 31G >> 13:24:59 11640 3440 34 0040 34 4.7G >> 31G >> >> >> Sender >> [root@clei22 ~]# arcstat.py 1 >> time 
read miss miss% dmis dm% pmis pm% mmis mm% arcsz >> c >> 13:28:37 8 2 25 2 25 00 2 25 468M >> 31G >> 13:28:38 1.2K 727 62 727 62 00 525 54 469M >> 31G >> 13:28:39 815 508 62 508 62 00 376 55 469M >> 31G >> 13:28:40 994 624 62 624 62 00 450 54 469M >> 31G >> 13:28:41 783 456 58 456 58 00 338 50 470M >> 31G >> 13:28:42 916 541 59 541 59 00 390 50 470M >> 31G >> 13:28:43 768 437 56 437 57 00 313 48 471M >> 31G >> 13:28:44 877 534 60 534 60 00 3
Re: [ovirt-users] Replicated Glusterfs on top of ZFS
hey, what are you using for zfs? get an arc status and show please 2017-03-02 9:57 GMT-03:00 Arman Khalatyan : > no, > ZFS itself is not on top of lvm. only ssd was spitted by lvm for slog(10G) > and cache (the rest) > but in any-case the ssd does not help much on glusterfs/ovirt load it has > almost 100% cache misses:( (terrible performance compare with nfs) > > > > > > On Thu, Mar 2, 2017 at 1:47 PM, FERNANDO FREDIANI < > fernando.fredi...@upx.com> wrote: > >> Am I understanding correctly, but you have Gluster on the top of ZFS >> which is on the top of LVM ? If so, why the usage of LVM was necessary ? I >> have ZFS with any need of LVM. >> >> Fernando >> >> On 02/03/2017 06:19, Arman Khalatyan wrote: >> >> Hi, >> I use 3 nodes with zfs and glusterfs. >> Are there any suggestions to optimize it? >> >> host zfs config 4TB-HDD+250GB-SSD: >> [root@clei22 ~]# zpool status >> pool: zclei22 >> state: ONLINE >> scan: scrub repaired 0 in 0h0m with 0 errors on Tue Feb 28 14:16:07 2017 >> config: >> >> NAMESTATE READ WRITE CKSUM >> zclei22 ONLINE 0 0 0 >> HGST_HUS724040ALA640_PN2334PBJ4SV6T1 ONLINE 0 0 0 >> logs >> lv_slog ONLINE 0 0 0 >> cache >> lv_cache ONLINE 0 0 0 >> >> errors: No known data errors >> >> Name: >> GluReplica >> Volume ID: >> ee686dfe-203a-4caa-a691-26353460cc48 >> Volume Type: >> Replicate (Arbiter) >> Replica Count: >> 2 + 1 >> Number of Bricks: >> 3 >> Transport Types: >> TCP, RDMA >> Maximum no of snapshots: >> 256 >> Capacity: >> 3.51 TiB total, 190.56 GiB used, 3.33 TiB free >> >> >> ___ >> Users mailing >> listUsers@ovirt.orghttp://lists.ovirt.org/mailman/listinfo/users >> >> >> >> ___ >> Users mailing list >> Users@ovirt.org >> http://lists.ovirt.org/mailman/listinfo/users >> >> > > ___ > Users mailing list > Users@ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > ___ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
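The layout advice given earlier in this thread (one pool, with the SSD split into a small SLOG and a larger cache device) can be sketched with zpool commands. All device paths below are placeholders for illustration; the pool and LV names mirror the ones quoted in the thread.

```shell
# One pool on the spinning disk (device path is a placeholder).
zpool create zclei22 /dev/disk/by-id/ata-HGST_EXAMPLE

# Attach the SSD partitions: a small SLOG for synchronous writes,
# the rest as L2ARC read cache.
zpool add zclei22 log /dev/vg_cache/lv_slog
zpool add zclei22 cache /dev/vg_cache/lv_cache

# Confirm the layout and watch per-vdev load.
zpool status zclei22
zpool iostat -v zclei22 1
```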
Re: [ovirt-users] How to update ovirt ng nodes
You can check whether this is still valid: http://www.ovirt.org/documentation/upgrade-guide/upgrade-guide/ If it's not, or is inaccurate, please provide feedback. regards, 2017-02-08 9:35 GMT-03:00 Yedidyah Bar David : > On Wed, Feb 8, 2017 at 2:26 PM, Massimo Mad wrote: > > Hi David, > > I try to use yum update but i have this error : > > yum update > > Loaded plugins: fastestmirror, imgbased-warning > > Warning: yum operations are not persisted across upgrades! > > The problem is that on the server i have this repository: > > ovirt-4.0.repo > > ovirt-4.0-dependencies.repo > > With this repository it's possible upgrade ovirt only only between minor > > releases for example from 4.0.1 to 4.0.6, i want to upgrade the host from > > 4.0 to 4.1. > > So please try adding 4.1 repos (install ovirt-release41) and then update. > -- > Didi > ___ > Users mailing list > Users@ovirt.org > http://lists.ovirt.org/mailman/listinfo/users
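Didi's suggestion in the quoted thread (add the 4.1 repos, then update) can be sketched as below. The release RPM URL follows the usual oVirt pattern, but verify it against the current download page before running.

```shell
# Install the oVirt 4.1 release package to get the 4.1 repositories,
# then update the host. URL follows the usual oVirt pattern; verify first.
yum install -y http://resources.ovirt.org/pub/yum-repo/ovirt-release41.rpm
yum update -y
```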
Re: [ovirt-users] "Deactivating Storage Domain" error
Hi Liron, Marry christmas! 1- [root@virt01-int ~]# vdsClient -s 0 getStorageDomainInfo e6a85203-32e0-4200-b3d7-83f140203294 uuid = e6a85203-32e0-4200-b3d7-83f140203294 vguuid = fDclFZ-KLel-t36P-VAFm-hyle-RxcU-uOc7gj state = OK version = 3 role = Master type = ISCSI class = Data pool = ['58405b2d-0094-0380-0260-00a5'] name = nas01 2- which file do you want me to share ? how do I perform task a and b you mention? is there a guide somewhere ? I appreciate your help, best regards, JP 2016-12-25 11:10 GMT-03:00 Liron Aravot : > On Fri, Dec 23, 2016 at 2:32 PM, Juan Pablo > wrote: > > So, what do you guys suggest? isn't there a hint or a way to fix this > issue? > > > > thanks for your help, > > JP > > > > Hi Juan, > Having a "running" task under the tasks tab doesn't mean there's a > task running on vdsm - there are engine "tasks" and vdsm tasks. > > 1. what is the current status of the domain listed in the task? > 2. do you see any print related to the operation in the engine log > file currently? (you can upload it so we'll be able to help you with > verifying that). > > After you verify the operation is indeed not running > You can delete that specific job from the jobs table: > a. select job_id from job where description like '%...%'; > b. delete from job where.. > > Let me know if that worked for you, > Liron. > > > > > 2016-12-20 13:35 GMT-03:00 Juan Pablo : > >> > >> Andrea, thanks for your reply, I already tried that , but > unfortunately, a > >> restart of the engine VM did not solve the error. > >> > >> best regards, > >> JP > >> > >> 2016-12-20 13:32 GMT-03:00 Andrea Ghelardi : > >>> > >>> I remember facing a similar issue during decommissioning of my Ovirt 3 > >>> environment. > >>> > >>> Several tasks showed on GUI (and storage operations unavailable) but no > >>> tasks on command line. > >>> > >>> Eventually restarting the engine VM cleaned up the situation and made > the > >>> storage avail again. 
> >>> > >>> > >>> > >>> Cheers > >>> > >>> AG > >>> > >>> > >>> > >>> From: users-boun...@ovirt.org [mailto:users-boun...@ovirt.org] On > Behalf > >>> Of Elad Ben Aharon > >>> Sent: Tuesday, December 20, 2016 5:24 PM > >>> To: Juan Pablo > >>> Cc: users > >>> Subject: Re: [ovirt-users] "Deactivating Storage Domain" error > >>> > >>> > >>> > >>> Do you have a storage domain in the DC in status 'locked'? > >>> > >>> > >>> > >>> On Tue, Dec 20, 2016 at 6:09 PM, Juan Pablo > > >>> wrote: > >>> > >>> Elad, I did that and running the commands on the SPM returned : > >>> > >>> [root@virt01-int ~]# vdsClient -s 0 getAllTasks > >>> > >>> [root@virt01-int ~]# vdsClient -s 0 getSpmStatus `vdsClient -s 0 > >>> getConnectedStoragePoolsList` > >>> spmId = 1 > >>> spmStatus = SPM > >>> spmLver = 8 > >>> > >>> [root@virt01-int ~]# vdsClient -s 0 getAllTasks > >>> > >>> [root@virt01-int ~]# > >>> > >>> thanks for your time, > >>> > >>> JP > >>> > >>> > >>> > >>> 2016-12-20 12:55 GMT-03:00 Elad Ben Aharon : > >>> > >>> It's OK, but you need to make sure you're checking it on the host which > >>> has the SPM role. So, first check on the host that it is the current > SPM > >>> with the following: > >>> > >>> vdsClient -s 0 getSpmStatus `vdsClient -s 0 > getConnectedStoragePoolsList` > >>> > >>> Look for the host that has " spmStatus = SPM" > >>> > >>> And after you find the SPM, check for the running tasks as you did > >>> before. > >>> > >>> > >>> > >>> > >>> > >>> On Tue, Dec 20, 2016 at 5:37 PM, Juan Pablo > > >>> wrote: > >>> > >>> Hi, > >>> > >>> thanks for your reply, I did check with vdsClient -s 0 getAllTasks , > >>> returned nothing. is this still the correct way to check? >
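Liron's two-step cleanup described above (find the stale job row by its description, then delete it) can be sketched against the engine database. Run this on the engine machine; the description filter is an assumption, so match it to the task shown in the UI, and back up the database first.

```shell
# On the engine host, as the postgres user. Back up first with
# engine-backup --mode=backup before deleting any rows.
sudo -u postgres psql engine -c \
  "SELECT job_id, description FROM job WHERE description ILIKE '%Deactivating Storage Domain%';"

# After confirming the job_id belongs to the stale task:
sudo -u postgres psql engine -c \
  "DELETE FROM job WHERE job_id = '<job_id-from-previous-query>';"
```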
Re: [ovirt-users] "Deactivating Storage Domain" error
So, what do you guys suggest? isn't there a hint or a way to fix this issue? thanks for your help, JP 2016-12-20 13:35 GMT-03:00 Juan Pablo : > Andrea, thanks for your reply, I already tried that , but unfortunately, a > restart of the engine VM did not solve the error. > > best regards, > JP > > 2016-12-20 13:32 GMT-03:00 Andrea Ghelardi : > >> I remember facing a similar issue during decommissioning of my Ovirt 3 >> environment. >> >> Several tasks showed on GUI (and storage operations unavailable) but no >> tasks on command line. >> >> Eventually restarting the engine VM cleaned up the situation and made the >> storage avail again. >> >> >> >> Cheers >> >> AG >> >> >> >> *From:* users-boun...@ovirt.org [mailto:users-boun...@ovirt.org] *On >> Behalf Of *Elad Ben Aharon >> *Sent:* Tuesday, December 20, 2016 5:24 PM >> *To:* Juan Pablo >> *Cc:* users >> *Subject:* Re: [ovirt-users] "Deactivating Storage Domain" error >> >> >> >> Do you have a storage domain in the DC in status 'locked'? >> >> >> >> On Tue, Dec 20, 2016 at 6:09 PM, Juan Pablo >> wrote: >> >> Elad, I did that and running the commands on the SPM returned : >> >> [root@virt01-int ~]# vdsClient -s 0 getAllTasks >> >> [root@virt01-int ~]# vdsClient -s 0 getSpmStatus `vdsClient -s 0 >> getConnectedStoragePoolsList` >> spmId = 1 >> spmStatus = SPM >> spmLver = 8 >> >> [root@virt01-int ~]# vdsClient -s 0 getAllTasks >> >> [root@virt01-int ~]# >> >> thanks for your time, >> >> JP >> >> >> >> 2016-12-20 12:55 GMT-03:00 Elad Ben Aharon : >> >> It's OK, but you need to make sure you're checking it on the host which >> has the SPM role. So, first check on the host that it is the current SPM >> with the following: >> >> vdsClient -s 0 getSpmStatus `vdsClient -s 0 getConnectedStoragePoolsList` >> >> Look for the host that has " spmStatus = SPM" >> >> And after you find the SPM, check for the running tasks as you did before. 
>> >> >> >> >> >> On Tue, Dec 20, 2016 at 5:37 PM, Juan Pablo >> wrote: >> >> Hi, >> >> thanks for your reply, I did check with vdsClient -s 0 getAllTasks , >> returned nothing. is this still the correct way to check? >> >> regards, >> >> JP >> >> >> >> >> >> 2016-12-20 12:32 GMT-03:00 Elad Ben Aharon : >> >> Hi, >> >> >> >> Did you check for running tasks in the SPM host? >> >> >> >> On Tue, Dec 20, 2016 at 5:04 PM, Juan Pablo >> wrote: >> >> Hi, >> >> first of all , thanks to all the Ovirt/Rhev team for the outstanding >> work! >> >> >> we are having a small issue with Ovirt 4.0.5 after testing a full end of >> year infrastructure shutdown, everything came back correctly except that we >> get a 'Deactivating Storage Domain' under the tasks tab. >> >> another dc/cluster running 3.6.7 reported no error with same maintenance >> procedure.. maybe we did something wrong? >> >> would you please be so kind to point me on the right direction to fix it? >> I looked >> >> vdsClient -s 0 getAllTasks , but returns nothing... >> >> thanks for your time guys and merry christmas and happy new year if we >> dont talk again soon! >> >> JP >> >> >> >> ___ >> Users mailing list >> Users@ovirt.org >> http://lists.ovirt.org/mailman/listinfo/users >> >> >> >> >> >> >> >> >> >> >> > > ___ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
Re: [ovirt-users] "Deactivating Storage Domain" error
Andrea, thanks for your reply, I already tried that , but unfortunately, a restart of the engine VM did not solve the error. best regards, JP 2016-12-20 13:32 GMT-03:00 Andrea Ghelardi : > I remember facing a similar issue during decommissioning of my Ovirt 3 > environment. > > Several tasks showed on GUI (and storage operations unavailable) but no > tasks on command line. > > Eventually restarting the engine VM cleaned up the situation and made the > storage avail again. > > > > Cheers > > AG > > > > *From:* users-boun...@ovirt.org [mailto:users-boun...@ovirt.org] *On > Behalf Of *Elad Ben Aharon > *Sent:* Tuesday, December 20, 2016 5:24 PM > *To:* Juan Pablo > *Cc:* users > *Subject:* Re: [ovirt-users] "Deactivating Storage Domain" error > > > > Do you have a storage domain in the DC in status 'locked'? > > > > On Tue, Dec 20, 2016 at 6:09 PM, Juan Pablo > wrote: > > Elad, I did that and running the commands on the SPM returned : > > [root@virt01-int ~]# vdsClient -s 0 getAllTasks > > [root@virt01-int ~]# vdsClient -s 0 getSpmStatus `vdsClient -s 0 > getConnectedStoragePoolsList` > spmId = 1 > spmStatus = SPM > spmLver = 8 > > [root@virt01-int ~]# vdsClient -s 0 getAllTasks > > [root@virt01-int ~]# > > thanks for your time, > > JP > > > > 2016-12-20 12:55 GMT-03:00 Elad Ben Aharon : > > It's OK, but you need to make sure you're checking it on the host which > has the SPM role. So, first check on the host that it is the current SPM > with the following: > > vdsClient -s 0 getSpmStatus `vdsClient -s 0 getConnectedStoragePoolsList` > > Look for the host that has " spmStatus = SPM" > > And after you find the SPM, check for the running tasks as you did before. > > > > > > On Tue, Dec 20, 2016 at 5:37 PM, Juan Pablo > wrote: > > Hi, > > thanks for your reply, I did check with vdsClient -s 0 getAllTasks , > returned nothing. is this still the correct way to check? 
> > regards, > > JP > > > > > > 2016-12-20 12:32 GMT-03:00 Elad Ben Aharon : > > Hi, > > > > Did you check for running tasks in the SPM host? > > > > On Tue, Dec 20, 2016 at 5:04 PM, Juan Pablo > wrote: > > Hi, > > first of all , thanks to all the Ovirt/Rhev team for the outstanding work! > > > we are having a small issue with Ovirt 4.0.5 after testing a full end of > year infrastructure shutdown, everything came back correctly except that we > get a 'Deactivating Storage Domain' under the tasks tab. > > another dc/cluster running 3.6.7 reported no error with same maintenance > procedure.. maybe we did something wrong? > > would you please be so kind to point me on the right direction to fix it? > I looked > > vdsClient -s 0 getAllTasks , but returns nothing... > > thanks for your time guys and merry christmas and happy new year if we > dont talk again soon! > > JP > > > > ___ > Users mailing list > Users@ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > > > > > > > > > > ___ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
Re: [ovirt-users] "Deactivating Storage Domain" error
I don't, attaching screenshot. Also, output from getStorageDomainInfo: [root@virt01-int ~]# vdsClient -s 0 getStorageDomainsList e6a85203-32e0-4200-b3d7-83f140203294 6b031086-1b42-4066-a7a0-c481ef16e15e 31910316-4978-4101-ba47-adb8362e3de1 04ceb9ff-88c5-4a91-877b-0a4f10703e3c [root@virt01-int ~]# vdsClient -s 0 getStorageDomainStats e6a85203-32e0-4200-b3d7-83f140203294 disktotal = 3757693730816 (3.0TB) diskfree = 1264330997760 (1.0TB) [root@virt01-int ~]# vdsClient -s 0 getStorageDomainInfo e6a85203-32e0-4200-b3d7-83f140203294 uuid = e6a85203-32e0-4200-b3d7-83f140203294 vguuid = fDclFZ-KLel-t36P-VAFm-hyle-RxcU-uOc7gj state = OK version = 3 role = Master type = ISCSI class = Data pool = ['58405b2d-0094-0380-0260-00a5'] name = nas01 [root@virt01-int ~]# vdsClient -s 0 getStorageDomainInfo 6b031086-1b42-4066-a7a0-c481ef16e15e uuid = 6b031086-1b42-4066-a7a0-c481ef16e15e version = 3 role = Regular remotePath = 10.40.255.153:/mnt/pool00nas01/HEstorage type = NFS class = Data pool = ['58405b2d-0094-0380-0260-00a5'] name = hosted_storage [root@virt01-int ~]# vdsClient -s 0 getStorageDomainInfo 31910316-4978-4101-ba47-adb8362e3de1 uuid = 31910316-4978-4101-ba47-adb8362e3de1 version = 0 role = Regular remotePath = 10.40.255.153:/mnt/pool00nas01/isos type = NFS class = Iso pool = ['58405b2d-0094-0380-0260-00a5'] name = isos [root@virt01-int ~]# vdsClient -s 0 getStorageDomainInfo 04ceb9ff-88c5-4a91-877b-0a4f10703e3c uuid = 04ceb9ff-88c5-4a91-877b-0a4f10703e3c version = 0 role = Regular remotePath = 10.40.255.250:/mnt/pool02nas00/backups type = NFS class = Backup pool = ['58405b2d-0094-0380-0260-00a5'] name = backups thanks again, JP 2016-12-20 13:24 GMT-03:00 Elad Ben Aharon : > Do you have a storage domain in the DC in status 'locked'? 
> > On Tue, Dec 20, 2016 at 6:09 PM, Juan Pablo > wrote: > >> Elad, I did that and running the commands on the SPM returned : >> >> [root@virt01-int ~]# vdsClient -s 0 getAllTasks >> >> [root@virt01-int ~]# vdsClient -s 0 getSpmStatus `vdsClient -s 0 >> getConnectedStoragePoolsList` >> spmId = 1 >> spmStatus = SPM >> spmLver = 8 >> >> [root@virt01-int ~]# vdsClient -s 0 getAllTasks >> >> [root@virt01-int ~]# >> >> >> thanks for your time, >> JP >> >> 2016-12-20 12:55 GMT-03:00 Elad Ben Aharon : >> >>> It's OK, but you need to make sure you're checking it on the host which >>> has the SPM role. So, first check on the host that it is the current SPM >>> with the following: >>> >>> vdsClient -s 0 getSpmStatus `vdsClient -s 0 getConnectedStoragePoolsList` >>> >>> Look for the host that has " spmStatus = SPM" >>> >>> And after you find the SPM, check for the running tasks as you did >>> before. >>> >>> >>> On Tue, Dec 20, 2016 at 5:37 PM, Juan Pablo >>> wrote: >>> >>>> Hi, >>>> thanks for your reply, I did check with vdsClient -s 0 getAllTasks , >>>> returned nothing. is this still the correct way to check? >>>> >>>> regards, >>>> JP >>>> >>>> >>>> >>>> 2016-12-20 12:32 GMT-03:00 Elad Ben Aharon : >>>> >>>>> Hi, >>>>> >>>>> Did you check for running tasks in the SPM host? >>>>> >>>>> >>>>> >>>>> On Tue, Dec 20, 2016 at 5:04 PM, Juan Pablo >>>> > wrote: >>>>> >>>>>> Hi, >>>>>> first of all , thanks to all the Ovirt/Rhev team for the outstanding >>>>>> work! >>>>>> >>>>>> we are having a small issue with Ovirt 4.0.5 after testing a full end >>>>>> of year infrastructure shutdown, everything came back correctly except >>>>>> that >>>>>> we get a 'Deactivating Storage Domain' under the tasks tab. >>>>>> another dc/cluster running 3.6.7 reported no error with same >>>>>> maintenance procedure.. maybe we did something wrong? >>>>>> >>>>>> would you please be so kind to point me on the right direction to fix >>>>>> it? 
I looked >>>>>> >>>>>> vdsClient -s 0 getAllTasks , but returns nothing... >>>>>> >>>>>> thanks for your time guys and merry christmas and happy new year if >>>>>> we dont talk again soon! >>>>>> JP >>>>>> >>>>>> ___ >>>>>> Users mailing list >>>>>> Users@ovirt.org >>>>>> http://lists.ovirt.org/mailman/listinfo/users >>>>>> >>>>>> >>>>> >>>> >>> >> > ___ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
Re: [ovirt-users] "Deactivating Storage Domain" error
Elad, I did that and running the commands on the SPM returned : [root@virt01-int ~]# vdsClient -s 0 getAllTasks [root@virt01-int ~]# vdsClient -s 0 getSpmStatus `vdsClient -s 0 getConnectedStoragePoolsList` spmId = 1 spmStatus = SPM spmLver = 8 [root@virt01-int ~]# vdsClient -s 0 getAllTasks [root@virt01-int ~]# thanks for your time, JP 2016-12-20 12:55 GMT-03:00 Elad Ben Aharon : > It's OK, but you need to make sure you're checking it on the host which > has the SPM role. So, first check on the host that it is the current SPM > with the following: > > vdsClient -s 0 getSpmStatus `vdsClient -s 0 getConnectedStoragePoolsList` > > Look for the host that has " spmStatus = SPM" > > And after you find the SPM, check for the running tasks as you did before. > > > On Tue, Dec 20, 2016 at 5:37 PM, Juan Pablo > wrote: > >> Hi, >> thanks for your reply, I did check with vdsClient -s 0 getAllTasks , >> returned nothing. is this still the correct way to check? >> >> regards, >> JP >> >> >> >> 2016-12-20 12:32 GMT-03:00 Elad Ben Aharon : >> >>> Hi, >>> >>> Did you check for running tasks in the SPM host? >>> >>> >>> >>> On Tue, Dec 20, 2016 at 5:04 PM, Juan Pablo >>> wrote: >>> >>>> Hi, >>>> first of all , thanks to all the Ovirt/Rhev team for the outstanding >>>> work! >>>> >>>> we are having a small issue with Ovirt 4.0.5 after testing a full end >>>> of year infrastructure shutdown, everything came back correctly except that >>>> we get a 'Deactivating Storage Domain' under the tasks tab. >>>> another dc/cluster running 3.6.7 reported no error with same >>>> maintenance procedure.. maybe we did something wrong? >>>> >>>> would you please be so kind to point me on the right direction to fix >>>> it? I looked >>>> >>>> vdsClient -s 0 getAllTasks , but returns nothing... >>>> >>>> thanks for your time guys and merry christmas and happy new year if we >>>> dont talk again soon! 
>>>> JP >>>> >>>> ___ >>>> Users mailing list >>>> Users@ovirt.org >>>> http://lists.ovirt.org/mailman/listinfo/users >>>> >>>> >>> >> > ___ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
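The check Elad describes in this thread — run getSpmStatus on each host and look for "spmStatus = SPM" — is easy to script. A minimal sketch, with the status output quoted above standing in for a live `vdsClient` call, and `is_spm` being an invented helper name:

```shell
# The status output quoted in the thread, standing in for a live
# `vdsClient -s 0 getSpmStatus <pool-id>` call.
sample_output=$(cat <<'EOF'
spmId = 1
spmStatus = SPM
spmLver = 8
EOF
)

# is_spm is an invented helper: succeeds when the dump shows the SPM role.
is_spm() {
    printf '%s\n' "$1" | grep -q 'spmStatus = SPM'
}

if is_spm "$sample_output"; then
    echo "this host holds the SPM role"
else
    echo "not the SPM"
fi
```

Run this against each host's output; the host reporting `spmStatus = SPM` is the one where `getAllTasks` should be checked.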
Re: [ovirt-users] "Deactivating Storage Domain" error
Hi, thanks for your reply, I did check with vdsClient -s 0 getAllTasks , returned nothing. is this still the correct way to check? regards, JP 2016-12-20 12:32 GMT-03:00 Elad Ben Aharon : > Hi, > > Did you check for running tasks in the SPM host? > > > > On Tue, Dec 20, 2016 at 5:04 PM, Juan Pablo > wrote: > >> Hi, >> first of all , thanks to all the Ovirt/Rhev team for the outstanding >> work! >> >> we are having a small issue with Ovirt 4.0.5 after testing a full end of >> year infrastructure shutdown, everything came back correctly except that we >> get a 'Deactivating Storage Domain' under the tasks tab. >> another dc/cluster running 3.6.7 reported no error with same maintenance >> procedure.. maybe we did something wrong? >> >> would you please be so kind to point me on the right direction to fix it? >> I looked >> >> vdsClient -s 0 getAllTasks , but returns nothing... >> >> thanks for your time guys and merry christmas and happy new year if we >> dont talk again soon! >> JP >> >> ___ >> Users mailing list >> Users@ovirt.org >> http://lists.ovirt.org/mailman/listinfo/users >> >> > ___ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
[ovirt-users] "Deactivating Storage Domain" error
Hi, first of all, thanks to all the oVirt/RHEV team for the outstanding work! We are having a small issue with oVirt 4.0.5 after testing a full end-of-year infrastructure shutdown: everything came back correctly except that we get a 'Deactivating Storage Domain' task stuck under the tasks tab. Another DC/cluster running 3.6.7 reported no error with the same maintenance procedure... maybe we did something wrong? Would you please be so kind as to point me in the right direction to fix it? I ran vdsClient -s 0 getAllTasks, but it returns nothing... Thanks for your time, guys, and merry Christmas and happy New Year if we don't talk again soon! JP ___ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
Re: [ovirt-users] Hosted Engine suddenly reboots
thanks a lot for your help! 2016-12-13 12:07 GMT-03:00 Yedidyah Bar David : > On Tue, Dec 13, 2016 at 4:58 PM, Juan Pablo > wrote: > > thanks for pointing me on the right direction , I have this line a > couple of > > minutes before the vm restart > > ":states::128::ovirt_hosted_engine_ha.agent.hosted_engine. > HostedEngine::(score) > > Penalizing score by 1600 due to gateway status" > > so looks like this is causing: > > states::413::ovirt_hosted_engine_ha.agent.hosted_engine. > HostedEngine::(consume) > > Host virt01-int..xx (id 1) score is significantly better than > local > > score, shutting down VM on this host > > is this a network related issue? hosted engine and hosts are on the same > > vlan, does a gateway check should be triggering a hosted engine shutdown? > > Seems so. > > ping to the gateway is an important test, because if it fails it might > mean a split-brain. > When you are asked about a 'gateway address', it's actually used only for > that. > It does not need to be your gateway, but it does need to be a very > reliable thing that should always reply. > > Best, > > > > > > > thanks! > > JP > > > > 2016-12-13 11:37 GMT-03:00 Yedidyah Bar David : > >> > >> On Tue, Dec 13, 2016 at 4:34 PM, Juan Pablo > >> wrote: > >> > Hi guys, > >> > I have ovirt 4.0.5 with 3 hosts and 1 storage setup, using iscsi for > >> > data > >> > and nfs for hosted engine storage. > >> > storage network is on a private vlan. > >> > sometimes I see ETL service stopped / ETL service started in the > events > >> > log, > >> > side by side with a hosted engine stop/start... > >> > also, sometimes I get kicked out of the admin portal with no reason > >> > I had another issue which was related to > >> > https://bugzilla.redhat.com/show_bug.cgi?id=1349829 but looks like > it's > >> > harmless so maybe Im not seeing the problem. > >> > > >> > can you please guide me on finding the issue here? > >> > >> You should start by checking: /var/log/ovirt-hosted-engine- > ha/agent.log. 
> >> > >> Best, > >> > >> > > >> > best regards, > >> > JP > >> > > >> > ___ > >> > Users mailing list > >> > Users@ovirt.org > >> > http://lists.phx.ovirt.org/mailman/listinfo/users > >> > > >> > >> > >> > >> -- > >> Didi > > > > > > > > -- > Didi > ___ Users mailing list Users@ovirt.org http://lists.phx.ovirt.org/mailman/listinfo/users
Re: [ovirt-users] Hosted Engine suddenly reboots
Thanks for pointing me in the right direction. I have this line a couple of minutes before the VM restart: ":states::128::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(score) Penalizing score by 1600 due to gateway status" so it looks like this is what causes: states::413::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(consume) Host virt01-int..xx (id 1) score is significantly better than local score, shutting down VM on this host Is this a network-related issue? The hosted engine and the hosts are on the same VLAN; should a gateway check be triggering a hosted engine shutdown? thanks! JP 2016-12-13 11:37 GMT-03:00 Yedidyah Bar David : > On Tue, Dec 13, 2016 at 4:34 PM, Juan Pablo > wrote: > > Hi guys, > > I have ovirt 4.0.5 with 3 hosts and 1 storage setup, using iscsi for data > > and nfs for hosted engine storage. > > storage network is on a private vlan. > > sometimes I see ETL service stopped / ETL service started in the events > log, > > side by side with a hosted engine stop/start... > > also, sometimes I get kicked out of the admin portal with no reason > > I had another issue which was related to > > https://bugzilla.redhat.com/show_bug.cgi?id=1349829 but looks like it's > > harmless so maybe Im not seeing the problem. > > > > can you please guide me on finding the issue here? > > You should start by checking: /var/log/ovirt-hosted-engine-ha/agent.log. > > Best, > > > > > best regards, > > JP > > > > ___ > > Users mailing list > > Users@ovirt.org > > http://lists.phx.ovirt.org/mailman/listinfo/users > > > > > > -- > Didi > ___ Users mailing list Users@ovirt.org http://lists.phx.ovirt.org/mailman/listinfo/users
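The agent.log lines quoted above can be counted to see how often the gateway penalty fires before a shutdown. A minimal sketch, using those quoted lines as a stand-in for the real /var/log/ovirt-hosted-engine-ha/agent.log:

```shell
# Two lines captured from the thread stand in for the live agent.log.
agent_log=$(cat <<'EOF'
states::128::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(score) Penalizing score by 1600 due to gateway status
states::413::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(consume) Host virt01-int (id 1) score is significantly better than local score, shutting down VM on this host
EOF
)

# Count gateway-related score penalties; a burst of these just before a
# "shutting down VM" line points at the gateway ping test.
penalties=$(printf '%s\n' "$agent_log" | grep -c 'Penalizing score by [0-9]* due to gateway')
echo "gateway penalties seen: $penalties"
```

Against the real log, replace the heredoc with `grep -c ... /var/log/ovirt-hosted-engine-ha/agent.log` on each host.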
[ovirt-users] Hosted Engine suddenly reboots
Hi guys, I have oVirt 4.0.5 with 3 hosts and 1 storage setup, using iSCSI for data and NFS for hosted engine storage. The storage network is on a private VLAN. Sometimes I see 'ETL service stopped' / 'ETL service started' in the events log, side by side with a hosted engine stop/start... Also, sometimes I get kicked out of the admin portal for no reason. I had another issue which was related to https://bugzilla.redhat.com/show_bug.cgi?id=1349829 but it looks harmless, so maybe I'm not seeing the problem. Can you please guide me on finding the issue here? best regards, JP ___ Users mailing list Users@ovirt.org http://lists.phx.ovirt.org/mailman/listinfo/users
Re: [ovirt-users] Help to setup ovirt release 40
why are you still using centos 6? IIRC centos 6 is not supported anymore as vdsm related problems. you have manual here : http://www.ovirt.org/documentation/quickstart/quickstart-guide/ you can use guidance from http://www.ovirt.org/blog/2016/08/up-and-running-with-ovirt-4-0-and-gluster-storage/ to have an idea as well.. JP 2016-12-13 10:13 GMT-03:00 TranceWorldLogic . : > Hi, > > I have tried below steps to setup ovirt-engine on centos 6 os. > Please help me to install ovirt engine on my machine. > > Steps: > 1> Install "ovirt relase 40 rpm" > Download from http://plain.resources.ovirt.org/pub/yum-repo/ovirt- > release40.rpm > > 2> executed command "postgresql-setup initdb" > --- output --- > Initializing database ... OK > -- > > 3> executed command "systemctl restart postgresql.service" > > 4> executed command " engine-setup --log=/tmp/engine-setup.log" > --- output --- > # engine-setup --log=/tmp/engine-setup.log > [ INFO ] Stage: Initializing > [ INFO ] Stage: Environment setup > Configuration files: ['/etc/ovirt-engine-setup. > conf.d/10-packaging-jboss.conf', > '/etc/ovirt-engine-setup.conf.d/10-packaging.conf', > '/etc/ovirt-engine-setup.conf.d/20-setup-ovirt-post.conf'] > Log file: /tmp/engine-setup.log > Version: otopi-1.5.2 (otopi-1.5.2-1.el7.centos) > [ ERROR ] Failed to execute stage 'Environment setup': Cannot connect to > Engine database using existing credentials: engine@localhost:5432 > [ INFO ] Stage: Clean up > Log file is located at /tmp/engine-setup.log > [ INFO ] Generating answer file '/var/lib/ovirt-engine/setup/ > answers/20161213080658-setup.conf' > [ INFO ] Stage: Pre-termination > [ INFO ] Stage: Termination > [ ERROR ] Execution of setup failed > -- > > I have attached log file to this mail. > > Thanks, > ~Rohit > > ___ > Users mailing list > Users@ovirt.org > http://lists.phx.ovirt.org/mailman/listinfo/users > > ___ Users mailing list Users@ovirt.org http://lists.phx.ovirt.org/mailman/listinfo/users
Re: [ovirt-users] HE working fine, but still strange messages in /var/log/messages
looks like it is... thank you sir! 2016-12-06 13:20 GMT-03:00 Yanir Quinn : > Hi Juan, > > You probably see these repeating messages since ovirt-ha-agent is opening > and closing multiple connections. > > see BZ: > > https://bugzilla.redhat.com/show_bug.cgi?id=1349829 > > This is probably harmless in the overall picture. > > Regards > Yanir Quinn > > On Tue, Dec 6, 2016 at 5:38 PM, Juan Pablo > wrote: > >> Hi, >> first of all, thanks to all the community! >> I have Hosted Engine working with no apparent issues I just have the >> followiing filling my log messages >> : >> ovirt-ha-broker: INFO:ovirt_hosted_engine_ha.br >> oker.listener.ConnectionHandler:Connection closed >> ovirt-ha-broker: INFO:ovirt_hosted_engine_ha.br >> oker.listener.ConnectionHandler:Connection established >> journal: vdsm vds.dispatcher ERROR SSL error during reading data: >> unexpected eof >> ovirt-ha-agent: INFO:ovirt_hosted_engine_ha.li >> b.storage_server.StorageServer:Connecting storage server >> ovirt-ha-agent: INFO:ovirt_hosted_engine_ha.li >> b.storage_server.StorageServer:Refreshing the storage domain >> ovirt-ha-agent: INFO:ovirt_hosted_engine_ha.ag >> ent.hosted_engine.HostedEngine:Preparing images >> >> how can I fix this? >> I deployed the HE with cockpit, then deployed the next hosts with hosted >> engine via UI. >> >> regards, >> >> >> ___ >> Users mailing list >> Users@ovirt.org >> http://lists.ovirt.org/mailman/listinfo/users >> >> > ___ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
[ovirt-users] HE working fine, but still strange messages in /var/log/messages
Hi, first of all, thanks to all the community! I have Hosted Engine working with no apparent issues; I just have the following filling my log messages: ovirt-ha-broker: INFO:ovirt_hosted_engine_ha.broker.listener.ConnectionHandler:Connection closed ovirt-ha-broker: INFO:ovirt_hosted_engine_ha.broker.listener.ConnectionHandler:Connection established journal: vdsm vds.dispatcher ERROR SSL error during reading data: unexpected eof ovirt-ha-agent: INFO:ovirt_hosted_engine_ha.lib.storage_server.StorageServer:Connecting storage server ovirt-ha-agent: INFO:ovirt_hosted_engine_ha.lib.storage_server.StorageServer:Refreshing the storage domain ovirt-ha-agent: INFO:ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine:Preparing images How can I fix this? I deployed the HE with Cockpit, then deployed the next hosts with hosted engine via the UI. regards, ___ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
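While those INFO lines remain (the linked BZ calls them harmless), one way to keep real errors visible is to filter the routine ha-agent/ha-broker chatter out when reading the log. A minimal sketch, with the lines quoted above standing in for the live /var/log/messages:

```shell
# Sample lines from the mail stand in for /var/log/messages.
messages=$(cat <<'EOF'
ovirt-ha-broker: INFO:ovirt_hosted_engine_ha.broker.listener.ConnectionHandler:Connection closed
ovirt-ha-broker: INFO:ovirt_hosted_engine_ha.broker.listener.ConnectionHandler:Connection established
journal: vdsm vds.dispatcher ERROR SSL error during reading data: unexpected eof
ovirt-ha-agent: INFO:ovirt_hosted_engine_ha.lib.storage_server.StorageServer:Connecting storage server
EOF
)

# Drop the repetitive broker/agent INFO lines so genuine errors stand out.
kept=$(printf '%s\n' "$messages" | grep -Ev 'ovirt-ha-(broker|agent): INFO')
echo "$kept"
```

Against the live system, the same filter works as `grep -Ev 'ovirt-ha-(broker|agent): INFO' /var/log/messages`.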
Re: [ovirt-users] shutdown and kernel panic
Same issue here. Now that I read your post, I guess it's an HP bug 'somehow' (to blame someone). Maybe that's why oVirt asks for fencing interfaces such as iLO/IMM/IPMI: to hard-reboot the server in case there's an issue like this. Just my 2c. 2016-11-22 13:22 GMT-03:00 Luigi Fanton : > Hello to all, > I'm just playing with oVirt 4, installed on an HP server with CentOS 7, and a > virtual machine as hosted engine. > I have a lot of problems with the server shutdown! > The server doesn't power off and reboots after some "kernel panic" > errors. > > To turn off the oVirt server: > 1) Put oVirt in global maintenance (hosted-engine --set-maintenance > --mode=global) > 2) Shut down the hosted engine and check vm-status > 3) Finally, shut down the server and wait ... wait ... wait > > What's wrong?! -_- > > best regards > Luigi F. > > ___ > Users mailing list > Users@ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > ___ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
Re: [ovirt-users] Kill a failed VM import
Chris, which version are you using? I have the same issue running 4.0.4, importing a previously exported VM from an export domain. I have tried: vdsClient -s 0 getAllTasksStatuses, found the task, did a stopTask, and then let the engine clear the DB. No luck: the task is not shown anymore with vdsClient, but it remains in the engine web interface. I guess it's a database error... ideas? 2016-10-26 13:31 GMT-03:00 Chris Cowley : > Hello all > > I have just imported a physical machine into my export datastore using > virt-p2v. While I was importing it, my hosted-engine died (VM went into > paused state) during the disk-copy stage. > > Now, that half-imported VM is in the status "image locked" and I am unable > to do anything with it. The disk image does not seem to be in my > master data store, so I think I can safely delete the VM and re-import it. > However, that is not available in the web GUI. How can I work around that > and force the deletion? > > > > ___ > Users mailing list > Users@ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > ___ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
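The getAllTasksStatuses/stopTask dance mentioned above can be scripted: parse the task UUIDs out of the status dump, then feed each one to stopTask. A minimal sketch; the output format and UUIDs below are invented placeholders, and the script only prints the commands it would run rather than calling vdsClient:

```shell
# Invented sample standing in for `vdsClient -s 0 getAllTasksStatuses`
# output (one task UUID at the start of each line).
tasks_output=$(cat <<'EOF'
00000000-0000-0000-0000-000000000001 : state running
00000000-0000-0000-0000-000000000002 : state finished
EOF
)

# Pull the first field (the task UUID) from every line.
task_ids=$(printf '%s\n' "$tasks_output" | awk '{print $1}')

# Dry run: show what would be stopped; remove the echo to act for real.
for id in $task_ids; do
    echo "would run: vdsClient -s 0 stopTask $id"
done
```

Note this only clears the task on the VDSM side; as the mail says, a task lingering in the web UI after that points at the engine database.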
[ovirt-users] [OT] Gmail is marking the list as spam.
Hi, Could someone take a look at Gmail's spam policy to see what is not being done? For months I have had to retrieve the list mails from the spam folder, no matter how much I try to mark them as legitimate. Regards ___ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
[ovirt-users] [OT] Gmail is marking the list as spam.
Hi, Could someone take a look at Gmail's spam policy to see what is not being done? For months I have had to retrieve the list mails from the spam folder, no matter how much I try to mark them as legitimate. Regards On 22/10/15 at 12:24 p.m., users-requ...@ovirt.org wrote: Today's Topics: 1. [ANN] oVirt 3.6.0 Third Release Candidate is now available for testing (Sandro Bonazzola) 2. Re: Testing self hosted engine in 3.6: hostname not resolved error (Gianluca Cecchi) 3. Re: 3.6 upgrade issue (Yaniv Dary) 4. Re: How to change the hosted engine VM RAM size after deploying (Simone Tiraboschi) -- Message: 1 Date: Thu, 22 Oct 2015 16:08:25 +0200 From: Sandro Bonazzola To: annou...@ovirt.org, users , devel Subject: [ovirt-users] [ANN] oVirt 3.6.0 Third Release Candidate is now available for testing Message-ID: Content-Type: text/plain; charset="utf-8" The oVirt Project is pleased to announce the availability of the Third Release Candidate of oVirt 3.6 for testing, as of October 22nd, 2015. This release is available now for Fedora 22, Red Hat Enterprise Linux 6.7, CentOS Linux 6.7 (or similar) and Red Hat Enterprise Linux 7.1, CentOS Linux 7.1 (or similar). This release supports Hypervisor Hosts running Red Hat Enterprise Linux 7.1, CentOS Linux 7.1 (or similar), Fedora 21 and Fedora 22. Highly experimental support for Debian 8.1 Jessie has been added too. This release of oVirt 3.6.0 includes numerous bug fixes. See the release notes [1] for an initial list of the new features and bugs fixed. 
Please refer to release notes [1] for Installation / Upgrade instructions. New oVirt Node ISO and oVirt Live ISO will be available soon as well[2]. Please note that mirrors[3] may need usually one day before being synchronized. Please refer to the release notes for known issues in this release. [1] http://www.ovirt.org/OVirt_3.6_Release_Notes [2] http://plain.resources.ovirt.org/pub/ovirt-3.6-pre/iso/ [3] http://www.ovirt.org/Repository_mirrors#Current_mirrors ___ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
Re: [ovirt-users] [RFI] oVirt 3.6 Planning
Thank you Itamar, I look forward to testing 3.6. Regards, > On Nov 4, 2015, at 9:06 p.m., Itamar Heim wrote: > > On 09/15/2014 08:21 AM, Juan Pablo Lorier wrote: >> +1 to ISO upload from the GUI > > was worked on, didn't make 3.6 > >> +1 to Ceph support (if the way is via Cinder, then integrate Cinder in >> oVirt as you did with Neutron to get around the lack of features in >> networking) > > in 3.6 > >> >> I've been asking for several things (with their respective RFEs) and as >> versions go by without success, I'm asking again: >> >> - 1049994 [RFE] Allow choosing network interface for gluster domain >> traffic > > in 3.6 > >> - 1049476 [RFE] Mix untagged and tagged Logical Networks on the same NIC > > in 3.6 > >> - 1029489 [RFE] Export not exporting direct lun disk > > not currently planned to happen (closed) > >> - 1051002 [RFE] ISO domain should be a simple NFS share containing ISOs >> > > still open. > >> >> >> ___ >> Users mailing list >> Users@ovirt.org >> http://lists.ovirt.org/mailman/listinfo/users >> > ___ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
Re: [ovirt-users] vlan-tagging on non-tagged network
Hi, My two cents is that oVirt shouldn't be doing MAC spoofing protection by default. There are several use cases where you may use virtual NICs defined within the guest, and this feature is going to create problems for users who may not know that there's MAC spoofing protection within oVirt. Think of keepalived's VMAC option, OpenVPN, and any TAP adapter you need to create. If you need to protect against spoofing attacks, you should use the hook or more powerful tools, and in any case you must be aware that you enabled this kind of protection. Regards, ___ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
Re: [ovirt-users] separate ovirtmgmt from glusterfs traffic
Hi, In my experience, having oVirt traffic on the same NIC as Gluster can make your platform unstable. I was using it for large-file storage, and Gluster generates so much traffic that oVirt got confused and started marking hosts as unavailable because of high latency. I opened an RFE over a year ago, but had no luck getting the team to implement it. In the RFE I asked for a way in the UI to choose which NIC to use for Gluster other than the management network, which is the one oVirt lets you use. There's another way to do this from outside oVirt: unregister and re-register the bricks using the Gluster console commands. This way, when you register the bricks, you can specify the IP address of the spare NIC, and the traffic will not interfere with the management network. There's a step I don't recall well: oVirt is going to need to know that the bricks are no longer on the management IP; maybe someone else on the list can help with this. I can tell you that if you search the list you'll see my posts about this and the replies of those who helped me back then. Regards, ___ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
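The re-registration idea above amounts to addressing each brick by its storage-network IP instead of the ovirtmgmt hostname. A minimal sketch that only composes the brick list (the 10.0.1.x addresses, the /gluster/brick1 path, and the volume name are invented), printing the volume-create command rather than running it:

```shell
# Invented storage-network addresses, one per host, and a brick path.
storage_ips="10.0.1.11 10.0.1.12 10.0.1.13"
brick_path="/gluster/brick1"

# Build "IP:path" brick specs so Gluster traffic rides the spare NIC,
# not the ovirtmgmt network.
bricks=""
for ip in $storage_ips; do
    bricks="$bricks ${ip}:${brick_path}"
done

# Dry run: show the command that would create the volume on those bricks.
echo "would run: gluster volume create data replica 3$bricks"
```

The same "IP:path" form applies when re-adding existing bricks; as the mail notes, oVirt must then be told the bricks moved off the management IP.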
Re: [ovirt-users] hosted engine on iSCSI deep dive
Hi, Great, let's wait a bit then :-) It's a great feature you're working on. I hope you can manage to get multipath customizations into the install process. Regards, On 13/11/2014 6:23, Jiri Moskovcak wrote: On 11/12/2014 06:06 PM, jplor...@gmail.com wrote: Hi, First, thanks for the video. I have a question about the setup, though: I don't see any options regarding multipath. You are connecting directly to the iSCSI LUN, but most of the time the connection is redundant and multipathd needs to be configured in order to have redundancy. Is that a future enhancement, or is multipath handled behind the scenes? If it is handled automatically, what about special configs for the device in multipath.conf? Regards Hi, unfortunately the current setup doesn't support multipath. It's planned as a future enhancement. --Jirka ___ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users ___ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
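Until multipath support lands in the installer, the per-device customization the thread asks about would live in /etc/multipath.conf. A minimal sketch with an invented device stanza (vendor, product, and option values are placeholders, not a tested config), written to a temp file and sanity-checked for balanced braces before it would ever be deployed:

```shell
# Write an example multipath.conf fragment to a temp file. All values
# here are invented for illustration.
conf=$(mktemp)
cat > "$conf" <<'EOF'
defaults {
    user_friendly_names yes
    polling_interval    5
}
devices {
    device {
        vendor               "EXAMPLE"
        product              "iSCSI-LUN"
        path_grouping_policy multibus
        no_path_retry        4
    }
}
EOF

# Cheap sanity check: every section opened must be closed.
opens=$(grep -c '{' "$conf")
closes=$(grep -c '}' "$conf")
echo "brace lines: $opens open, $closes close"
```

On a real host the fragment would go into /etc/multipath.conf followed by reloading multipathd; validating the syntax first avoids taking the paths down with a typo.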
Re: [ovirt-users] ovirt network VLAN tagging
Hi, It seems to me that it may be a switch configuration problem. The vms in the same host can talk to each other via the bridge, but if your switch is not configured to allow tagged traffic for vlans 50 and 60 (and all the way to the interface of the other host must allow tagged traffic for those vlans), you won't be able to get traffic. Regards, ___ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
Re: [ovirt-users] [RFI] oVirt 3.6 Planning
+1 to ISO upload from the GUI. +1 to Ceph support (if the way is via Cinder, then integrate Cinder in oVirt as you did with Neutron to get around the lack of features in networking). I've been asking for several things (with their respective RFEs) and as versions go by without success, I'm asking again: - 1049994 [RFE] Allow choosing network interface for gluster domain traffic - 1049476 [RFE] Mix untagged and tagged Logical Networks on the same NIC - 1029489 [RFE] Export not exporting direct lun disk - 1051002 [RFE] ISO domain should be a simple NFS share containing ISOs ___ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
Re: [ovirt-users] oVirt + opennode or oVirt+proxmox
Hi Robson, I don't know OpenNode, but I can tell you that I can't think of a way of using Proxmox alongside oVirt. Both are virtualization platforms based on KVM (so you can run VMs from one platform on the other without any conversion: just export/import, copy, or whatever you need to move from one to the other). They both have their upsides and downsides; basically, IMHO, I like Proxmox's decentralized way of managing the platform, but I love the great community around oVirt and its fast evolution even more. One thing is certain: a virtualization host can't be both an oVirt and a Proxmox virtualization host. Regards, ___ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
Re: [Users] oVirt 3.5 planning
El 02/03/14 18:59, Itamar Heim escribió: > On 02/28/2014 06:46 PM, Juan Pablo Lorier wrote: >> Hi, >> >> I'm kind of out of date at this time, but I'd like to propose something >> that was meant for 3.4 and I don't know it it made into it; use any nfs >> share as either a iso or export so you can just copy into the share and >> then update in some way the db. > > not yet in. > >> Also, make export domain able to be shared among dcs as is iso domain, >> that is an rfe from long time ago and a useful one. > > true. some relief via a glance storage domain allowing that. I know, but too much overhead in using glance. > >> Attaching and dettaching domains is both time consuming and boring. >> Also using tagged and untagged networks on top of the same nic. >> Everybody does that except for ovirt. >> I also like to say that tough I have a huge enthusiasm for ovirt's fast >> evolution, I think that you may need to slow down with adding new >> features until most of the rfes that have over a year are done, because >> other way it's kind of disappointing opening a rfe just to see it >> sleeping too much time. Don't take this wrong, I've been listened and >> helped by the team everytime I needed and I'm thankful for that. > > "age" is not always the best criteria for prioritiy. we have RFE's > open for several years it takes us time to get to. sometime newer RFEs > are more important/interesting (say, leveraging cloud-init or > neutron), sometime old RFEs are hard to close (get rid of storage > pool, etc.). > its a balancing act. but its also open source, which means anyone can > try and contribute for things they would like prioritized. > I'm aware of the open source nature of the project, my way of contributing is testing, reporting bugs and propose rfes, that much I can do. I understand your point, but I think that some rfes are as hard as useful so maybe there's some room for rebalancing the effort to try and close those ones. 
Regards, ___ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
Re: [Users] oVirt 3.5 planning
Hi, I'm kind of out of date at this time, but I'd like to propose something that was meant for 3.4, and I don't know if it made it in: allow any NFS share to be used as either an ISO or an export domain, so you can just copy files into the share and then update the DB in some way. Also, make the export domain able to be shared among DCs, as the ISO domain is; that is an RFE from a long time ago and a useful one. Attaching and detaching domains is both time-consuming and boring. Also, allow using tagged and untagged networks on top of the same NIC; everybody does that except oVirt. I'd also like to say that though I have huge enthusiasm for oVirt's fast evolution, I think you may need to slow down on adding new features until most of the RFEs that are over a year old are done, because otherwise it's kind of disappointing to open an RFE just to see it sleeping for so long. Don't take this the wrong way: I've been listened to and helped by the team every time I needed it, and I'm thankful for that. Regards, ___ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
Re: [Users] engine-backup --restore
I'll do that next week. Regards and again, thanks to everyone. On 13/02/14 13:50, Trey Dockendorf wrote: > A wiki page is a good idea, glad the steps worked in more than my case. > > - Trey > > On Thu, Feb 13, 2014 at 9:37 AM, Juan Pablo Lorier wrote: >> Hi Trey, >> >> Following your procedure, I was able to get the engine running. THANKS A >> LOT TO EVERYONE!! >> If you allow me, I'll create a wiki page with this mentioning you so >> others can get this easily. >> Regards, >> >> On 12/02/14 17:32, Trey Dockendorf wrote: >>> I was having the same issue (and posted about it today, with full >>> steps). I'd reply to my current post on the list but I don't seem to >>> receive my own posts. >>> >>> I have since taken these steps (as root) >>> >>> $ su - postgres -c "dropdb engine" >>> $ su - postgres -c "psql -c \"create user engine password ''\"" >>> $ su - postgres -c "psql -c \"create database engine owner engine >>> template template0 encoding 'UTF8' lc_collate 'en_US.UTF-8' lc_ctype >>> 'en_US.UTF-8'\"" >>> $ engine-backup --mode=restore --scope=all >>> --file=engine-20140211-1457.tar.bz2 --log=engine-backup.log >>> --change-db-credentials --db-host=localhost --db-port=5432 >>> --db-user=engine --db-name=engine --db-password= >>> Restoring... >>> Rewriting /etc/ovirt-engine/engine.conf.d/10-setup-database.conf >>> Note: you might need to manually fix: >>> - iptables/firewalld configuration >>> - autostart of ovirt-engine service >>> You can now start the engine service and then restart httpd >>> Done. >>> >>> $ engine-setup >>> >>> I believe I initially created the database incorrectly (ran 'createdb >>> engine' as postgres user). I was getting errors during engine-setup >>> that indicated the database could not be accessed. >>> >>> After the steps above, everything looks good. >>> >>> - Trey >>> >>> >>> On Wed, Feb 12, 2014 at 1:10 PM, Juan Pablo Lorier >>> wrote: >>>> Well, too soon to say goodbye. 
>>>> Though I used --change-db-credentials in the restore, the engine seems >>>> to be unable to connect to the database. I assume it didn't get the >>>> new password, so is there a way to tell the engine about the new password? >>>> Regards, >>>> >>>> On 12/02/14 16:03, Yedidyah Bar David wrote: >>>>> ----- Original Message ----- >>>>>> From: "Juan Pablo Lorier" >>>>>> To: "Yedidyah Bar David" >>>>>> Cc: "Sahina Bose" , "users" >>>>>> Sent: Wednesday, February 12, 2014 7:55:35 PM >>>>>> Subject: Re: [Users] Problems accessing the database >>>>>> >>>>>> Hi Yedidyah, >>>>>> >>>>>> But if I run engine-setup and then engine-backup restore, shouldn't it >>>>>> import the data into the existing db created by engine-setup? >>>>>> That's shown everywhere, so I thought it was a valid way to migrate >>>>> No. >>>>> >>>>> There is a specific case in which this works automatically: >>>>> All on the same host: >>>>> 1. engine-setup >>>>> 2. engine-backup --mode=backup >>>>> (perhaps do other stuff here) >>>>> 3. engine-cleanup >>>>> 4. engine-backup --mode=restore >>>>> >>>>> Why does this work? Because 'engine-cleanup', since 3.3, does not drop >>>>> the database nor user inside postgres. So when restore tries to access >>>>> this database using this user and password it succeeds. >>>>> >>>>> In general, if you do the restore on another machine, and do there >>>>> 'engine-setup; engine-cleanup' as a quick-postgres-provisioning-tool, >>>>> you end up almost ready, but not quite - because the password is random, >>>>> and therefore different between the installations. In principle you could >>>>> have provided just the password to restore, but we decided that if you >>>>> need to change the credentials, you should pass all of them (except for >>>>> defaults). 
>>>>> >>>>> Hope this clarifies,
Re: [Users] snapshots not shown in webadmin on 3.3.3
Hi Jiri, You are right, Chrome shows the snapshots. I'll change the BZ to reflect this; you may mark it as a duplicate then. Regards, On 13/02/14 06:27, Jiri Belka wrote: > On Wed, 12 Feb 2014 16:41:06 -0500 (EST) > Eli Mesika wrote: > >> >> ----- Original Message ----- >>> From: "Juan Pablo Lorier" >>> To: "users" >>> Sent: Wednesday, February 12, 2014 6:37:28 PM >>> Subject: [Users] snapshots not shown in webadmin on 3.3.3 >>> >>> Hi, >>> >>> I updated to 3.3.3 last week and now I find that the snapshots are >>> not shown in webadmin. I've taken some new ones to see if it was due to >>> the migration, but though they finish correctly, they are not shown in >>> the webadmin tab. >>> Is there a way to list the snapshots other than the webadmin? >> Juan >> I reproduced that in 3.4 beta 2; can you please open a bug on that? >> Thanks >> >> Eli > If this is just a Web UI issue then it has been known for a long time, but there's > no clear reproducer (the BZ was filed for RHEVM 3.3). This has something > to do with the Firefox cache. With a clean FF profile the snapshots are visible, > but after some time they are not visible until a full reload of the page. > > j.
Re: [Users] snapshots not shown in webadmin on 3.3.3
It's done: *Bug 1064946* <https://bugzilla.redhat.com/show_bug.cgi?id=1064946> - Snapshots are not shown Regards, On 12/02/14 19:41, Eli Mesika wrote: > > ----- Original Message ----- >> From: "Juan Pablo Lorier" >> To: "users" >> Sent: Wednesday, February 12, 2014 6:37:28 PM >> Subject: [Users] snapshots not shown in webadmin on 3.3.3 >> >> Hi, >> >> I updated to 3.3.3 last week and now I find that the snapshots are >> not shown in webadmin. I've taken some new ones to see if it was due to >> the migration, but though they finish correctly, they are not shown in >> the webadmin tab. >> Is there a way to list the snapshots other than the webadmin? > Juan > I reproduced that in 3.4 beta 2; can you please open a bug on that? > Thanks > > Eli > > >> Regards,
Re: [Users] engine-backup --restore
Hi Trey, Following your procedure, I was able to get the engine running. THANKS A LOT TO EVERYONE!! If you allow me, I'll create a wiki page with this mentioning you so others can get this easily. Regards, On 12/02/14 17:32, Trey Dockendorf wrote: > I was having the same issue (and posted about it today, with full > steps). I'd reply to my current post on the list but I don't seem to > receive my own posts. > > I have since taken these steps (as root) > > $ su - postgres -c "dropdb engine" > $ su - postgres -c "psql -c \"create user engine password ''\"" > $ su - postgres -c "psql -c \"create database engine owner engine > template template0 encoding 'UTF8' lc_collate 'en_US.UTF-8' lc_ctype > 'en_US.UTF-8'\"" > $ engine-backup --mode=restore --scope=all > --file=engine-20140211-1457.tar.bz2 --log=engine-backup.log > --change-db-credentials --db-host=localhost --db-port=5432 > --db-user=engine --db-name=engine --db-password= > Restoring... > Rewriting /etc/ovirt-engine/engine.conf.d/10-setup-database.conf > Note: you might need to manually fix: > - iptables/firewalld configuration > - autostart of ovirt-engine service > You can now start the engine service and then restart httpd > Done. > > $ engine-setup > > I believe I initially created the database incorrectly (ran 'createdb > engine' as postgres user). I was getting errors during engine-setup > that indicated the database could not be accessed. > > After the steps above, everything looks good. > > - Trey > > > On Wed, Feb 12, 2014 at 1:10 PM, Juan Pablo Lorier wrote: >> Well, too soon to say goodbye. >> Though I used --change-db-credentials in the restore, the engine seems >> to be unable to connect to the database. I assume it didn't get the >> new password, so is there a way to tell the engine about the new password? 
>> Regards, >> >> On 12/02/14 16:03, Yedidyah Bar David wrote: >>> ----- Original Message ----- >>>> From: "Juan Pablo Lorier" >>>> To: "Yedidyah Bar David" >>>> Cc: "Sahina Bose" , "users" >>>> Sent: Wednesday, February 12, 2014 7:55:35 PM >>>> Subject: Re: [Users] Problems accessing the database >>>> >>>> Hi Yedidyah, >>>> >>>> But if I run engine-setup and then engine-backup restore, shouldn't it >>>> import the data into the existing db created by engine-setup? >>>> That's shown everywhere, so I thought it was a valid way to migrate >>> No. >>> >>> There is a specific case in which this works automatically: >>> All on the same host: >>> 1. engine-setup >>> 2. engine-backup --mode=backup >>> (perhaps do other stuff here) >>> 3. engine-cleanup >>> 4. engine-backup --mode=restore >>> >>> Why does this work? Because 'engine-cleanup', since 3.3, does not drop >>> the database nor user inside postgres. So when restore tries to access >>> this database using this user and password it succeeds. >>> >>> In general, if you do the restore on another machine, and do there >>> 'engine-setup; engine-cleanup' as a quick-postgres-provisioning-tool, >>> you end up almost ready, but not quite - because the password is random, >>> and therefore different between the installations. In principle you could >>> have provided just the password to restore, but we decided that if you >>> need to change the credentials, you should pass all of them (except for >>> defaults). >>> >>> Hope this clarifies,
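Several replies in this thread come down to the engine reading its database credentials from /etc/ovirt-engine/engine.conf.d/10-setup-database.conf (the ENGINE_DB_* variables mentioned later in the thread). As a rough sketch of what that file's key=value format looks like and how one might check what restore wrote into it; the sample values below are invented for illustration, not taken from a real installation:

```shell
# Illustrative copy of the key=value format used by
# 10-setup-database.conf (values invented for this sketch).
cat > /tmp/10-setup-database.conf <<'EOF'
ENGINE_DB_HOST="localhost"
ENGINE_DB_PORT="5432"
ENGINE_DB_USER="engine"
ENGINE_DB_DATABASE="engine"
ENGINE_DB_PASSWORD="s3cret"
EOF

# Pull one variable out, stripping the surrounding quotes, to compare
# against what was passed to --change-db-credentials.
db_user=$(sed -n 's/^ENGINE_DB_USER="\(.*\)"$/\1/p' /tmp/10-setup-database.conf)
echo "db user: $db_user"
```

If the values here do not match what psql accepts on the new box, the engine fails to authenticate exactly as described in this thread.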
Re: [Users] engine-backup --restore
Hi David, It ended correctly and the password was changed in 10-setup-database.conf, but there were still errors in the engine log pointing to an authentication problem against the database. I'm starting over again with some hints from another post; let's see what happens. Regards, On 13/02/14 04:23, Yedidyah Bar David wrote: > ----- Original Message ----- >> From: "Alon Bar-Lev" >> To: "Juan Pablo Lorier" >> Cc: "Yedidyah Bar David" , "users" >> Sent: Wednesday, February 12, 2014 9:13:30 PM >> Subject: Re: [Users] engine-backup --restore >> >> >> >> ----- Original Message ----- >>> From: "Juan Pablo Lorier" >>> To: "Yedidyah Bar David" >>> Cc: "users" >>> Sent: Wednesday, February 12, 2014 9:10:43 PM >>> Subject: Re: [Users] engine-backup --restore >>> >>> Well, too soon to say goodbye. >>> Though I used --change-db-credentials in the restore, the engine seems >>> to be unable to connect to the database. I assume it didn't get the >>> new password, so is there a way to tell the engine about the new password? >>> Regards, >> /etc/ovirt-engine/engine.conf/10-setup-database.conf or similar. >> Look for ENGINE_DB_* variables. > Indeed, but this should happen automatically if you passed correct > credentials and restore succeeded. See e.g. the other report posted. > Did restore succeed? > > Thanks,
Re: [Users] engine-backup --restore
Thanks Alon, you were right. On 12/02/14 17:13, Alon Bar-Lev wrote: > > ----- Original Message ----- >> From: "Juan Pablo Lorier" >> To: "Yedidyah Bar David" >> Cc: "users" >> Sent: Wednesday, February 12, 2014 9:10:43 PM >> Subject: Re: [Users] engine-backup --restore >> >> Well, too soon to say goodbye. >> Though I used --change-db-credentials in the restore, the engine seems >> to be unable to connect to the database. I assume it didn't get the >> new password, so is there a way to tell the engine about the new password? >> Regards, > /etc/ovirt-engine/engine.conf/10-setup-database.conf or similar. > Look for ENGINE_DB_* variables. > >> On 12/02/14 16:03, Yedidyah Bar David wrote: >>> ----- Original Message ----- >>>> From: "Juan Pablo Lorier" >>>> To: "Yedidyah Bar David" >>>> Cc: "Sahina Bose" , "users" >>>> Sent: Wednesday, February 12, 2014 7:55:35 PM >>>> Subject: Re: [Users] Problems accessing the database >>>> >>>> Hi Yedidyah, >>>> >>>> But if I run engine-setup and then engine-backup restore, shouldn't it >>>> import the data into the existing db created by engine-setup? >>>> That's shown everywhere, so I thought it was a valid way to migrate >>> No. >>> >>> There is a specific case in which this works automatically: >>> All on the same host: >>> 1. engine-setup >>> 2. engine-backup --mode=backup >>> (perhaps do other stuff here) >>> 3. engine-cleanup >>> 4. engine-backup --mode=restore >>> >>> Why does this work? Because 'engine-cleanup', since 3.3, does not drop >>> the database nor user inside postgres. So when restore tries to access >>> this database using this user and password it succeeds. >>> >>> In general, if you do the restore on another machine, and do there >>> 'engine-setup; engine-cleanup' as a quick-postgres-provisioning-tool, >>> you end up almost ready, but not quite - because the password is random, >>> and therefore different between the installations. 
In principle you could >>> have provided just the password to restore, but we decided that if you >>> need to change the credentials, you should pass all of them (except for >>> defaults). >>> >>> Hope this clarifies,
Re: [Users] engine-backup --restore
Well, too soon to say goodbye. Though I used --change-db-credentials in the restore, the engine seems to be unable to connect to the database. I assume it didn't get the new password, so is there a way to tell the engine about the new password? Regards, On 12/02/14 16:03, Yedidyah Bar David wrote: > ----- Original Message ----- >> From: "Juan Pablo Lorier" >> To: "Yedidyah Bar David" >> Cc: "Sahina Bose" , "users" >> Sent: Wednesday, February 12, 2014 7:55:35 PM >> Subject: Re: [Users] Problems accessing the database >> >> Hi Yedidyah, >> >> But if I run engine-setup and then engine-backup restore, shouldn't it >> import the data into the existing db created by engine-setup? >> That's shown everywhere, so I thought it was a valid way to migrate > No. > > There is a specific case in which this works automatically: > All on the same host: > 1. engine-setup > 2. engine-backup --mode=backup > (perhaps do other stuff here) > 3. engine-cleanup > 4. engine-backup --mode=restore > > Why does this work? Because 'engine-cleanup', since 3.3, does not drop > the database nor user inside postgres. So when restore tries to access > this database using this user and password it succeeds. > > In general, if you do the restore on another machine, and do there > 'engine-setup; engine-cleanup' as a quick-postgres-provisioning-tool, > you end up almost ready, but not quite - because the password is random, > and therefore different between the installations. In principle you could > have provided just the password to restore, but we decided that if you > need to change the credentials, you should pass all of them (except for > defaults). > > Hope this clarifies,
Re: [Users] engine-backup --restore
OK, that did work. I dropped the database, created it again, and also changed the password for the engine user. That did the trick. I now have to figure out why the web UI is blank, but at least it's a step forward. Regards, On 12/02/14 16:03, Yedidyah Bar David wrote: > ----- Original Message ----- >> From: "Juan Pablo Lorier" >> To: "Yedidyah Bar David" >> Cc: "Sahina Bose" , "users" >> Sent: Wednesday, February 12, 2014 7:55:35 PM >> Subject: Re: [Users] Problems accessing the database >> >> Hi Yedidyah, >> >> But if I run engine-setup and then engine-backup restore, shouldn't it >> import the data into the existing db created by engine-setup? >> That's shown everywhere, so I thought it was a valid way to migrate > No. > > There is a specific case in which this works automatically: > All on the same host: > 1. engine-setup > 2. engine-backup --mode=backup > (perhaps do other stuff here) > 3. engine-cleanup > 4. engine-backup --mode=restore > > Why does this work? Because 'engine-cleanup', since 3.3, does not drop > the database nor user inside postgres. So when restore tries to access > this database using this user and password it succeeds. > > In general, if you do the restore on another machine, and do there > 'engine-setup; engine-cleanup' as a quick-postgres-provisioning-tool, > you end up almost ready, but not quite - because the password is random, > and therefore different between the installations. In principle you could > have provided just the password to restore, but we decided that if you > need to change the credentials, you should pass all of them (except for > defaults). > > Hope this clarifies,
Re: [Users] engine-backup --restore
Almost. When you say random password, do you mean the one for the engine user in the database? The new install and the old one differ in some things in the db, so I tried to make them the same, but I saw a post from a user who removed the engine database and user and created them again before trying to restore; is that a good path? Regards, On 12/02/14 16:03, Yedidyah Bar David wrote: > ----- Original Message ----- >> From: "Juan Pablo Lorier" >> To: "Yedidyah Bar David" >> Cc: "Sahina Bose" , "users" >> Sent: Wednesday, February 12, 2014 7:55:35 PM >> Subject: Re: [Users] Problems accessing the database >> >> Hi Yedidyah, >> >> But if I run engine-setup and then engine-backup restore, shouldn't it >> import the data into the existing db created by engine-setup? >> That's shown everywhere, so I thought it was a valid way to migrate > No. > > There is a specific case in which this works automatically: > All on the same host: > 1. engine-setup > 2. engine-backup --mode=backup > (perhaps do other stuff here) > 3. engine-cleanup > 4. engine-backup --mode=restore > > Why does this work? Because 'engine-cleanup', since 3.3, does not drop > the database nor user inside postgres. So when restore tries to access > this database using this user and password it succeeds. > > In general, if you do the restore on another machine, and do there > 'engine-setup; engine-cleanup' as a quick-postgres-provisioning-tool, > you end up almost ready, but not quite - because the password is random, > and therefore different between the installations. In principle you could > have provided just the password to restore, but we decided that if you > need to change the credentials, you should pass all of them (except for > defaults). > > Hope this clarifies,
Re: [Users] Problems accessing the database
Hi Yedidyah, But if I run engine-setup and then engine-backup restore, shouldn't it import the data into the existing db created by engine-setup? That's shown everywhere, so I thought it was a valid way to migrate. Regards, On 12/02/14 15:44, Yedidyah Bar David wrote: > ----- Original Message ----- >> From: "Juan Pablo Lorier" >> To: "Sahina Bose" >> Cc: "users" >> Sent: Wednesday, February 12, 2014 7:01:20 PM >> Subject: [Users] Problems accessing the database >> >> Hi Sahina, >> >> Sorry to ping you directly, but you were of great help last time. I'm >> also copying the list. >> In the process of recreating the DC I've decided to migrate the engine >> from a VirtualBox vm to a bare metal host and I can't get engine-backup >> to restore the engine. >> Both are CentOS 6.5 with engine 3.3.3, but on the new box I can't log in to >> the database with psql -U engine -W >> The error is: >> >> [root@centos-ovirt ~]# engine-backup --mode=restore >> --file=/root/backup_20140212.bkp --log=/root/restore.log >> Restoring... >> FATAL: Can't connect to the database >> >> I've added to pg_hba.conf the line I found on the web: >> >> host    all    all    md5 >> >> and tried with the conf from the one that works, but nothing. >> Any clues? > Unlike engine-setup, engine-backup does nothing for you regarding > creating the database - it's basically the same as creating a database > to be used for a remote-db setup. See e.g. [1] for details, also > 'engine-backup --help'. > > [1] http://www.ovirt.org/OVirt_Engine_Development_Environment#Database
[Users] Problems accessing the database
Hi Sahina, Sorry to ping you directly, but you were of great help last time. I'm also copying the list. In the process of recreating the DC I've decided to migrate the engine from a VirtualBox vm to a bare metal host and I can't get engine-backup to restore the engine. Both are CentOS 6.5 with engine 3.3.3, but on the new box I can't log in to the database with psql -U engine -W The error is: [root@centos-ovirt ~]# engine-backup --mode=restore --file=/root/backup_20140212.bkp --log=/root/restore.log Restoring... FATAL: Can't connect to the database I've added to pg_hba.conf the line I found on the web: host    all    all    md5 and tried with the conf from the one that works, but nothing. Any clues? Regards,
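The pg_hba.conf line quoted above lost its whitespace somewhere in the mail; a "host" entry actually needs five columns (TYPE, DATABASE, USER, ADDRESS, METHOD). A sketch of a correctly formed line follows; the ADDRESS value is my assumption for illustration, since the original message did not include one:

```
# TYPE  DATABASE  USER  ADDRESS       METHOD
host    all       all   127.0.0.1/32  md5
```

After editing pg_hba.conf, PostgreSQL needs a reload before the change takes effect.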
[Users] snapshots not shown in webadmin on 3.3.3
Hi, I updated to 3.3.3 last week and now I find that the snapshots are not shown in webadmin. I've taken some new ones to see if it was due to the migration, but though they finish correctly, they are not shown in the webadmin tab. Is there a way to list the snapshots other than the webadmin? Regards,
[Users] Dedicated Bonding interface for Gluster
Hi Mario, I've created a little guide in the wiki for this purpose. I haven't updated it with a tip I was told about later, using the hosts file (or DNS) instead of fixed IPs, so you may want to look into that to optimize the solution. The wiki page is http://www.ovirt.org/oVirt_Wiki:How_to_change_Gluster%27s_network_interface Hope it helps. Regards,
Re: [Users] Ovirt Gluster problems
Hi Sahina, it's been some days since I got this working thanks to your help. I still haven't had the time to seek through the logs to find out when the engine left things behind without erasing them after the force deletion of the DC, but if you wish, I can send you the raw logs and you may search them. Tell me if that's ok with you and I'll send you a Google Drive link. Regards, On 29/01/14 07:28, Sahina Bose wrote: > > On 01/29/2014 04:15 AM, Steve Dainard wrote: >> Not sure if this is exactly your issue, but this post >> here: http://comments.gmane.org/gmane.comp.emulators.ovirt.user/12200 >> might lead you in the right direction. >> >> "one note - if you back it up while its attached to an engine, you will >> need to edit its meta data file to remove the association to allow the >> other engine to connect it to the new pool for restore." >> > > Did this solve your issue? > > If not, could you let us know the error messages from the logs? > (engine.log) > If you're looking to remove a host from a gluster cluster when there > are no online hosts, checking the Force option should do this for you. > >> *Steve Dainard * >> >> >> >> On Tue, Jan 28, 2014 at 12:41 PM, Juan Pablo Lorier >> mailto:jplor...@gmail.com>> wrote: >> >> Hi, >> >> I had some issues with a gluster cluster and after some time >> trying to >> get the storage domain up or delete it (I opened a BZ about a >> deadlock >> in the process of removing the domain), I gave up and destroyed >> the DC. >> The thing is that I want to add the hosts that were part of the >> DC and >> now I find that I can't, as they have the volume. I try to stop the >> volume >> but I can't, as no host is running in the deleted cluster and for some >> reason ovirt needs that. >> I can't delete the hosts either as they have the volume... so >> I'm back >> in another chicken and egg problem. >> Any hints?? 
>> >> PS: I can't nuke the whole oVirt platform as I have another DC in >> production, otherwise I would :-) >> >> Regards,
Re: [Users] My first wiki page
Thank you, I'll update the wiki in a few days to include this info (and I'll use it myself). Regards, On 05/02/14 04:43, Sahina Bose wrote: On 02/03/2014 07:18 PM, Juan Pablo Lorier wrote: Hi, I've created my first wiki page and I'd like someone to review it and tell me if there's something that needs to be changed (besides the fact that it does not have any style yet). The URL is http://www.ovirt.org/oVirt_Wiki:How_to_change_Gluster%27s_network_interface Regards, Firstly, thanks for putting this information up! Some comments - 1. when you use different IP addresses for engine-to-gluster-host (say IP1) and gluster-to-gluster communication (say IP2), operations from ovirt engine like add brick or remove brick will fail (as the brick is added with IP1, which gluster does not understand). To work around this, it is better to use an FQDN both for registering the host with the engine and for peer probing the host from the gluster CLI. You could have multiple IP addresses on the host resolve to the same FQDN. 2. To reuse a brick directory, gluster provides the force option during volume creation as well as when adding bricks. This is available from gluster 3.5 upwards. [Adding Vijay to correct me, if I'm wrong]
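The hosts-file tip mentioned above can be sketched like this: each storage node resolves its peers' names to addresses on the dedicated gluster interface, so peer probes and brick paths use names rather than fixed IPs. The node names and addresses below are invented for illustration:

```shell
# Illustrative /etc/hosts fragment for a gluster node: peer names map
# to IPs on the dedicated storage interface (names/addresses invented).
cat > /tmp/hosts.gluster <<'EOF'
172.16.0.11  gluster1.example.com
172.16.0.12  gluster2.example.com
EOF

# Gluster operations would then use the names, e.g.:
#   gluster peer probe gluster2.example.com
# Moving storage traffic to another interface then becomes an edit of
# these entries rather than a volume reconfiguration.
awk '$2 == "gluster2.example.com" {print $1}' /tmp/hosts.gluster
```

The engine, meanwhile, can resolve the same names over the management network, which is the point of Sahina's FQDN comment above.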