Re: [ovirt-users] Storage domain not in pool issue
On 03/04/15 08:12, VONDRA Alain wrote:
> Hi, Do you have any news concerning my problem ?

I'm sorry that I don't have any new recommendations beyond my previous comments. I'm hoping Federico or Maor will have some expertise to lend to the problem. At this stage, your best bet might be to:

1. Stop the engine.
2. Clear the old storage pool from the storage domain metadata.
3. Restart the engine.

Unfortunately, I am not sure what the best/safest way to accomplish #2 is, and I am hoping that someone who has been through this process can assist with it.

--
Adam Litke
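For anyone attempting step 2 later, a minimal sketch of where the stale pool association lives, assuming a file-based (NFS) storage domain; the mount path and UUID below are placeholders, and the metadata should be backed up before touching anything:

    # Locate the domain metadata (file-based domain; adjust mount path and UUID)
    grep POOL_UUID /rhev/data-center/mnt/<server:_export_path>/<sd-uuid>/dom_md/metadata

    # Back it up before editing anything
    cp /rhev/data-center/mnt/<server:_export_path>/<sd-uuid>/dom_md/metadata \
       /rhev/data-center/mnt/<server:_export_path>/<sd-uuid>/dom_md/metadata.bak

For block domains the same metadata lives on a dedicated metadata LV rather than in a plain file, so this sketch does not apply there.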
[ovirt-users] benefit to spice-xpi plugin over native spice and remote-viewer
Hi. I'm wondering if someone can explain the benefit of using the spice-xpi plugin to access a console versus using remote-viewer natively from the browser, specifically on a RHEL system?

In particular, if you have the virt-viewer package installed, and not spice-xpi, and you go to visit a console, then the client downloads the configuration file, /tmp/console.vv, and calls "remote-viewer /tmp/console.vv". On the other hand, if you have spice-xpi installed, it seems that the client doesn't need to download the configuration file first, but calls "remote-viewer --spice-controller".

In both cases, it appears to me that the result is the same. However, using spice-xpi, there seems to be an additional 4-second delay before I get to the console. The time doesn't really matter. I'm just wondering if there's a benefit to using spice-xpi?

Jason.
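For context, a sketch of what the downloaded console.vv typically contains; the values below are illustrative only, not taken from any particular setup:

    [virt-viewer]
    type=spice
    host=192.0.2.10
    port=5900
    tls-port=5901
    password=<one-time ticket>
    ca=<inline CA certificate>
    fullscreen=0

remote-viewer reads this INI-style file directly, which is why the plain virt-viewer path works without any browser plugin.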
[ovirt-users] Community Help for Austrian Conference?
Rene Koch is hosting a booth at Grazer Linuxtage on April 25 [1], and would like to know if anyone from the EU would like to swing by and help man the booth on that day. It's about a 3.5-hour trip from Brno.

Any takers?

BKP

[1] https://www.linuxtage.at/

--
Brian Proffitt
Community Liaison, oVirt
Open Source and Standards, Red Hat - http://community.redhat.com
Phone: +1 574 383 9BKP
IRC: bkp @ OFTC
[ovirt-users] Re: Migration failed, No available host found
Hi Jason,

Could you explain in detail how you changed the MTU? And how can the MTU issue be identified as the root cause from the logs?

-----Original Message-----
From: users-boun...@ovirt.org [mailto:users-boun...@ovirt.org] On Behalf Of Jason Keltz
Sent: April 7, 2015 0:14
To: users@ovirt.org
Subject: Re: [ovirt-users] Migration failed, No available host found

Hi Artyom,

The problems were caused by an issue with the MTU on the hosts. I have rectified the issue and can now migrate VMs.

Jason.

On 04/06/2015 10:57 AM, Jason Keltz wrote:
> Hi Artyom,
> Here are the vdsm logs from virt1, virt2 (where the node is running), and virt3. The logs from virt2 look suspicious, but I'm still not sure of the problem.
> http://goo.gl/GjbWUP
> Jason.

On 04/06/2015 09:42 AM, Artyom Lukianov wrote:
> The engine tries to migrate the vm to some available host, but the migration failed, so the engine tries another host. For some reason the migration failed on all hosts:
> (org.ovirt.thread.pool-8-thread-38) [71f97a52] Command MigrateStatusVDSCommand(HostName = virt2, HostId = 1d1d1fbb-3067-4703-8b51-e0a231d344e6, vmId=9de649ca-c9a9-4ba7-bb2c-61c44e2819af) execution failed. Exception: VDSErrorException: VDSGenericException: VDSErrorException: Failed to MigrateStatusVDS, error = Fatal error during migration, code = 12
> For further investigation we need the vdsm logs (/var/log/vdsm/vdsm.log) from the source and also from the destination hosts.

[Rest of quoted thread and engine.log snipped; see the full thread later in this digest.]
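Not speaking for the original poster, but as a general sketch of how one might verify and change the MTU on a host (the interface name and MTU values here are examples only):

    # Check the current MTU on the migration network interface
    ip link show em1

    # Test whether large frames pass between hosts without fragmentation
    # (8972 = 9000 bytes minus 28 bytes of IP + ICMP headers)
    ping -M do -s 8972 192.168.0.36

    # Change it on the fly
    ip link set em1 mtu 9000

    # Make it persistent on RHEL/CentOS: add "MTU=9000" to
    # /etc/sysconfig/network-scripts/ifcfg-em1 and restart networking

If the large ping fails with "Frag needed" while small pings succeed, an MTU mismatch along the path is the usual suspect; in the vdsm and libvirt logs this tends to surface only indirectly, as stalled or aborted migration traffic.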
Re: [ovirt-users] Cannot install vdsm on RHEL7
- Original Message -
> From: knarra kna...@redhat.com
> To: users@ovirt.org
> Sent: Monday, April 6, 2015 12:50:45 PM
> Subject: [ovirt-users] Cannot install vdsm on RHEL7
>
> Hi, I am trying to install vdsm on RHEL7 and I am hitting a dependency issue. I have the following repos enabled on my system. Can someone tell me if I need to enable any other repos?
>
> epel.repo
> epel-testing.repo
> ovirt-master-dependencies.repo
> ovirt-master-snapshot.repo
>
> Here is the output from the "yum install vdsm" command:
>
> --> Finished Dependency Resolution
> Error: Package: python-six-1.7.3-1.el6.noarch (epel)

I am just guessing... you are trying to install vdsm for el6 on el7.

>        Requires: python(abi) = 2.6
>        Installed: python-2.7.5-16.el7.x86_64 (@anaconda/7.1)
>            python(abi) = 2.7
>            python(abi) = 2.7
> Error: Package: unbound-libs-1.5.1-1.el6.x86_64 (epel)
>        Requires: libpython2.6.so.1.0()(64bit)
> Error: Package: unbound-libs-1.5.1-1.el6.x86_64 (epel)
>        Requires: libevent-1.4.so.2()(64bit)
>  You could try using --skip-broken to work around the problem
>  You could try running: rpm -Va --nofiles --nodigest
>
> Thanks
> kasturi.
Re: [ovirt-users] Cannot install vdsm on RHEL7
On Mon, Apr 6, 2015 at 9:50 PM, knarra kna...@redhat.com wrote:
> Error: Package: python-six-1.7.3-1.el6.noarch (epel)

Hello,

My very first post, so hello all! It seems your EPEL repo is for el6 while the rest of the system is on el7?
Re: [ovirt-users] Cannot install vdsm on RHEL7
On 04/06/2015 03:30 PM, Alvaro Miranda Aguilera wrote:
> On Mon, Apr 6, 2015 at 9:50 PM, knarra kna...@redhat.com wrote:
>> Error: Package: python-six-1.7.3-1.el6.noarch (epel)
>
> Hello,
> My very first post, so hello all! It seems your EPEL repo is for el6 while the rest of the system is on el7?

Hi Alvaro,

You are right. My EPEL repo is for RHEL6. I am going to remove that, add the repo for RHEL7, and try again.

Thanks
kasturi.
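For reference, a minimal sketch of that fix; the EPEL URL is the usual download location, but verify it before use:

    # Remove the el6 EPEL repo definitions
    rm /etc/yum.repos.d/epel.repo /etc/yum.repos.d/epel-testing.repo

    # Install the el7 EPEL release package
    yum install https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm

    # Refresh metadata and retry
    yum clean all
    yum install vdsm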
[ovirt-users] Cannot install vdsm on RHEL7
Hi,

I am trying to install vdsm on RHEL7 and I am hitting a dependency issue. I have the following repos enabled on my system. Can someone tell me if I need to enable any other repos?

epel.repo
epel-testing.repo
ovirt-master-dependencies.repo
ovirt-master-snapshot.repo

Here is the output from the "yum install vdsm" command:

--> Finished Dependency Resolution
Error: Package: python-six-1.7.3-1.el6.noarch (epel)
       Requires: python(abi) = 2.6
       Installed: python-2.7.5-16.el7.x86_64 (@anaconda/7.1)
           python(abi) = 2.7
           python(abi) = 2.7
Error: Package: unbound-libs-1.5.1-1.el6.x86_64 (epel)
       Requires: libpython2.6.so.1.0()(64bit)
Error: Package: unbound-libs-1.5.1-1.el6.x86_64 (epel)
       Requires: libevent-1.4.so.2()(64bit)
 You could try using --skip-broken to work around the problem
 You could try running: rpm -Va --nofiles --nodigest

Thanks
kasturi.
Re: [ovirt-users] Can't export template with 3.5.1
- Original Message -
> From: Bob Doolittle b...@doolittle.us.com
> To: users-ovirt users@ovirt.org
> Sent: Monday, April 6, 2015 2:05:10 AM
> Subject: Re: [ovirt-users] Can't export template with 3.5.1
>
> No suggestions? This seems pretty basic. Any input would be appreciated.
>
> Thanks,
> Bob

The cause is that for some reason vdsm thinks the image doesn't exist; I think it looks for it on the destination storage before the copy has started..? Maybe someone from the storage team could help here...

The call to getVolumeInfo with the same image id, on the source storage, just before the copy request:

Thread-24222::DEBUG::2015-03-26 13:06:51,189::__init__::500::jsonrpc.JsonRpcServer::(_serveRequest) Return 'Volume.getInfo' in bridge with {'status': 'OK', 'domain': '81d75e8f-dcac-439b-9107-110eb4392425', 'voltype': 'SHARED', 'description': 'Active VM', 'parent': '----', 'format': 'RAW', 'image': '8dabd35a-1bf0-4445-9182-19b9e90dcd84', 'ctime': '1427389373', 'disktype': '2', 'legality': 'LEGAL', 'allocType': 'SPARSE', 'mtime': '0', 'apparentsize': '21474836480', 'children': [], 'pool': '', 'capacity': '21474836480', 'uuid': u'036800fe-220b-46b8-a30e-ef3302fcf257', 'truesize': '7800082432', 'type': 'SPARSE'}

The call to copy with the right src/dst storage:

Thread-24225::DEBUG::2015-03-26 13:06:51,580::__init__::469::jsonrpc.JsonRpcServer::(_serveRequest) Calling 'Volume.copy' in bridge with {u'preallocate': 2, u'volFormat': 5, u'volType': 6, u'dstImgUUID': u'8dabd35a-1bf0-4445-9182-19b9e90dcd84', u'dstVolUUID': u'036800fe-220b-46b8-a30e-ef3302fcf257', u'dstSdUUID': u'b20fd7d9-0e75-43b0-9203-df3dd7630738', u'volumeID': u'036800fe-220b-46b8-a30e-ef3302fcf257', u'imageID': u'8dabd35a-1bf0-4445-9182-19b9e90dcd84', u'postZero': u'false', u'storagepoolID': u'0002-0002-0002-0002-0270', u'storagedomainID': u'81d75e8f-dcac-439b-9107-110eb4392425', u'force': u'false', u'desc': u''}

Then the image is not found on the destination SD:

c50cc0bb-5ec7-445f-99e6-e413e3e2ee34::ERROR::2015-03-26 13:06:51,933::image::748::Storage.Image::(copyCollapsed) Unexpected error
Traceback (most recent call last):
  File /usr/share/vdsm/storage/image.py, line 730, in copyCollapsed
    srcVolUUID=volume.BLANK_UUID)
  File /usr/share/vdsm/storage/sd.py, line 430, in createVolume
    preallocate, diskType, volUUID, desc, srcImgUUID, srcVolUUID)
  File /usr/share/vdsm/storage/volume.py, line 375, in create
    imgPath = image.Image(repoPath).create(sdUUID, imgUUID)
  File /usr/share/vdsm/storage/image.py, line 126, in create
    os.mkdir(imageDir)
OSError: [Errno 2] No such file or directory: '/rhev/data-center/0002-0002-0002-0002-0270/b20fd7d9-0e75-43b0-9203-df3dd7630738/images/8dabd35a-1bf0-4445-9182-19b9e90dcd84'

On 03/26/2015 01:19 PM, Bob Doolittle wrote:
> I created a template from a VM, which completed successfully. Now I am trying to export that template, and I get an error:
>
> Failed to complete copy of Template Centos6 to Domain xion2-tmpexport.
>
> Any suggestions? There is plenty of free space on disk. I have attached my VDSM log and my engine and server logs. I am using Fedora 20 for the (single) host and the self-hosted engine, oVirt 3.5.1.1-1.
>
> Thanks,
> Bob
>
> I've linked 2 files to this email:
> vdsm.log (20.6 MB) https://db.tt/MEbYnz2Z
> engine.log (2.4 MB) https://db.tt/13hMN54l
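As a rough first check, using the paths straight from the traceback above, one could verify on the host that the export domain is actually reachable before retrying:

    # Does the images directory of the destination (export) domain exist?
    ls -ld /rhev/data-center/0002-0002-0002-0002-0270/b20fd7d9-0e75-43b0-9203-df3dd7630738/images

    # Is the export domain's storage mounted on this host at all?
    mount | grep b20fd7d9

If the directory is missing while the domain shows as active in the engine, that matches the traceback: vdsm cannot create the image directory on the destination, so the copy fails before any data moves.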
Re: [ovirt-users] Anyone using YAML support in the RESTAPI?
On 04/02/2015 05:17 PM, Sven Kieske wrote:
> On 31/03/15 15:34, Juan Hernández wrote:
>> Hello,
>> Does anyone have a good reason not to remove the YAML support from the RESTAPI? As far as I know it isn't used or tested in any reasonable way, so I would like to remove it completely for 3.6. If anyone is using it and can't move to JSON or XML, please speak up.
>
> I didn't even _know_ there was YAML support. I don't use it atm, but what is the intention of the removal? Code cleanup? Can't this get auto-generated?

The main reason is that currently we aren't maintaining or testing this YAML support in any way, so unexpected issues will likely arise if someone does use it. Thus I prefer to explicitly tell users to use one of the two really supported media types (XML and JSON). It also helps clean up the code a bit, but that is not a big concern.
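For anyone switching away from YAML, selecting one of the supported media types is just a matter of the Accept header; a sketch (the engine URL and credentials are placeholders):

    # Ask the RESTAPI for JSON
    curl -k -u 'admin@internal:password' \
         -H 'Accept: application/json' \
         https://engine.example.com/ovirt-engine/api/vms

    # Or for XML (the default)
    curl -k -u 'admin@internal:password' \
         -H 'Accept: application/xml' \
         https://engine.example.com/ovirt-engine/api/vms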
Re: [ovirt-users] Migration failed, No available host found
The engine tries to migrate the vm to some available host, but the migration failed, so the engine tries another host. For some reason the migration failed on all hosts:

(org.ovirt.thread.pool-8-thread-38) [71f97a52] Command MigrateStatusVDSCommand(HostName = virt2, HostId = 1d1d1fbb-3067-4703-8b51-e0a231d344e6, vmId=9de649ca-c9a9-4ba7-bb2c-61c44e2819af) execution failed. Exception: VDSErrorException: VDSGenericException: VDSErrorException: Failed to MigrateStatusVDS, error = Fatal error during migration, code = 12

For further investigation we need the vdsm logs (/var/log/vdsm/vdsm.log) from the source and also from the destination hosts.
Thanks

- Original Message -
From: Jason Keltz j...@cse.yorku.ca
To: users users@ovirt.org
Sent: Monday, April 6, 2015 3:47:23 PM
Subject: [ovirt-users] Migration failed, No available host found

[Quoted original message and engine.log snipped; see the message below.]
[ovirt-users] Migration failed, No available host found
Hi. I have 3 nodes in one cluster and 1 VM running on node2. I'm trying to move the VM to node 1 or node 3, and it fails with the error:

Migration failed, No available host found

I'm unable to decipher engine.log to determine the cause of the problem. Below are what seem to be the relevant lines from the log. Any help would be appreciated. Thank you!

Jason.

---
2015-04-06 08:31:56,554 INFO [org.ovirt.engine.core.bll.MigrateVmCommand] (ajp--127.0.0.1-8702-5) [3b191496] Lock Acquired to object EngineLock [exclusiveLocks= key: 9de649ca-c9a9-4ba7-bb2c-61c44e2819af value: VM , sharedLocks= ]
2015-04-06 08:31:56,686 INFO [org.ovirt.engine.core.bll.MigrateVmCommand] (org.ovirt.thread.pool-8-thread-20) [3b191496] Running command: MigrateVmCommand internal: false. Entities affected : ID: 9de649ca-c9a9-4ba7-bb2c-61c44e2819af Type: VMAction group MIGRATE_VM with role type USER, ID: 9de649ca-c9a9-4ba7-bb2c-61c44e2819af Type: VMAction group EDIT_VM_PROPERTIES with role type USER, ID: 8d432949-e03c-4950-a91a-160727f7bdf2 Type: VdsGroupsAction group CREATE_VM with role type USER
2015-04-06 08:31:56,703 INFO [org.ovirt.engine.core.bll.scheduling.policyunits.HaReservationWeightPolicyUnit] (org.ovirt.thread.pool-8-thread-20) [3b191496] Started HA reservation scoring method
2015-04-06 08:31:56,727 INFO [org.ovirt.engine.core.vdsbroker.MigrateVDSCommand] (org.ovirt.thread.pool-8-thread-20) [3b191496] START, MigrateVDSCommand(HostName = virt2, HostId = 1d1d1fbb-3067-4703-8b51-e0a231d344e6, vmId=9de649ca-c9a9-4ba7-bb2c-61c44e2819af, srcHost=192.168.0.35, dstVdsId=3429b1fc-36d5-4078-831c-a5b4370a8bfc, dstHost=192.168.0.36:54321, migrationMethod=ONLINE, tunnelMigration=false, migrationDowntime=0), log id: 7555acbd
2015-04-06 08:31:56,728 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand] (org.ovirt.thread.pool-8-thread-20) [3b191496] START, MigrateBrokerVDSCommand(HostName = virt2, HostId = 1d1d1fbb-3067-4703-8b51-e0a231d344e6, vmId=9de649ca-c9a9-4ba7-bb2c-61c44e2819af, srcHost=192.168.0.35, dstVdsId=3429b1fc-36d5-4078-831c-a5b4370a8bfc, dstHost=192.168.0.36:54321, migrationMethod=ONLINE, tunnelMigration=false, migrationDowntime=0), log id: 6d98fb94
2015-04-06 08:31:56,734 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand] (org.ovirt.thread.pool-8-thread-20) [3b191496] FINISH, MigrateBrokerVDSCommand, log id: 6d98fb94
2015-04-06 08:31:56,769 INFO [org.ovirt.engine.core.vdsbroker.MigrateVDSCommand] (org.ovirt.thread.pool-8-thread-20) [3b191496] FINISH, MigrateVDSCommand, return: MigratingFrom, log id: 7555acbd
2015-04-06 08:31:56,778 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (org.ovirt.thread.pool-8-thread-20) [3b191496] Correlation ID: 3b191496, Job ID: 0f8c2d21-201e-454f-9876-dce9a1ca56fd, Call Stack: null, Custom Event ID: -1, Message: Migration started (VM: nindigo, Source: virt2, Destination: virt3, User: admin@internal).
2015-04-06 08:33:17,633 INFO [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] (DefaultQuartzScheduler_Worker-35) [71f97a52] VM nindigo 9de649ca-c9a9-4ba7-bb2c-61c44e2819af moved from MigratingFrom --> Up
2015-04-06 08:33:17,633 INFO [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] (DefaultQuartzScheduler_Worker-35) [71f97a52] Adding VM 9de649ca-c9a9-4ba7-bb2c-61c44e2819af to re-run list
2015-04-06 08:33:17,661 ERROR [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] (DefaultQuartzScheduler_Worker-35) [71f97a52] Rerun vm 9de649ca-c9a9-4ba7-bb2c-61c44e2819af. Called from vds virt2
2015-04-06 08:33:17,666 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand] (org.ovirt.thread.pool-8-thread-38) [71f97a52] START, MigrateStatusVDSCommand(HostName = virt2, HostId = 1d1d1fbb-3067-4703-8b51-e0a231d344e6, vmId=9de649ca-c9a9-4ba7-bb2c-61c44e2819af), log id: 6c3c9923
2015-04-06 08:33:17,669 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand] (org.ovirt.thread.pool-8-thread-38) [71f97a52] Failed in MigrateStatusVDS method
2015-04-06 08:33:17,670 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand] (org.ovirt.thread.pool-8-thread-38) [71f97a52] Command org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand return value StatusOnlyReturnForXmlRpc [mStatus=StatusForXmlRpc [mCode=12, mMessage=Fatal error during migration]]
2015-04-06 08:33:17,670 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand] (org.ovirt.thread.pool-8-thread-38) [71f97a52] HostName = virt2
2015-04-06 08:33:17,670 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand] (org.ovirt.thread.pool-8-thread-38) [71f97a52] Command MigrateStatusVDSCommand(HostName = virt2, HostId = 1d1d1fbb-3067-4703-8b51-e0a231d344e6, vmId=9de649ca-c9a9-4ba7-bb2c-61c44e2819af) execution failed. Exception: VDSErrorException: VDSGenericException: VDSErrorException: Failed to
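A rough way to dig further on the hosts themselves, using the VM id from the engine log above (the log location is the standard vdsm path):

    # On both source and destination hosts, pull the entries for this VM
    grep '9de649ca-c9a9-4ba7-bb2c-61c44e2819af' /var/log/vdsm/vdsm.log

    # Or look at migration-related messages with some context
    grep -i -B2 -A5 'migration' /var/log/vdsm/vdsm.log | less

The engine-side "Fatal error during migration, code = 12" is generic; the concrete failure reason usually appears only in the vdsm or libvirt logs on the hosts.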
[ovirt-users] oVirt 3.1 to 3.2 upgrade problem
After upgrading the oVirt engine from 3.1 to 3.2, all of my vdsm nodes go into the non-responding state. The error message shows:

This host is in non responding state. Try to Activate it; If the problem persists, switch Host to Maintenance mode and try to reinstall it.

I have tried to reinstall multiple times but still no success. Please help.

Regards
Suvro
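Hard to say without logs, but a first-pass sketch of where to look (these are the standard log locations for that release line):

    # On each non-responsive host: is vdsm actually running, and what does it say?
    service vdsmd status
    tail -n 50 /var/log/vdsm/vdsm.log

    # On the engine: why does it consider the hosts unreachable?
    tail -n 50 /var/log/ovirt-engine/engine.log

A 3.1 to 3.2 engine upgrade can also leave hosts on an incompatible vdsm version, so comparing the vdsm version on the hosts against what the 3.2 engine expects is worth doing before more reinstall attempts.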
[ovirt-users] Modules sanlock are not configured
Hi all,

I have CentOS 7 running and I added the oVirt 3.5 repo to it. I opened the oVirt Manager and added a new system to it. The manager says "installing" and after that it fails to connect. Looking on the system I see:

Mar 30 21:09:37 vdsmd_init_common.sh[7106]: vdsm: Running mkdirs
Mar 30 21:09:37 vdsmd_init_common.sh[7106]: vdsm: Running configure_coredump
Mar 30 21:09:37 vdsmd_init_common.sh[7106]: vdsm: Running configure_vdsm_logs
Mar 30 21:09:37 vdsmd_init_common.sh[7106]: vdsm: Running wait_for_network
Mar 30 21:09:37 vdsmd_init_common.sh[7106]: vdsm: Running run_init_hooks
Mar 30 21:09:37 vdsmd_init_common.sh[7106]: vdsm: Running upgraded_version_check
Mar 30 21:09:37 vdsmd_init_common.sh[7106]: vdsm: Running check_is_configured
Mar 30 21:09:37 vdsmd_init_common.sh[7106]: Error:
Mar 30 21:09:37 vdsmd_init_common.sh[7106]: One of the modules is not configured to work with VDSM.
Mar 30 21:09:37 vdsmd_init_common.sh[7106]: To configure the module use the following:
Mar 30 21:09:37 vdsmd_init_common.sh[7106]: 'vdsm-tool configure [--module module-name]'.
Mar 30 21:09:37 vdsmd_init_common.sh[7106]: If all modules are not configured try to use:
Mar 30 21:09:37 vdsmd_init_common.sh[7106]: 'vdsm-tool configure --force'
Mar 30 21:09:37 vdsmd_init_common.sh[7106]: (The force flag will stop the module's service and start it
Mar 30 21:09:37 vdsmd_init_common.sh[7106]: afterwards automatically to load the new configuration.)
Mar 30 21:09:37 vdsmd_init_common.sh[7106]: libvirt is already configured for vdsm
Mar 30 21:09:37 vdsmd_init_common.sh[7106]: Modules sanlock are not configured
Mar 30 21:09:37 vdsmd_init_common.sh[7106]: vdsm: stopped during execute check_is_configured task (task returned with error code 1).

So I run "vdsm-tool configure" after I stop sanlock:

# vdsm-tool configure
Checking configuration status...
libvirt is already configured for vdsm
SUCCESS: ssl configured to true. No conflicts
Running configure...
Reconfiguration of sanlock is done.
Done configuring modules to VDSM.

But when I want to start vdsmd it still gives the error that sanlock is not configured. Does somebody have a solution for this? I am a bit lost on this… Google only tells me that there was a bug in 3.4.

Thanks in advance,
Jurriën
Re: [ovirt-users] Modules sanlock are not configured
- Original Message -
> Hi all,
> I have CentOS 7 running and I added the oVirt 3.5 repo to it. I opened the oVirt Manager and added a new system to it. The manager says "installing" and after that it fails to connect.
> [Quoted log and vdsm-tool output snipped; see the original message above.]
> But when I want to start vdsmd it still gives the error that sanlock is not configured. Does somebody have a solution for this?

This thread from a few days ago might be relevant:

https://www.mail-archive.com/users@ovirt.org/msg25531.html

especially the "groups sanlock" part.
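To spell out the "groups sanlock" idea from that thread as a sketch (the exact group set vdsm expects should be verified against your install; treat the usermod line as an example, not a prescription):

    # vdsm's check inspects the groups of the running sanlock daemon
    id sanlock

    # If the expected supplementary groups are missing, add them...
    usermod -a -G qemu,kvm sanlock    # example group set; verify for your version

    # ...and restart sanlock so the daemon picks up the new groups, then retry vdsmd
    systemctl restart sanlock
    systemctl start vdsmd

The restart matters: if the sanlock daemon was started before the group change, the configuration check can keep failing even though "vdsm-tool configure" reported success.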
Re: [ovirt-users] Migration failed, No available host found
Hi Artyom,

Here are the vdsm logs from virt1, virt2 (where the node is running), and virt3. The logs from virt2 look suspicious, but I'm still not sure of the problem.

http://goo.gl/GjbWUP

Jason.

On 04/06/2015 09:42 AM, Artyom Lukianov wrote:
> The engine tries to migrate the vm to some available host, but the migration failed, so the engine tries another host. For some reason the migration failed on all hosts:
> (org.ovirt.thread.pool-8-thread-38) [71f97a52] Command MigrateStatusVDSCommand(HostName = virt2, HostId = 1d1d1fbb-3067-4703-8b51-e0a231d344e6, vmId=9de649ca-c9a9-4ba7-bb2c-61c44e2819af) execution failed. Exception: VDSErrorException: VDSGenericException: VDSErrorException: Failed to MigrateStatusVDS, error = Fatal error during migration, code = 12
> For further investigation we need the vdsm logs (/var/log/vdsm/vdsm.log) from the source and also from the destination hosts.

[Rest of quoted thread and engine.log snipped; see the earlier messages in this thread.]
Re: [ovirt-users] Migration failed, No available host found
Hi Artyom,

The problems were caused by an issue with the MTU on the hosts. I have rectified the issue and can now migrate VMs between hosts.

Jason.

On 04/06/2015 10:57 AM, Jason Keltz wrote:
> Hi Artyom,
> Here are the vdsm logs from virt1, virt2 (where the node is running), and virt3. The logs from virt2 look suspicious, but I'm still not sure of the problem.
> http://goo.gl/GjbWUP
> Jason.

[Rest of quoted thread and engine.log snipped; see the earlier messages in this thread.]
Re: [ovirt-users] VMs freezing during heals
I hadn’t revisited it yet, but it is possible to use cgroups to limit glusterfs’s CPU usage, which might help you out. Andrew Klau has a blog post about it (a short sketch of the approach follows at the end of this thread):

http://www.andrewklau.com/controlling-glusterfsd-cpu-outbreaks-with-cgroups/

Be careful about how far you throttle it down: if it’s your VM’s disk that is rebuilding, I’d expect you’ll pause the VM anyway.

On Apr 4, 2015, at 8:57 AM, Jorick Astrego j.astr...@netbulae.eu wrote:
> On 04/03/2015 10:04 PM, Alastair Neil wrote:
>> Any follow up on this? Are there known issues using a replica 3 gluster datastore with lvm thin provisioned bricks?
>>
>> On 20 March 2015 at 15:22, Alastair Neil ajneil.t...@gmail.com wrote:
>>> CentOS 6.6
>>> vdsm-4.16.10-8.gitc937927.el6
>>> glusterfs-3.6.2-1.el6
>>> 2.6.32-504.8.1.el6.x86_64
>>> I moved to 3.6 specifically to get the snapshotting feature, hence my desire to migrate to thinly provisioned lvm bricks.
>
> Well, on the glusterfs mailing list there have been discussions: 3.6.2 is a major release and introduces some new features in a cluster-wide concept. Additionally, it is not stable yet.

On 20 March 2015 at 14:57, Darrell Budic bu...@onholyground.com wrote:
> What version of gluster are you running on these? I’ve seen high load during heals bounce my hosted engine around due to overall system load, but never pause anything else. CentOS 7 combo storage/host systems, gluster 3.5.2.

On Mar 20, 2015, at 9:57 AM, Alastair Neil ajneil.t...@gmail.com wrote:
> Pranith, I have run a pretty straightforward test. I created a two-brick 50 GB replica volume with normal lvm bricks, and installed two servers, one CentOS 6.6 and one CentOS 7.0. I kicked off bonnie++ on both to generate some file system activity and then made the volume replica 3. I saw no issues on the servers. It is not clear if this is a sufficiently rigorous test; the volume I have had issues on is a 3 TB volume with about 2 TB used.
> -Alastair

On 19 March 2015 at 12:30, Alastair Neil wrote:
> I don't think I have the resources to test it meaningfully. I have about 50 vms on my primary storage domain. I might be able to set up a small 50 GB volume and provision 2 or 3 vms running test loads, but I'm not sure it would be comparable. I'll give it a try and let you know if I see similar behaviour.

On 19 March 2015 at 11:34, Pranith Kumar Karampuri pkara...@redhat.com wrote:
> Without thinly provisioned lvm.
> Pranith

On 03/19/2015 08:01 PM, Alastair Neil wrote:
> Do you mean raw partitions as bricks, or simply without thin provisioned lvm?

On 19 March 2015 at 00:32, Pranith Kumar Karampuri wrote:
> Could you let me know if you see this problem without lvm as well?
> Pranith

On 03/18/2015 08:25 PM, Alastair Neil wrote:
> I am in the process of replacing the bricks with thinly provisioned lvs, yes.

On 18 March 2015 at 09:35, Pranith Kumar Karampuri wrote:
> hi, Are you using a thin-lvm based backend on which the bricks are created?
> Pranith

On 03/18/2015 02:05 AM, Alastair Neil wrote:
> I have an oVirt cluster with 6 VM hosts and 4 gluster nodes. There are two virtualisation clusters, one with two Nehalem nodes and one with four Sandy Bridge nodes. My master storage domain is a GlusterFS backed by a replica 3 gluster volume from 3 of the gluster nodes. The engine is a hosted engine 3.5.1 on 3 of the Sandy Bridge nodes, with storage provided by nfs from a different gluster volume.
> All the hosts are CentOS 6.6:
> vdsm-4.16.10-8.gitc937927.el6
> glusterfs-3.6.2-1.el6
> 2.6.32-504.8.1.el6.x86_64
>
> Problems happen when I try to add a new brick or replace a brick: eventually the self heal will kill the VMs. In the VMs' logs I see kernel hung task messages:
>
> Mar 12 23:05:16 static1 kernel: INFO: task nginx:1736 blocked for more than 120 seconds.
> Mar 12 23:05:16 static1 kernel: Not tainted 2.6.32-504.3.3.el6.x86_64 #1
> Mar 12 23:05:16 static1 kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> Mar 12 23:05:16 static1 kernel: nginx D 0001 0 1736 1735 0x0080
> Mar 12 23:05:16 static1 kernel: 8800778b17a8 0082 000126c0
> Mar 12 23:05:16 static1 kernel: 88007e5c6500 880037170080 0006ce5c85bd9185 88007e5c64d0
> Mar 12 23:05:16 static1 kernel: 88007a614ae0 0001722b64ba 88007a615098 8800778b1fd8
> Mar 12 23:05:16 static1 kernel: Call Trace:
> Mar 12 23:05:16 static1 kernel: [8152a885] schedule_timeout+0x215/0x2e0
> Mar 12 23:05:16 static1 kernel: [8152a503] wait_for_common+0x123/0x180
> Mar 12 23:05:16 static1 kernel: [81064b90] ?
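Returning to the cgroup suggestion at the top of this thread, a minimal sketch of the approach from that blog post (the group name and share value are illustrative; on el6 the cpu controller is typically mounted at /cgroup/cpu rather than /sys/fs/cgroup/cpu):

    # Create a cpu cgroup for the gluster brick daemons and reduce its share
    mkdir /sys/fs/cgroup/cpu/glusterfsd
    echo 256 > /sys/fs/cgroup/cpu/glusterfsd/cpu.shares    # default is 1024

    # Move the running glusterfsd processes into the group
    for pid in $(pidof glusterfsd); do
        echo "$pid" > /sys/fs/cgroup/cpu/glusterfsd/tasks
    done

Note that cpu.shares only throttles relative to other runnable tasks under CPU contention, so it softens heal storms without capping gluster when the box is otherwise idle.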