[Users] cannot assign network without IP address
Hi,

I'm using oVirt 3.1 on CentOS 6.3 and have the following issue: if I add a logical network to the datacenter, I cannot assign it to a host without giving it an IP address. I just want to use the logical network as a bridge without specifying an IP address (the "None" option in the network settings). This is the error message:

    Error while executing action Setup Networks: Illegal or Incomplete IP Address

In the log on the engine I see the following error:

    VDSGenericException: VDSNetworkException: Specified netmask or gateway but not ip

To me this is completely bogus, since none of my NICs have any IP settings defined. Only the ovirtmgmt network has them, copied from the initial network setup during the ovirt-engine install. I also found this post, which discusses the same problem: http://www.mail-archive.com/users@ovirt.org/msg06261.html

When adding the logical network to the datacenter I also cannot unset the "VM Network" option. In our RHEV setup we can do this perfectly.

Regards,
Vincent

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
Re: [Users] way to edit iSCSI storage domain?
I guess you misunderstood me. I'm going to try this scheme:

            |STORAGE|
          FC /     \ FC
    |SERV1/tgtd| |SERV2/tgtd|
       iSCSI \     / iSCSI
      |ethernet switches|
              |
     |blades|blades|blades|

serv1/serv2 connectivity isn't a problem: multipathed FC scheme, all good. The same LUN is accessible to both servers and then exported via tgtd to iSCSI, with different target names (iqn.2013-03.serv1:store, iqn.2013-03.serv2:store) but the same vendor_id, product_id, scsi_sn, and scsi_id. That way a client can log in to both targets and see the LUN as a multipathed device. The multipath failover scheme (via a custom config with path_grouping_policy=failover for the corresponding vendor_id/product_id) is on the blade clients, so they use only one target at a time (no round-robin or similar), but with the ability to switch to the other target in case one of serv1/serv2 is down.

However, in my case serv2 would not be available during oVirt setup (I need to set up oVirt and the virtual servers to move stuff first), so I can't enter both targets at storage domain initialization. That's why I'm asking if there is any way to edit storage domain details after initialization without destroying it (maybe directly via SQL or something).

Yuriy Demchenko

On 04/02/2013 06:26 PM, Shu Ming wrote:
> I am not sure if multipathd will recognize the FC path to the storage when the second server becomes available and regard it as the same as the iSCSI path used before. If not, I think the device under /dev/mapper may change when you cut the iSCSI path off and then enable the FC path. That will definitely corrupt the metadata of the volume group the storage domain is sitting on, and the storage domain will eventually be corrupted.
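The failover policy described above could be pinned in /etc/multipath.conf on the blade clients roughly like this. This is a sketch: the vendor/product strings are assumptions (tgtd targets commonly report "IET"/"VIRTUAL-DISK"), so substitute whatever your targets actually report.

```
devices {
    device {
        # Match the values your tgtd targets report (assumed here)
        vendor               "IET"
        product              "VIRTUAL-DISK"
        # Use one path (target) at a time; fail over rather than round-robin
        path_grouping_policy failover
        # Queue I/O while no path is up, instead of failing it
        no_path_retry        queue
    }
}
```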
Re: [Users] Getting the following error when creating rpm on CentOS
On 03/29/2013 07:29 AM, qyddbear wrote:
> Hi, I am trying to create an rpm using ovirt-engine-3.1.0-3.26.3.el6.centos.alt.src.rpm on CentOS 6.3. After running rpmbuild -ba ovirt-engine.spec, I got an error message like this:

Version 3.1.0 is not prepared to be built on CentOS. If you want to build it, you will need to apply the patches prepared by Dreyou: http://www.dreyou.org/ovirt/ Or you can use version 3.2.1, which already includes similar changes.

> *** Deploying service
> # Install the files:
> install -dm 755 /root/rpmbuild/BUILDROOT/ovirt-engine-3.1.0-3.26.3.el6.x86_64/usr/share/ovirt-engine/service
> install -m 644 packaging/fedora/engine-service.xml.in /root/rpmbuild/BUILDROOT/ovirt-engine-3.1.0-3.26.3.el6.x86_64/usr/share/ovirt-engine/service
> install -m 644 packaging/fedora/engine-service-logging.properties /root/rpmbuild/BUILDROOT/ovirt-engine-3.1.0-3.26.3.el6.x86_64/usr/share/ovirt-engine/service
> install -m 755 packaging/fedora/engine-service.py /root/rpmbuild/BUILDROOT/ovirt-engine-3.1.0-3.26.3.el6.x86_64/usr/share/ovirt-engine/service
> install -m 644 packaging/fedora/engine-service.sysconfig /root/rpmbuild/BUILDROOT/ovirt-engine-3.1.0-3.26.3.el6.x86_64/etc/sysconfig/ovirt-engine
> install -m 644 packaging/fedora/engine-service.limits /root/rpmbuild/BUILDROOT/ovirt-engine-3.1.0-3.26.3.el6.x86_64/etc/security/limits.d/10-ovirt-engine.conf
> install -m 755 packaging/fedora/engine-service.systemv /root/rpmbuild/BUILDROOT/ovirt-engine-3.1.0-3.26.3.el6.x86_64/etc/rc.d/init.d/ovirt-engine
> # Install the links:
> ln -s /usr/share/ovirt-engine/service/engine-service.py /root/rpmbuild/BUILDROOT/ovirt-engine-3.1.0-3.26.3.el6.x86_64/usr/bin/engine-service
> + install -dm 755 /root/rpmbuild/BUILDROOT/ovirt-engine-3.1.0-3.26.3.el6.x86_64//var/lib/ovirt-engine/deployments
> + install -dm 755 /root/rpmbuild/BUILDROOT/ovirt-engine-3.1.0-3.26.3.el6.x86_64//var/lib/ovirt-engine/content
> + install -dm 755 /root/rpmbuild/BUILDROOT/ovirt-engine-3.1.0-3.26.3.el6.x86_64//var/log/ovirt-engine/notifier /root/rpmbuild/BUILDROOT/ovirt-engine-3.1.0-3.26.3.el6.x86_64//var/log/ovirt-engine/engine-manage-domains
> + install -dm 755 /root/rpmbuild/BUILDROOT/ovirt-engine-3.1.0-3.26.3.el6.x86_64//var/run/ovirt-engine/notifier
> + install -dm 755 /root/rpmbuild/BUILDROOT/ovirt-engine-3.1.0-3.26.3.el6.x86_64//var/lock/ovirt-engine
> + for war in restapi userportal webadmin
> + sed -i 's#<transport-guarantee>NONE</transport-guarantee>#<transport-guarantee>CONFIDENTIAL</transport-guarantee>#' /root/rpmbuild/BUILDROOT/ovirt-engine-3.1.0-3.26.3.el6.x86_64/usr/share/ovirt-engine/engine.ear/restapi.war/WEB-INF/web.xml
> + for war in restapi userportal webadmin
> + sed -i 's#<transport-guarantee>NONE</transport-guarantee>#<transport-guarantee>CONFIDENTIAL</transport-guarantee>#' /root/rpmbuild/BUILDROOT/ovirt-engine-3.1.0-3.26.3.el6.x86_64/usr/share/ovirt-engine/engine.ear/userportal.war/WEB-INF/web.xml
> + for war in restapi userportal webadmin
> + sed -i 's#<transport-guarantee>NONE</transport-guarantee>#<transport-guarantee>CONFIDENTIAL</transport-guarantee>#' /root/rpmbuild/BUILDROOT/ovirt-engine-3.1.0-3.26.3.el6.x86_64/usr/share/ovirt-engine/engine.ear/webadmin.war/WEB-INF/web.xml
> + for pom in '/root/rpmbuild/BUILDROOT/ovirt-engine-3.1.0-3.26.3.el6.x86_64/usr/share/maven2/poms/*.pom'
> ++ dirname /root/rpmbuild/BUILDROOT/ovirt-engine-3.1.0-3.26.3.el6.x86_64/usr/share/maven2/poms/ovirt-engine-backend.pom
> + pomdir=/root/rpmbuild/BUILDROOT/ovirt-engine-3.1.0-3.26.3.el6.x86_64/usr/share/maven2/poms
> ++ basename /root/rpmbuild/BUILDROOT/ovirt-engine-3.1.0-3.26.3.el6.x86_64/usr/share/maven2/poms/ovirt-engine-backend.pom
> + pom=ovirt-engine-backend.pom
> + jpppom=JPP.ovirt-engine-backend.pom
> + mv /root/rpmbuild/BUILDROOT/ovirt-engine-3.1.0-3.26.3.el6.x86_64/usr/share/maven2/poms/ovirt-engine-backend.pom /root/rpmbuild/BUILDROOT/ovirt-engine-3.1.0-3.26.3.el6.x86_64/usr/share/maven2/poms/JPP.ovirt-engine-backend.pom
> ++ sed -e 's/^ovirt-engine-//' -e 's/\.pom//'
> ++ echo ovirt-engine-backend.pom
> + artifact_id=backend
> + '[' -f /root/rpmbuild/BUILDROOT/ovirt-engine-3.1.0-3.26.3.el6.x86_64/usr/share/java/ovirt-engine/backend.jar ']'
> + %add_maven_depmap JPP.ovirt-engine-backend.pom
> /var/tmp/rpm-tmp.hR92jC: line 86: fg: no job control
> error: Bad exit status from /var/tmp/rpm-tmp.hR92jC (%install)
>
> RPM build errors:
>     Bad exit status from /var/tmp/rpm-tmp.hR92jC (%install)
>
> And here is rpm-tmp.hR92jC:
>
> #!/bin/sh
> RPM_SOURCE_DIR=/root/rpmbuild/SOURCES
> RPM_BUILD_DIR=/root/rpmbuild/BUILD
> RPM_OPT_FLAGS="-O2 -g"
> RPM_ARCH=x86_64
> RPM_OS=linux
> export RPM_SOURCE_DIR RPM_BUILD_DIR RPM_OPT_FLAGS RPM_ARCH RPM_OS
> RPM_DOC_DIR=/usr/share/doc
> export RPM_DOC_DIR
> RPM_PACKAGE_NAME=ovirt-engine
> RPM_PACKAGE_VERSION=3.1.0
> RPM_PACKAGE_RELEASE=3.26.3.el6
> export RPM_PACKAGE_NAME RPM_PACKAGE_VERSION RPM_PACKAGE_RELEASE
> LANG=C
> export LANG
> unset CDPATH DISPLAY ||:
> RPM_BUILD_ROOT=/root/rpmbuild/BUILDROOT/ovirt-engine-3.1.0-3.26.3.el6.x86_64
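The "fg: no job control" failure above is the usual symptom of an RPM macro that was never expanded: `%add_maven_depmap` is not defined on stock CentOS 6 (it ships with Fedora's Java packaging tooling), so the literal `%add_maven_depmap ...` line lands in the %install shell script and the shell misparses the leading `%` as job-control syntax. A minimal sketch of spotting that situation; the sample line is taken from the log above, and on a real build host `rpm --eval '%add_maven_depmap'` would show the same unexpanded text:

```shell
# A scriptlet line still beginning with "%macro" means the macro is
# undefined on this host; rpm passed it through verbatim to /bin/sh.
line='%add_maven_depmap JPP.ovirt-engine-backend.pom'
case "$line" in
  %*) echo "unexpanded macro: ${line%% *}" ;;
  *)  echo "macro was expanded" ;;
esac
```

If the check fires, install the patches or the packaging tooling that defines the macro (or build 3.2.1 as suggested above) before re-running rpmbuild.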
Re: [Users] way to edit iSCSI storage domain?
I think I'd just add the 2nd path when the device is available... I've recently experimented with iscsi/tgtd and multipath on an oVirt hypervisor, and it will identify the disk as the same (a new path to the target) as long as the LUN ID is the same (this is taken from experience, not from a spec document)...

On 3 April 2013 08:14, Yuriy Demchenko demchenko...@gmail.com wrote:
> I guess you misunderstood me. I'm going to try this scheme: [full quote of the earlier message snipped]

--
| RHCE | Sen Sys Engineer / Platform Architect
| www.vcore.co | www.vsearchcloud.com |
Re: [Users] does ovirt is integrated with svirt
I think a better question is: will oVirt eventually integrate into OpenStack?

On 3 April 2013 09:55, bigclouds bigclo...@163.com wrote:
> hi, all: is oVirt integrated with sVirt? thanks

--
| RHCE | Sen Sys Engineer / Platform Architect
| www.vcore.co | www.vsearchcloud.com |
Re: [Users] Storage problem
Hello Maor,

I have already been able to recover my system and my NFS domains.

Many thanks,
Juanjo.

On Tue, Mar 26, 2013 at 9:11 PM, Juan Jose jj197...@gmail.com wrote:
> Many thanks, Maor. Yes, my Host is UP without problems. I will attach a new Storage as Master domain and see whether after that I am able to delete the VMs and VHDs. Thanks again; I will report on the progress.
> Juanjo.

On Thu, Mar 21, 2013 at 5:13 PM, Maor Lipchuk mlipc...@redhat.com wrote:
> On 03/21/2013 06:12 PM, Maor Lipchuk wrote:
>> Did the VDSM restart make any change?
> If your host is UP and running, you can try to re-initialize your data center (by right-clicking on the Data Center) and pick a new storage domain to be the master. You will first need to add a new storage domain to the setup. This will make your DC active again, but you will still need to delete the disks from the old storage. If your old storage can be active after the reinitialize, there will be no problem deleting the disks.

> On 03/21/2013 06:05 PM, Juan Jose wrote:
>> I'm using oVirt 3.1. I have checked my storage domain and, because of the power failure I had, I have lost all my VMs in the storage. Now I would like to know how I can delete my VMs from the Admin portal. For now that is impossible, because the Default Data Center is down since I can't attach the Master domain. I'm in a kind of loop that I don't know how to fix. I guess it is necessary to delete the VMs from the DB directly, but I'm asking if there is some kind of workaround for these situations. Many thanks in advance, Juanjo.

> On Thu, Mar 21, 2013 at 4:52 PM, Maor Lipchuk mlipc...@redhat.com wrote:
>> Thanks for the logs. What oVirt version are you using? Can you try to restart the VDSM service on your host?
>> Regards, Maor

> On 03/21/2013 01:16 PM, Juan Jose wrote:
>> Hello Maor, I have tried to apply the bug procedure and nothing happens, and when I launch the query:
>> psql -U engine -c "SELECT option_value FROM vdc_options where option_name = 'AutoRecoveryAllowedTypes';" engine
>> the result is:
>> option_value
>> ------------
>> (0 rows)
>> And my Master Storage remains disabled. Attaching engine.log as well. Many thanks, Juanjo.

> On Wed, Mar 20, 2013 at 5:02 PM, Maor Lipchuk wrote:
>> On 03/20/2013 05:58 PM, Maor Lipchuk wrote:
>> Hi Juan, I think you encountered this bug: https://bugzilla.redhat.com/881941 (the log there is quite the same). The auto-recovery process should fix that after 15 minutes, but we need to see if it is enabled in your environment. It should be enabled by default, but just to make sure you can check it in the engine DB with this query:
>> SELECT option_value FROM vdc_options where option_name = 'AutoRecoveryAllowedTypes';

>> On 03/20/2013 05:29 PM, Juan Jose wrote:
>>> I forgot the vdsm.log file. Thanks, Juanjo.
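The query as originally pasted had lost its shell quoting; without quotes the shell splits the SQL statement into separate words, so psql never receives it as a single -c argument. A minimal sketch of the difference (no database needed; `set --` just counts the words the shell would hand to psql):

```shell
# Unquoted: the shell splits the SQL into separate arguments
set -- -U engine -c SELECT option_value FROM vdc_options
echo "unquoted: $# args"   # 7 separate arguments; psql misparses this

# Quoted: the whole statement travels as one -c argument, and the
# database name ("engine") follows as the final positional argument
set -- -U engine -c "SELECT option_value FROM vdc_options WHERE option_name = 'AutoRecoveryAllowedTypes';" engine
echo "quoted: $# args"     # 5 arguments: -U engine -c <sql> <dbname>
```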
Re: [Users] way to edit iSCSI storage domain?
You mean add the new path by hand on each node via iscsiadm? And how would those changes survive possible node reboots/reinstalls? As I suppose, they wouldn't. In the oVirt webadmin I cannot edit an added domain: the connection information is greyed out (even when the storage domain is in maintenance mode).

Yuriy Demchenko

On 04/03/2013 01:00 PM, Alex Leonhardt wrote:
> I think I'd just add the 2nd path when the device is available... I've recently experimented with iscsi/tgtd and multipath on an oVirt hypervisor, and it will identify the disk as the same (a new path to the target) as long as the LUN ID is the same (this is taken from experience, not from a spec document)...
> [earlier quoted messages snipped]
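For the "by hand" route, the standard open-iscsi commands would look roughly like this. A sketch only: the portal address 192.0.2.2 is a placeholder, the target name is taken from the scheme above, and whether such hand-added sessions survive an oVirt node reinstall is exactly the open question in this thread.

```
# Discover and log in to the second target (placeholder portal address)
iscsiadm -m discovery -t sendtargets -p 192.0.2.2:3260
iscsiadm -m node -T iqn.2013-03.serv2:store -p 192.0.2.2:3260 --login

# Make the login persist across reboots of this node
iscsiadm -m node -T iqn.2013-03.serv2:store -p 192.0.2.2:3260 \
    --op update -n node.startup -v automatic

# Verify that multipath now sees a second path to the same LUN
multipath -ll
```

A node reinstall would wipe this state, which is why fixing the connection on the engine side (or webadmin support for editing it) is preferable.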
Re: [Users] AllInOne installation issue
On 03/27/2013 10:59 PM, James A. Peltier wrote:
> Verify you aren't getting any locale errors, such as LANG not being found or set. This stopped me from moving forward, because the locale was set improperly during kickstart. Setting the locale correctly in /etc/locale.conf fixed it for me.

James - is there a bug filed for the issue you hit?

Thanks,
Itamar

> When running the AllInOne installation "engine-setup --config-allinone=yes", the last step of my installation always fails. I freshly set up a Fedora 18 box (minimal) and directly installed oVirt:
>
> sudo yum localinstall http://ovirt.org/releases/ovirt-release-fedora.noarch.rpm
> sudo yum install ovirt-engine-setup-plugin-allinone
> engine-setup --config-allinone=yes
>
> I also tried with the nightly build, which did not solve the issue for me. Any idea how to fix this? This is the end of my install log:
>
> 2013-03-26 20:28:07::DEBUG::engine-setup::1965::root:: *** The following params were used as user input:
> 2013-03-26 20:28:07::DEBUG::engine-setup::1970::root:: override-httpd-config: yes
> 2013-03-26 20:28:07::DEBUG::engine-setup::1970::root:: http-port: 80
> 2013-03-26 20:28:07::DEBUG::engine-setup::1970::root:: https-port: 443
> 2013-03-26 20:28:07::DEBUG::engine-setup::1970::root:: random-passwords: no
> 2013-03-26 20:28:07::DEBUG::engine-setup::1970::root:: mac-range: 00:1A:4A:A8:02:00-00:1A:4A:A8:02:FF
> 2013-03-26 20:28:07::DEBUG::engine-setup::1970::root:: host-fqdn: localhost.localdomain
> 2013-03-26 20:28:07::DEBUG::engine-setup::1970::root:: auth-pass:
> 2013-03-26 20:28:07::DEBUG::engine-setup::1970::root:: org-name: localdomain
> 2013-03-26 20:28:07::DEBUG::engine-setup::1970::root:: application-mode: both
> 2013-03-26 20:28:07::DEBUG::engine-setup::1970::root:: default-dc-type: NFS
> 2013-03-26 20:28:07::DEBUG::engine-setup::1970::root:: db-remote-install: local
> 2013-03-26 20:28:07::DEBUG::engine-setup::1970::root:: db-host: localhost
> 2013-03-26 20:28:07::DEBUG::engine-setup::1970::root:: db-local-pass:
> 2013-03-26 20:28:07::DEBUG::engine-setup::1970::root:: nfs-mp: /var/lib/exports/iso
> 2013-03-26 20:28:07::DEBUG::engine-setup::1970::root:: iso-domain-name: ISO_DOMAIN
> 2013-03-26 20:28:07::DEBUG::engine-setup::1970::root:: config-nfs: yes
> 2013-03-26 20:28:07::DEBUG::engine-setup::1970::root:: override-firewall: None
> 2013-03-26 20:28:07::DEBUG::engine-setup::1970::root:: config-allinone: yes
> 2013-03-26 20:28:07::DEBUG::engine-setup::1970::root:: storage-path: /var/lib/images
> 2013-03-26 20:28:07::DEBUG::engine-setup::1970::root:: superuser-pass:
> 2013-03-26 20:28:07::ERROR::engine-setup::2385::root:: Traceback (most recent call last):
>   File "/bin/engine-setup", line 2379, in <module>
>     main(confFile)
>   File "/bin/engine-setup", line 2162, in main
>     runSequences()
>   File "/bin/engine-setup", line 2085, in runSequences
>     controller.runAllSequences()
>   File "/usr/share/ovirt-engine/scripts/setup_controller.py", line 54, in runAllSequences
>     sequence.run()
>   File "/usr/share/ovirt-engine/scripts/setup_sequences.py", line 154, in run
>     step.run()
>   File "/usr/share/ovirt-engine/scripts/setup_sequences.py", line 60, in run
>     function()
>   File "/usr/share/ovirt-engine/scripts/plugins/all_in_one_100.py", line 300, in waitForHostUp
>     utils.retry(isHostUp, tries=120, timeout=600, sleep=5)
>   File "/usr/share/ovirt-engine/scripts/common_utils.py", line 1009, in retry
>     raise e
> RetryFailException: Error: Host was found in a 'Failed' state. Please check engine and bootstrap installation logs.

--
James A. Peltier
Manager, IT Services - Research Computing Group
Simon Fraser University - Burnaby Campus
Phone: 778-782-6573
Fax: 778-782-3045
E-Mail: jpelt...@sfu.ca
Website: http://www.sfu.ca/itservices
"A successful person is one who can lay a solid foundation from the bricks others have thrown at them." -David Brinkley via Luke Shaw
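James's locale fix can be sanity-checked before re-running engine-setup. A minimal sketch; the locale name below is an example, so substitute whatever LANG your kickstart set:

```shell
# engine-setup can fail when LANG points at a locale the system does not
# actually have installed; compare against what `locale -a` lists.
want="en_US.utf8"   # example value; use your own LANG here
if locale -a 2>/dev/null | grep -qix "$want"; then
    echo "locale $want is available"
else
    echo "locale $want missing: set a valid LANG in /etc/locale.conf"
fi
```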
Re: [Users] way to edit iSCSI storage domain?
But you can add an iSCSI disk to an existing iSCSI domain... see the attached screenshot. If the disk you're adding has the same LUN ID as the already existing one, oVirt will just add it as a 2nd/3rd/4th (and so forth) path...

On 3 April 2013 10:49, Yuriy Demchenko demchenko...@gmail.com wrote:
> You mean add the new path by hand on each node via iscsiadm? And how would those changes survive possible node reboots/reinstalls? As I suppose, they wouldn't. In the oVirt webadmin I cannot edit an added domain: the connection information is greyed out (even when the storage domain is in maintenance mode).
> [earlier quoted messages snipped]

--
| RHCE | Sen Sys Engineer / Platform Architect
| www.vcore.co | www.vsearchcloud.com |

attachment: ovirt-iscsi-edit-domain.png
Re: [Users] oVirt 3.2.1 and node 2.6.1
On Wednesday, 03.04.2013, at 16:46 +0200, Fabian Deutsch wrote:
> Hey Martin,
> On Tuesday, 02.04.2013, at 20:57 +0000, martin.krali...@accenture.com wrote:
>> I would like to ask if anybody has had problems adding an oVirt Node 2.6.1 to oVirt 3.2.1.
> Hey, I just tried registering an ovirt-node-2.6.1 instance with engine 3.2.0, and indeed I could not register it, due to some problem on the Node's engine registration page.

I must correct myself: I could register a Node with 3.2.0 (3.2.1 is outstanding, still downloading the image). The issues I was seeing were due to the fact that I tried to add a virtualized node, which fails (because hardware virtualization is missing). But aside from this problem, registration works. Could you explain the issues you are seeing?

This is still relevant: what problem are you seeing?

- fabian
Re: [Users] way to edit iSCSI storage domain?
Yuriy Demchenko wrote:
> I guess you misunderstood me. I'm going to try this scheme:
>
>             |STORAGE|
>           FC /     \ FC
>     |SERV1/tgtd| |SERV2/tgtd|
>        iSCSI \     / iSCSI
>       |ethernet switches|
>               |
>      |blades|blades|blades|

I am still confused: are SERV1/SERV2 used only as an iSCSI bridge to the storage, or also as VDSM hosts? If they are used as VDSM hosts and both of them can have an FC channel to the storage, why not create an FC storage domain for them instead of an iSCSI domain?

> [remainder of the quoted message snipped]

--
舒明 Shu Ming
Open Virtualization Engineering; CSTL, IBM Corp.
Tel: 86-10-82451626  Tieline: 9051626
E-mail: shum...@cn.ibm.com or shum...@linux.vnet.ibm.com
Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian District, Beijing 100193, PRC
Re: [Users] does ovirt is integrated with svirt
On 04/03/2013 11:55 AM, bigclouds wrote:
> hi, all: is oVirt integrated with sVirt? thanks

Yes, using libvirt's sVirt support/capabilities.
Re: [Users] Feature request: allow usbtablet and vram config via UI
On 03/27/2013 05:23 PM, Dead Horse wrote:
> Any thoughts/feedback on this one? It really is a downer to be forced to manipulate both of these via hooks. These are both pretty standard configuration items to enhance/tweak or to make various guests fully functional.

Will track via the bugs. The vram issue for SPICE multi-monitor I can see. For the USB tablet, don't we always enable it for VNC displays?

> - DHC
> On Fri, Mar 22, 2013 at 9:12 AM, Dead Horse deadhorseconsult...@gmail.com wrote:
>> Allow for a usbtablet input device to be enabled and used, perhaps via the console configuration UI for a VM in the admin and user portals.
>> - Highly useful in the event that a guest OS does not have the SPICE agent loaded, nor is it available for said guest OS.
>> - Also in the case of a guest OS that simply does not want to work and play well with <input type='mouse' bus='ps2'/>.
>> - All operating systems since ~1998 understand <input type='tablet' bus='usb'/> and will deal with mouse events in absolute mode.
>> Allow for vram and vram_size to be configurable for Cirrus (VNC) and QXL (SPICE) console types. Again, probably best suited to the console configuration UI for a VM in the admin and user portals.
>> - Default VRAM sizes are at times not enough to allow for larger resolutions and pixel depths.
>> - Especially the case with multiple-monitor SPICE, fullscreen, or VNC consoles.
>> - More VRAM is useful when attempting to run accelerated applications within a QXL-configured guest.
>> The above can be altered via VDSM hooks; however, this is rather painful. Additionally, these options (or similar) are configurable in other competing solutions.
>> - DHC
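For reference, the libvirt domain XML elements being discussed look roughly like this. An illustrative fragment, not what oVirt emits verbatim; the ram/vram values and head count are example numbers (in KiB):

```
<devices>
  <!-- absolute-coordinate pointer; works without a guest agent -->
  <input type='tablet' bus='usb'/>
  <!-- QXL video with explicit memory sizing for multi-monitor SPICE -->
  <video>
    <model type='qxl' ram='65536' vram='65536' heads='2'/>
  </video>
</devices>
```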
[Users] error 400
Keep getting the following error from IE when logged into the oVirt admin console:

    Error: A Request to the Server failed with the following Status Code: 400

I'm getting it from an all-in-one setup and also from a separate Engine/host configuration.