[ovirt-users] oVirt - High Availability
Hi users, I have set up oVirt 3.5 with 2 Red Hat 7.1 hosts. Everything is OK other than HA (High Availability). To test HA, the documentation says Power Management is needed. Could you please let me know if this Power Management is a separate device, or does it come with a branded server such as HP, IBM or Dell? I have seen an iLO port on HP servers. Can I use it for HA (High Availability) in oVirt? If power management is present in branded servers, could you please let me know some branded RHEL 7.1/CentOS 7.1 supported servers? Then I can use them for production use.

This is a YouTube video for HA: https://www.youtube.com/watch?v=uHCnXGUMaS0
Is this a correct video for HA?

I did some research; a few URLs:

http://lists.ovirt.org/pipermail/users/2013-January/011519.html
http://www.ovirt.org/OVirt_Administration_Guide#Host_Resilience
https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Virtualization/3.5/html/Administration_Guide/sect-Improving_Uptime_with_Virtual_Machine_High_Availability.html

What is "Soft-Fencing Hosts"? The oVirt doc (http://www.ovirt.org/OVirt_Administration_Guide#Host_Resilience) gives the below:

Soft-Fencing Hosts
Sometimes a host becomes non-responsive due to an unexpected problem, and though VDSM is unable to respond to requests, the virtual machines that depend upon VDSM remain alive and accessible. In these situations, restarting VDSM returns VDSM to a responsive state and resolves this issue. oVirt 3.3 introduces "soft-fencing over SSH". Prior to oVirt 3.3, non-responsive hosts were fenced only by external fencing devices. In oVirt 3.3, the fencing process has been expanded to include "SSH Soft Fencing", a process whereby oVirt attempts to restart VDSM via SSH on non-responsive hosts. If oVirt fails to restart VDSM via SSH, the responsibility for fencing falls to the external fencing agent, if an external fencing agent has been configured.

But it does NOT say how to set it up. Is there any step-by-step doc for it?
Hope to hear from you.

--
cat /etc/motd
Thank you
Indunil Jayasooriya
http://www.theravadanet.net/
http://www.siyabas.lk/sinhala_how_to_install.html - Download Sinhala Fonts

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
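[Editorial note on the question above] The iLO port is exactly the kind of device oVirt's power management uses: it is a fence-agent target (HP iLO, Dell iDRAC, IBM IMM, or generic IPMI all work). Before configuring it in the engine UI (Hosts > Edit > Power Management), you can sanity-check the agent from another host's shell. A minimal sketch, assuming the fence-agents packages are installed and using placeholder iLO credentials:

```shell
# Placeholder iLO address/credentials -- replace with your own (assumptions).
ILO_ADDR=192.0.2.10
ILO_USER=admin
ILO_PASS=secret

# fence_ilo4 ships in the fence-agents packages on EL7; oVirt's power
# management invokes agents like this one under the hood.
if command -v fence_ilo4 >/dev/null 2>&1; then
    # "-o status" only queries power state; it does not reboot anything.
    fence_ilo4 -a "$ILO_ADDR" -l "$ILO_USER" -p "$ILO_PASS" -o status
    RESULT=checked
else
    echo "fence-agents not installed; skipping live check"
    RESULT=skipped
fi
```

If the status query works from the shell, the same address, user, password and agent type can be entered in the engine's Power Management tab.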
Re: [ovirt-users] vm cannot be started
hi qinglong,

This fixed the problem of vm startup on my oVirt; you can try this. Visit the following URL: http://www.jianshu.com/p/95ae81d9864c

2015-09-25 10:08 GMT+08:00 qinglong.d...@horebdata.cn <qinglong.d...@horebdata.cn>:
> Hi all,
> I have installed ovirt-hosted-engine-setup on one machine and it used iscsi shared storage on another machine. I created and sealed a Windows XP template and then I created a vm based on the template. When the vm was started for the first time I attached the floppy to it and used sysprep to init it. Then I shut down the vm, but after I shut it down I cannot start it. Here are the logs:
>
> VM test is down with error. Exit message: internal error Process exited while reading console log output: char device redirected to /dev/pts/1
>
> 2015-09-25T01:29:31.222028Z qemu-kvm: -drive file=/var/run/vdsm/payload/6be72664-a80c-4d78-9a5b-f2bbe37c5b2e.4ebf24c33f6111e0dae20466f370de53.img,if=none,id=drive-fdc0-0-0,format=raw,serial=: could not open disk image /var/run/vdsm/payload/6be72664-a80c-4d78-9a5b-f2bbe37c5b2e.4ebf24c33f6111e0dae20466f370de53.img: Permission denied.
>
> Anyone can help? Thanks!
> --
> Dolny
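[Editorial note] The "Permission denied" on the sysprep payload image usually points at file ownership or the SELinux label of the generated floppy image. A hedged troubleshooting sketch (the path is the one from the error above; uid/gid 36:36 being vdsm:kvm on oVirt hosts is the usual convention, verify on your host):

```shell
# Payload path taken from the qemu-kvm error above -- the UUIDs change per run.
IMG=/var/run/vdsm/payload/6be72664-a80c-4d78-9a5b-f2bbe37c5b2e.4ebf24c33f6111e0dae20466f370de53.img

if [ -e "$IMG" ]; then
    ls -lZ "$IMG"          # inspect ownership and SELinux label first
    chown 36:36 "$IMG"     # vdsm:kvm -- the user qemu runs as on oVirt hosts
    restorecon -v "$IMG"   # restore the expected SELinux context
    STATE=fixed
else
    echo "payload image not present on this host; nothing to do"
    STATE=absent
fi
```

If the label was wrong, also check `ausearch -m avc -ts recent` for the matching denial to confirm SELinux was the cause.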
[ovirt-users] Is it possible to use vdsClient / virsh to start VMs in the event that the engine is down?
Hello all,

I do not see a way to start VMs in the event that the engine is down. I see that vdsClient -s 0 destroy works to shut them down. Also, is it still possible to use non-read-only virsh commands? I tried using saslpasswd2 to create an account, but that did not seem to work.

Thanks!
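[Editorial note] For the virsh part of the question: on a vdsm host, libvirt is locked down with SASL authentication, so read-write virsh needs an account in libvirt's SASL database. A sketch of creating one, assuming the default oVirt paths (`/etc/libvirt/passwd.db` is the database vdsm configures; verify on your host):

```shell
# Guarded so it only acts on a real vdsm host with the SASL db in place.
if command -v saslpasswd2 >/dev/null 2>&1 && [ -f /etc/libvirt/passwd.db ]; then
    # "-a libvirt" targets libvirt's SASL application database;
    # "admin" is a placeholder account name of our choosing.
    saslpasswd2 -a libvirt -f /etc/libvirt/passwd.db admin
    # virsh will now prompt for the account just created.
    virsh -c qemu:///system list --all
    DONE=yes
else
    echo "not a vdsm host (saslpasswd2 or passwd.db missing); skipping"
    DONE=skipped
fi
```

Note that starting VMs behind the engine's back makes vdsm/engine state drift, so this is strictly an emergency measure.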
Re: [ovirt-users] Migrate VM to new cluster
I've recently done the same thing; two ways are possible:

* offline (safer): edit the parameters of your VMs and change to the desired cluster. If you do the same with an online VM, the engine will tell you it will be effective at the next reboot.
* online: you can force a live migration from one cluster to another by choosing the advanced parameter tab, but your target cluster must support at least the same or a smaller CPU family than the source one, because of the qemu capabilities.

Le 12/10/2015 17:24, Kevin COUSIN a écrit :
> Hi list,
>
> Hi upgrade my nodes from CentOS 6 to CentOS 7. I create a new cluster, but how can I migrate all my VM to the new cluster ?
>
> Thanks a lot
>
> Kevin

--
Nathanaël Blanchet
Supervision réseau
Pôle Infrastructures Informatiques
227 avenue Professeur-Jean-Louis-Viala
34193 MONTPELLIER CEDEX 5
Tél. 33 (0)4 67 54 84 55
Fax 33 (0)4 67 54 84 14
blanc...@abes.fr
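[Editorial note] The offline variant can also be scripted against the engine's REST API; a hedged sketch with placeholder URL, credentials and VM id (the cluster element on a VM PUT is how the 3.x API expresses a cluster change; set LIVE_ENGINE=1 only when you really want to send the request):

```shell
# Placeholders -- all three values are assumptions for the sketch.
ENGINE=https://engine.example.com/api
AUTH=admin@internal:password
VMID=00000000-0000-0000-0000-000000000000

# Offline cluster change: with the VM down, update its cluster; for a
# running VM the engine applies it at the next reboot, as noted above.
if command -v curl >/dev/null 2>&1 && [ -n "${LIVE_ENGINE:-}" ]; then
    curl -k -u "$AUTH" -X PUT -H 'Content-Type: application/xml' \
         -d '<vm><cluster><name>NewCluster</name></cluster></vm>' \
         "$ENGINE/vms/$VMID"
    MODE=applied
else
    MODE=dry-run   # default: print nothing destructive
fi
echo "$MODE"
```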
[ovirt-users] Migrate VM to new cluster
Hi list,

Hi upgrade my nodes from CentOS 6 to CentOS 7. I create a new cluster, but how can I migrate all my VM to the new cluster ?

Thanks a lot

Kevin
Re: [ovirt-users] Migrate VM to new cluster
> Hi list,
>
> Hi upgrade my nodes from CentOS 6 to CentOS 7. I create a new cluster, but how can I migrate all my VM to the new cluster ?

Sorry for the mistake: I upgraded my nodes from CentOS 6 to CentOS 7. I created a new cluster, but how can I migrate all my VMs to the new cluster?

> Thanks a lot
>
> Kevin
[ovirt-users] Best practice
Is there a best practice when upgrading from oVirt 3.4 to 3.5? I upgraded my development server, which is a self-hosted server, and I just did a regular update without putting anything under maintenance... though my VMs were turned off. The migration from krb/ldap to AAA went smoothly and so far it is running like a champ. I am about to do the same to our production cluster and was wondering if I must put all hosts under maintenance before I do the upgrade?

Thanks in advance for all/any tips from the mailing list!

Regards,

--
Fernando Fuentes
Supervisor & Senior Systems Administrator
Email: ffuen...@aasteel.com
American Alloy Steel, Inc.
Houston, Texas
Website: http://www.aasteel.com
Phone: 713-744-4222
Fax: 713-300-5688
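[Editorial note] For reference, a hedged sketch of the usual engine-side 3.4 to 3.5 sequence: engine-setup drives the actual upgrade (and takes its own DB backup), the explicit engine-backup beforehand is optional insurance, and hosts are then updated one at a time from maintenance mode rather than all at once:

```shell
# Guarded: only meaningful on the engine machine itself.
if command -v engine-setup >/dev/null 2>&1; then
    # Optional but cheap: a full backup you control, before touching anything.
    engine-backup --mode=backup --file=engine-pre-3.5.backup --log=backup.log
    # Pull in the 3.5 setup packages (assumes the 3.5 repo is enabled),
    # then let engine-setup perform the schema/config migration.
    yum -y update "ovirt-engine-setup*"
    engine-setup
    STEP=upgraded
else
    echo "no engine on this machine; skipping"
    STEP=skipped
fi
```

Hosts do not all need maintenance simultaneously: move one host to maintenance, update it, activate it, and repeat, so running VMs keep migrating to the remaining hosts.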
Re: [ovirt-users] upgrade path from Ovirt 3.5x to 3.6x
Thanks Michal,

Your reply somehow answers my question, but I still have some bits of fuzziness. The hosted engine VM upgrade itself is almost clear (maintenance, db backup, reformat, reinstall, db import). However:

1. Inter-cluster migration is somehow unsupported. Should we be worried about moving VMs from an EL6 cluster to EL7? You know... different vdsm etc. Is the oVirt migration feature atomic and reversible (if something goes wrong during the process)?

2. More important: the only oVirt version that supports both EL6.6 and EL7.1 is 3.5.3. My real question is: what happens as soon as I finish the upgrade of my oVirt to version 3.5.4 (or 3.5.5) *while running HOSTS on EL6.6*? I mean, there will be a point in time when a version mismatch/overlap will be unavoidable. For example: how should people correctly jump from version 3.5.3 (EL6.6) to 3.5.4 (EL6.7)? The oVirt team should support at least one or two previous versions to allow for overlaps...

Thanks
AG

From: Michal Skrivanek [mailto:michal.skriva...@redhat.com]
Sent: Friday, October 09, 2015 1:33 PM
To: Andrea Ghelardi
Cc: users (users@ovirt.org); Simone Tiraboschi
Subject: Re: [ovirt-users] upgrade path from Ovirt 3.5x to 3.6x

nono, the upgrade is live and all your VMs can keep humming. But there are manual steps involved. Simone can probably update about Hosted Engine, which would be somewhat special, but at least for regular VMs you can currently use:

1) create a new cluster with the same settings as your existing cluster
2) remove one host from the old cluster (the one to be upgraded/reinstalled to EL7). While moving the host to maintenance your VMs will be migrated to other EL6 hosts in that cluster (make sure you have enough capacity to do that beforehand, of course:)
3) upgrade/reinstall the host with EL7.
If you don't have anything custom on the host, reinstallation might be the best option.
4) add it to the new cluster, installing/deploying it via the UI, then it should come up - this is your EL7-based cluster now
5) manually migrate (Migrate To button) some of your VMs from the old cluster to the new one (there's an advanced section in the dialog, allowing cross-cluster migrations)
6) repeat until all your hosts and VMs are on the EL7-based cluster, then you can decommission the old cluster, and perhaps rename the new cluster back to its original name

Thanks,
michal
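[Editorial note] Steps 2 and 4 above can also be driven through the REST API; a hedged sketch with placeholder ids and credentials (the OS reinstall in step 3 stays manual; set LIVE_ENGINE=1 only to actually send the requests):

```shell
# Placeholders -- all values below are assumptions for the sketch.
ENGINE=https://engine.example.com/api
AUTH=admin@internal:password
HOSTID=00000000-0000-0000-0000-000000000000

if command -v curl >/dev/null 2>&1 && [ -n "${LIVE_ENGINE:-}" ]; then
    # Step 2: maintenance drains the host, live-migrating its VMs away.
    curl -k -u "$AUTH" -X POST -H 'Content-Type: application/xml' \
         -d '<action/>' "$ENGINE/hosts/$HOSTID/deactivate"
    # Step 3: ...reinstall the host with EL7 out of band...
    # Step 4: re-add the reinstalled host, this time into the new cluster.
    curl -k -u "$AUTH" -X POST -H 'Content-Type: application/xml' \
         -d '<host><name>node1</name><address>node1.example.com</address><cluster><name>NewCluster</name></cluster><root_password>secret</root_password></host>' \
         "$ENGINE/hosts"
    PHASE=submitted
else
    PHASE=dry-run
fi
echo "$PHASE"
```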
[ovirt-users] ovirt repos dependencies
Hello,

I tried to install a single host with a custom repo synced on my own from the official ovirt, gluster repos etc... (with katello). But the installation always fails because otopi can't locate the gluster packages, while those packages are available and can be yum installed on the host.

So, what does otopi require beyond the gluster repo for a successful installation? This limitation prevents anyone from installing a host from a repo other than the official ones (in my case, using a katello subscription).

--
Nathanaël Blanchet
Supervision réseau
Pôle Infrastructures Informatiques
227 avenue Professeur-Jean-Louis-Viala
34193 MONTPELLIER CEDEX 5
Tél. 33 (0)4 67 54 84 55
Fax 33 (0)4 67 54 84 14
blanc...@abes.fr
[ovirt-users] ovirt repository in the foreman host provisioning process
Hello,

In the foreman host provisioning process, should the "yum localinstall http://resources.ovirt.org/pub/yum-repo/ovirt-release35.rpm" be automatically executed in the same way as it is in a traditional deployment? It seems that without those repos, vdsm installation fails, even if the host had been previously registered with katello to a local ovirt repository. Could you please help me on that point?

--
Nathanaël Blanchet
Supervision réseau
Pôle Infrastructures Informatiques
227 avenue Professeur-Jean-Louis-Viala
34193 MONTPELLIER CEDEX 5
Tél. 33 (0)4 67 54 84 55
Fax 33 (0)4 67 54 84 14
blanc...@abes.fr
Re: [ovirt-users] engine.log is looping with Volume XXX contains a apparently corrupt brick(s).
Le 2015-10-12 14:04, Nir Soffer a écrit :
>> On Mon, Oct 12, 2015 at 11:14 AM, Nico wrote:
>
> Yes, engine will let you use such a volume in 3.5 - this is a bug. In 3.6 you will not be able to use such a setup.
>
> replica 2 fails in a very bad way when one brick is down; the application may get stale data, and this breaks sanlock. You will get stuck with an SPM that cannot be stopped, and other fun stuff.
>
> You don't want to go in this direction, and we will not be able to support that.

For the record, I already rebooted node1, and node2 took over the existing VMs from node1 and vice-versa. GlusterFS worked fine, and the oVirt application was still working fine... I guess it is because it was a soft reboot, which stops the services gracefully.

I got another case where I broke the network on the 2 nodes simultaneously after a bad manipulation in the oVirt GUI, and I got a split brain. I kept the error from that very moment:

[root@devnix-virt-master02 nets]# gluster volume heal ovirt info split-brain
Brick devnix-virt-master01:/gluster/ovirt/
/d44ee4b0-8d36-467a-9610-c682a618b698/dom_md/ids
Number of entries in split-brain: 1
Brick devnix-virt-master02:/gluster/ovirt/
/d44ee4b0-8d36-467a-9610-c682a618b698/dom_md/ids
Number of entries in split-brain: 1

This file had the same size on both nodes, so it was hard to select one. Finally I chose the younger one, and all was back online after the heal. Is it this kind of stuff you are talking about with 2 nodes?

For now I don't have the budget to take a third one, so I'm a bit stuck and disappointed. I have a third device, but for backup; it has lots of storage but low cpu abilities (no VT-X), so I can't use it as a hypervisor. Could I maybe use it as a third brick? Is it possible to have this kind of configuration: 2 active nodes as hypervisors and a third one only for the gluster replica 3?

Cheers
Nico
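[Editorial note] That split brain on dom_md/ids is exactly the failure mode replica 2 invites. On newer GlusterFS releases (3.7 and later, an assumption to verify against your version), the CLI can resolve it by declaring one brick the source instead of hand-picking files on the bricks; a sketch using the volume, brick and path from the output above:

```shell
VOL=ovirt
SRC=devnix-virt-master01:/gluster/ovirt   # the brick whose copy you trust
F=/d44ee4b0-8d36-467a-9610-c682a618b698/dom_md/ids

if command -v gluster >/dev/null 2>&1; then
    # List what is in split brain first, then resolve one file from SRC.
    gluster volume heal "$VOL" info split-brain
    gluster volume heal "$VOL" split-brain source-brick "$SRC" "$F"
    OUTCOME=attempted
else
    echo "gluster CLI not available; skipping"
    OUTCOME=skipped
fi
```

On older releases the manual equivalent is deleting the bad copy (and its .glusterfs hardlink) on the sink brick and letting self-heal recreate it.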
Re: [ovirt-users] Collectd with Ovirt
Do you see it in the REST API? Do you see it in the DWH database? Is the issue only in the reports?

Yaniv Dary
Technical Product Manager
Red Hat Israel Ltd.
34 Jerusalem Road
Building A, 4th floor
Ra'anana, Israel 4350109
Tel: +972 (9) 7692306
8272306
Email: yd...@redhat.com
IRC: ydary

On Mon, Oct 12, 2015 at 4:56 AM, Punit Dambiwal wrote:
> Hi,
>
> The guest agent is installed on the guest VM, but I still cannot get the network usages..
>
> On Sun, Oct 11, 2015 at 10:28 PM, Yaniv Dary wrote:
>>
>> On Mon, Oct 5, 2015 at 4:31 AM, Punit Dambiwal wrote:
>>>
>>> Hi Michal,
>>>
>>> Right now I am using DWH and oVirt reports... for the guest vm cpu and memory it's good and OK to use, but for the network usages there is nothing even when the guest vm usage is quite high... that's why I want to use collectd to get proper network usage graphs of the guest VMs.
>>
>> Did you install the guest agent on the VM?
>> Maybe that is why you don't see the network data, since we do collect it.
>>
>>> Thanks,
>>> Punit
>>>
>>> On Fri, Oct 2, 2015 at 7:03 PM, Michal Skrivanek <michal.skriva...@redhat.com> wrote:

On 2 Oct 2015, at 11:17, Punit Dambiwal wrote:

Hi All,

I want to use collectd (https://collectd.org/) to collect the guest VM usages into graphs... as the ovirt DWH and reports are not quite good... I want better graphs and reporting of the usages...

Please suggest me a good way or tool to achieve this…

Hi Punit,

well… maybe you can use the DWH database? If it has the data you need and you only want better graphs, then the reporting package just works on top of those data… so you can use a different one instead.

If you need different data then you need to get it in some other way… least intrusive might be periodic calls to the REST API to get what you need… but be careful, as the REST API has a reputation of being quite slow….

If you need something faster you would need to move your data gathering closer to the source, either directly from the DB or directly from the hypervisors. Obviously, the closer you get, the implementation is increasingly more difficult and trickier:) but if you're looking for host system performance data you would better do it over there… sysstat/sar or collectd… I would bypass oVirt's mechanisms and grab it myself, then perhaps correlate other data from the REST API or DWH tables, e.g. correlating increased CPU/mem load on the host with the number of VMs running on that host (well, that one you can do with oVirt's stats; depends if you need/want more low-level stuff we don't have).

HTH,
michal

Thanks,
Punit
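[Editorial note] Following Michal's "closer to the source" suggestion, collectd can read per-VM stats (including network counters) straight from libvirt on each hypervisor via its virt plugin. A hedged sketch; the `collectd-virt` package name varies by distro/repo and is an assumption here:

```shell
# Enable collectd's virt plugin on a hypervisor to gather per-VM
# CPU/network stats directly from libvirt, bypassing DWH entirely.
CONF=/etc/collectd.d/virt.conf
if command -v collectd >/dev/null 2>&1; then
    yum -y install collectd-virt     # package name is an assumption
    cat > "$CONF" <<'EOF'
LoadPlugin virt
<Plugin virt>
  Connection "qemu:///system"
  RefreshInterval 60
</Plugin>
EOF
    systemctl restart collectd
    SETUP=done
else
    echo "collectd not installed; dry run"
    SETUP=skipped
fi
```

From there any frontend (RRD graphs, Graphite, etc.) can render the per-VM interface counters the reports were missing.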
Re: [ovirt-users] engine.log is looping with Volume XXX contains a apparently corrupt brick(s).
Le 2015-10-12 14:04, Nir Soffer a écrit :
> Yes, engine will let you use such volume in 3.5 - this is a bug. In 3.6 you will not be able to use such setup.
>
> replica 2 fails in a very bad way when one brick is down; the application may get stale data, and this breaks sanlock. You will get stuck with an SPM that cannot be stopped and other fun stuff.
>
> You don't want to go in this direction, and we will not be able to support that.
>
>> here the last entries of vdsm.log
>
> We need the whole file.
>
> I suggest you file an ovirt bug and attach the full vdsm log file showing the timeframe of the error. Probably from the time you created the glusterfs domain.
>
> Nir

Please find the full logs there:

https://94.23.2.63/log_vdsm/vdsm.log
https://94.23.2.63/log_vdsm/
https://94.23.2.63/log_engine/
Re: [ovirt-users] CEPH rbd support in EL7 libvirt
On 12/10/15 10:13, Nux! wrote:
> Hi Nir,
>
> I have not tried to use Ovirt with Ceph; my question was about libvirt and I was directed to ask the question here, sorry for the noise; I understand libvirt is not really ovirt people's concern.
>
> The thing is qemu can do ceph rbd in EL7, libvirt does not, although support seems to be there and a simple rebuild enables it. Was hoping you guys know more.
>
> Lucian

I'd suggest asking this on the virt-sig of CentOS (cc'ed). Why is there no rbd support in el7 libvirt? I don't know; maybe the virt repo guys from centos can rebuild it (they already rebuild libvirt afaik, another flag might not hurt them).

HTH

--
Mit freundlichen Grüßen / Regards

Sven Kieske
Systemadministrator
Mittwald CM Service GmbH & Co. KG
Königsberger Straße 6
32339 Espelkamp
T: +495772 293100
F: +495772 29
https://www.mittwald.de
Geschäftsführer: Robert Meyer
St.Nr.: 331/5721/1033, USt-IdNr.: DE814773217, HRA 6640, AG Bad Oeynhausen
Komplementärin: Robert Meyer Verwaltungs GmbH, HRB 13260, AG Bad Oeynhausen
Re: [ovirt-users] engine.log is looping with Volume XXX contains a apparently corrupt brick(s).
On Mon, Oct 12, 2015 at 11:14 AM, Nico wrote:
>
> Le 2015-10-12 09:59, Nir Soffer a écrit :
>> On Sun, Oct 11, 2015 at 6:43 PM, Nico wrote:
>>> Recently, i built a small oVirt platform with 2 dedicated servers and GlusterFS to synch the VM storage.
>>> Bricks:
>>> Brick1: ovirt01:/gluster/ovirt
>>> Brick2: ovirt02:/gluster/ovirt
>>
>> This looks like replica 2 - this is not supported.
>>
>> You can use either replica 1 (testing) or replica 3 (production).
>>
>>> But when i check /var/log/ovirt/engine.log on ovirt01, there are error in loop every 2 seconds:
>>
>> To understand such error we need to see the vdsm log.
>>
>> Nir
>
> Yeah it is replica 2 as i've only 2 dedicated servers.
>
> why are you saying it is not supported ? Through oVirt GUI, it is possible to create a Gluster Volume with 2 bricks in replicate mode; i tried it also.

Yes, engine will let you use such a volume in 3.5 - this is a bug. In 3.6 you will not be able to use such a setup.

replica 2 fails in a very bad way when one brick is down; the application may get stale data, and this breaks sanlock. You will get stuck with an SPM that cannot be stopped and other fun stuff.

You don't want to go in this direction, and we will not be able to support that.

> here the last entries of vdsm.log

We need the whole file.

I suggest you file an ovirt bug and attach the full vdsm log file showing the timeframe of the error. Probably from the time you created the glusterfs domain.
Nir > > > > hread-167405::DEBUG::2015-10-12 > 10:12:20,132::stompReactor::163::yajsonrpc.StompServer::(send) Sending > response > Thread-55245::DEBUG::2015-10-12 > 10:12:22,529::task::595::Storage.TaskManager.Task::(_updateState) > Task=`c887acfa-bd10-4dfb-9374-da607c133e68`::moving from state init -> state > preparing > Thread-55245::INFO::2015-10-12 > 10:12:22,530::logUtils::44::dispatcher::(wrapper) Run and protect: > getVolumeSize(sdUUID='d44ee4b0-8d36-467a-9610-c682a618b698', > spUUID='0ae7120a-430d-4534-9a7e-59c53fb2e804', > imgUUID='3454b077-297b-4b89-b8ce-a77f6ec5d22e', > volUUID='933da0b6-6a05-4e64-958a-e1c030cf5ddb', options=None) > Thread-55245::INFO::2015-10-12 > 10:12:22,535::logUtils::47::dispatcher::(wrapper) Run and protect: > getVolumeSize, Return response: {'truesize': '158983839744', 'apparentsize': > '161061273600'} > Thread-55245::DEBUG::2015-10-12 > 10:12:22,535::task::1191::Storage.TaskManager.Task::(prepare) > Task=`c887acfa-bd10-4dfb-9374-da607c133e68`::finished: {'truesize': > '158983839744', 'apparentsize': '161061273600'} > Thread-55245::DEBUG::2015-10-12 > 10:12:22,535::task::595::Storage.TaskManager.Task::(_updateState) > Task=`c887acfa-bd10-4dfb-9374-da607c133e68`::moving from state preparing -> > state finished > Thread-55245::DEBUG::2015-10-12 > 10:12:22,535::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll) > Owner.releaseAll requests {} resources {} > Thread-55245::DEBUG::2015-10-12 > 10:12:22,536::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll) > Owner.cancelAll requests {} > Thread-55245::DEBUG::2015-10-12 > 10:12:22,536::task::993::Storage.TaskManager.Task::(_decref) > Task=`c887acfa-bd10-4dfb-9374-da607c133e68`::ref 0 aborting False > Thread-55245::DEBUG::2015-10-12 > 10:12:22,545::libvirtconnection::143::root::(wrapper) Unknown libvirterror: > ecode: 80 edom: 20 level: 2 message: metadata not found: Requested metadata > element is not present > JsonRpc (StompReactor)::DEBUG::2015-10-12 > 
10:12:23,138::stompReactor::98::Broker.StompAdapter::(handle_frame) Handling > message > JsonRpcServer::DEBUG::2015-10-12 > 10:12:23,139::__init__::530::jsonrpc.JsonRpcServer::(serve_requests) Waiting > for request > Thread-167406::DEBUG::2015-10-12 > 10:12:23,142::stompReactor::163::yajsonrpc.StompServer::(send) Sending > response > Thread-37810::DEBUG::2015-10-12 > 10:12:24,194::fileSD::262::Storage.Misc.excCmd::(getReadDelay) /usr/bin/dd > if=/rhev/data-center/mnt/ovirt01:_data_iso/5aec30fa-be8b-4f4e-832e-eafb6fa4a8e0/dom_md/metadata > iflag=direct of=/dev/null bs=4096 count=1 (cwd None) > Thread-37810::DEBUG::2015-10-12 > 10:12:24,201::fileSD::262::Storage.Misc.excCmd::(getReadDelay) SUCCESS: > = '0+1 records in\n0+1 records out\n317 bytes (317 B) copied, > 0.000131729 s, 2.4 MB/s\n'; = 0 > JsonRpc (StompReactor)::DEBUG::2015-10-12 > 10:12:26,148::stompReactor::98::Broker.StompAdapter::(handle_frame) Handling > message > JsonRpcServer::DEBUG::2015-10-12 > 10:12:26,149::__init__::530::jsonrpc.JsonRpcServer::(serve_requests) Waiting > for request > Thread-167407::DEBUG::2015-10-12 > 10:12:26,151::stompReactor::163::yajsonrpc.StompServer::(send) Sending > response > VM Channels Listener::DEBUG::2015-10-12 > 10:12:26,972::vmchannels::96::vds::(_handle_timeouts) Timeout on fileno 35. > Thread-30::DEBUG::2015-10-12 > 10:12:28,358::fileSD::262::Storage.Misc.excCmd::(getReadDelay) /usr/bin/dd > if=/rhev/data-center/mnt/glusterSD/localhost:_ovirt/d44ee4b0-8d36-467a-9610-c682a618b698/dom_md/metadata > iflag=direct of=/d
Re: [ovirt-users] how to remove orphaned image
Hi Jiri,

Is the image located under the same storage pool the host is currently connected to? You can check the currently connected storage pool with the following:

# vdsClient -s 0 getConnectedStoragePoolsList

On Fri, Oct 9, 2015 at 12:45 PM, Jiří Sléžka wrote:
> Hello,
>
> I have some orphaned images on a storage domain which are not visible from the manager, and I would like to remove them.
>
> I found one proposed feature which would be useful but seems not to exist yet - http://www.ovirt.org/Features/Orphaned_Images
>
> Also I found this feature http://www.ovirt.org/Features/Domain_Scan but there is no documentation on how to use it.
>
> Could you suggest safe manual steps to remove an orphaned image?
>
> btw. I know all the info about this image - sdUUID, spUUID, imgUUID, volUUID, volume path, the logical volume on which it is stored,... I am using oVirt 3.5.4
>
> Thanks in advance,
>
> Jiri Slezka
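[Editorial note] Once the pool is confirmed and the engine database verifiably holds no reference to the image, vdsClient itself can delete it; a hedged sketch with placeholder UUIDs (deleteImage is destructive, so triple-check the imgUUID against the engine's images table first):

```shell
# Placeholder UUIDs -- substitute the sdUUID/spUUID/imgUUID you collected.
SD=00000000-0000-0000-0000-000000000000
SP=00000000-0000-0000-0000-000000000000
IMG=00000000-0000-0000-0000-000000000000

if command -v vdsClient >/dev/null 2>&1; then
    # Confirm the host is connected to the pool the image lives in.
    vdsClient -s 0 getConnectedStoragePoolsList
    # deleteImage removes the whole image (all its volumes) from the domain.
    vdsClient -s 0 deleteImage "$SD" "$SP" "$IMG"
    ACTION=issued
else
    echo "vdsClient not present; skipping"
    ACTION=skipped
fi
```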
Re: [ovirt-users] CEPH rbd support in EL7 libvirt
Hi Nir,

I have not tried to use Ovirt with Ceph; my question was about libvirt and I was directed to ask it here, sorry for the noise; I understand libvirt is not really ovirt people's concern.

The thing is qemu can do ceph rbd in EL7, libvirt does not, although support seems to be there and a simple rebuild enables it. Was hoping you guys know more.

Lucian

--
Sent from the Delta quadrant using Borg technology!

Nux!
www.nux.ro

- Original Message -
> From: "Nir Soffer"
> To: "Nux!"
> Cc: "users"
> Sent: Monday, 12 October, 2015 09:05:00
> Subject: Re: [ovirt-users] CEPH rbd support in EL7 libvirt
>
> On Sun, Oct 11, 2015 at 12:52 PM, Nux! wrote:
>> Hi folks,
>>
>> I was directed here by Sandro with the question in the $subject.
>> As I could not find anything conclusive in either bugzilla or the 7.2 release notes, can someone clarify this for me?
>> At this point it's apparently as easy as rebuilding the libvirt src.rpm with "with_storage_rbd 1". [1]
>>
>> I see users migrating from CentOS to Ubuntu because this is missing; it's not even in technology preview.
>> Kind of odd RH undermining their own projects in this way.
>>
>> [1] - http://blog.widodh.nl/2015/04/rebuilding-libvirt-under-centos-7-1-with-rbd-storage-pool-support/
>
> RHEL 7.1 supports rbd out of the box; so should current CentOS 7.
>
> We do not use libvirt storage pools for ovirt, so I don't think you need to build anything.
>
> Also, we do not access rbd volumes via libvirt. When we run vms using rbd: volumes, libvirt passes the volume url to qemu, and qemu accesses the volume. So we may not need any rbd support in libvirt itself.
>
> Did you try to use ceph with ovirt 3.6 on CentOS?
>
> Nir
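[Editorial note] For completeness, the rebuild the linked blog post describes boils down to one rpmbuild define; a hedged sketch (dependency package names are examples, not a tested recipe):

```shell
# Rebuild the stock EL7 libvirt src.rpm with RBD storage-pool support
# enabled, as per the blog post referenced in the thread.
if command -v rpmbuild >/dev/null 2>&1 && ls libvirt-*.src.rpm >/dev/null 2>&1; then
    # Build dependencies for the rbd backend (names are assumptions).
    yum -y install rpm-build librbd1-devel librados2-devel
    # The flag the blog post flips: with_storage_rbd.
    rpmbuild --rebuild --define "with_storage_rbd 1" libvirt-*.src.rpm
    BUILD=started
else
    echo "no libvirt src.rpm in this directory; skipping"
    BUILD=skipped
fi
```

As Nir notes in the quoted reply, oVirt itself hands rbd: URLs straight to qemu, so this rebuild only matters if you want libvirt-managed rbd storage pools outside oVirt.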
Re: [ovirt-users] engine.log is looping with Volume XXX contains a apparently corrupt brick(s).
Le 2015-10-12 09:59, Nir Soffer a écrit : > On Sun, Oct 11, 2015 at 6:43 PM, Nico wrote: > >> Recently, i built a small oVirt platform with 2 dedicated servers and >> GlusterFS to synch the VM storage. >> Bricks: >> >> Brick1: ovirt01:/gluster/ovirt >> >> Brick2: ovirt02:/gluster/ovirt > > This looks like replica 2 - this is not supported. > > You can use either replica 1 (testing) or replica 3 (production). > >> But when i check /var/log/ovirt/engine.log on ovirt01, there are error in >> loop every 2 seconds: > To understand such error we need to see the vdsm log. > > Nir Yeah it is replica 2 as i've only 2 dedicated servers. why are you saying it is not supported ? Through oVirt GUI, it is possible to create a Gluster Volume with 2 bricks in repllcate mode; i tried it also. here the last entries of vdsm.log hread-167405::DEBUG::2015-10-12 10:12:20,132::stompReactor::163::yajsonrpc.StompServer::(send) Sending response Thread-55245::DEBUG::2015-10-12 10:12:22,529::task::595::Storage.TaskManager.Task::(_updateState) Task=`c887acfa-bd10-4dfb-9374-da607c133e68`::moving from state init -> state preparing Thread-55245::INFO::2015-10-12 10:12:22,530::logUtils::44::dispatcher::(wrapper) Run and protect: getVolumeSize(sdUUID='d44ee4b0-8d36-467a-9610-c682a618b698', spUUID='0ae7120a-430d-4534-9a7e-59c53fb2e804', imgUUID='3454b077-297b-4b89-b8ce-a77f6ec5d22e', volUUID='933da0b6-6a05-4e64-958a-e1c030cf5ddb', options=None) Thread-55245::INFO::2015-10-12 10:12:22,535::logUtils::47::dispatcher::(wrapper) Run and protect: getVolumeSize, Return response: {'truesize': '158983839744', 'apparentsize': '161061273600'} Thread-55245::DEBUG::2015-10-12 10:12:22,535::task::1191::Storage.TaskManager.Task::(prepare) Task=`c887acfa-bd10-4dfb-9374-da607c133e68`::finished: {'truesize': '158983839744', 'apparentsize': '161061273600'} Thread-55245::DEBUG::2015-10-12 10:12:22,535::task::595::Storage.TaskManager.Task::(_updateState) Task=`c887acfa-bd10-4dfb-9374-da607c133e68`::moving from state 
preparing -> state finished Thread-55245::DEBUG::2015-10-12 10:12:22,535::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {} Thread-55245::DEBUG::2015-10-12 10:12:22,536::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {} Thread-55245::DEBUG::2015-10-12 10:12:22,536::task::993::Storage.TaskManager.Task::(_decref) Task=`c887acfa-bd10-4dfb-9374-da607c133e68`::ref 0 aborting False Thread-55245::DEBUG::2015-10-12 10:12:22,545::libvirtconnection::143::root::(wrapper) Unknown libvirterror: ecode: 80 edom: 20 level: 2 message: metadata not found: Requested metadata element is not present JsonRpc (StompReactor)::DEBUG::2015-10-12 10:12:23,138::stompReactor::98::Broker.StompAdapter::(handle_frame) Handling message JsonRpcServer::DEBUG::2015-10-12 10:12:23,139::__init__::530::jsonrpc.JsonRpcServer::(serve_requests) Waiting for request Thread-167406::DEBUG::2015-10-12 10:12:23,142::stompReactor::163::yajsonrpc.StompServer::(send) Sending response Thread-37810::DEBUG::2015-10-12 10:12:24,194::fileSD::262::Storage.Misc.excCmd::(getReadDelay) /usr/bin/dd if=/rhev/data-center/mnt/ovirt01:_data_iso/5aec30fa-be8b-4f4e-832e-eafb6fa4a8e0/dom_md/metadata iflag=direct of=/dev/null bs=4096 count=1 (cwd None) Thread-37810::DEBUG::2015-10-12 10:12:24,201::fileSD::262::Storage.Misc.excCmd::(getReadDelay) SUCCESS: = '0+1 records in\n0+1 records out\n317 bytes (317 B) copied, 0.000131729 s, 2.4 MB/s\n'; = 0 JsonRpc (StompReactor)::DEBUG::2015-10-12 10:12:26,148::stompReactor::98::Broker.StompAdapter::(handle_frame) Handling message JsonRpcServer::DEBUG::2015-10-12 10:12:26,149::__init__::530::jsonrpc.JsonRpcServer::(serve_requests) Waiting for request Thread-167407::DEBUG::2015-10-12 10:12:26,151::stompReactor::163::yajsonrpc.StompServer::(send) Sending response VM Channels Listener::DEBUG::2015-10-12 10:12:26,972::vmchannels::96::vds::(_handle_timeouts) Timeout on fileno 35. 
Thread-30::DEBUG::2015-10-12 10:12:28,358::fileSD::262::Storage.Misc.excCmd::(getReadDelay) /usr/bin/dd if=/rhev/data-center/mnt/glusterSD/localhost:_ovirt/d44ee4b0-8d36-467a-9610-c682a618b698/dom_md/metadata iflag=direct of=/dev/null bs=4096 count=1 (cwd None) Thread-30::DEBUG::2015-10-12 10:12:28,451::fileSD::262::Storage.Misc.excCmd::(getReadDelay) SUCCESS: = '0+1 records in\n0+1 records out\n470 bytes (470 B) copied, 0.000152738 s, 3.1 MB/s\n'; = 0 JsonRpc (StompReactor)::DEBUG::2015-10-12 10:12:29,157::stompReactor::98::Broker.StompAdapter::(handle_frame) Handling message JsonRpcServer::DEBUG::2015-10-12 10:12:29,252::__init__::530::jsonrpc.JsonRpcServer::(serve_requests) Waiting for request Thread-167408::DEBUG::2015-10-12 10:12:29,254::stompReactor::163::yajsonrpc.StompServer::(send) Sending response JsonRpc (StompReactor)::DEBUG::2015-10-12 10:12:32,260::stompReactor::98::Broker.StompAdapter::(handle_frame) Handling message JsonRpcServer::DEBUG::2015-10-12 10:12:32,262
Re: [ovirt-users] CEPH rbd support in EL7 libvirt
On Sun, Oct 11, 2015 at 12:52 PM, Nux! wrote:
> Hi folks,
>
> I was directed here by Sandro with the question in the $subject.
> As I could not find anything conclusive in either bugzilla or the 7.2
> release notes, can someone clarify this for me?
> At this point it's apparently as easy as rebuilding the libvirt src.rpm
> with "with_storage_rbd 1". [1]
>
> I see users migrating from CentOS to Ubuntu because this is missing; it's
> not even in technology preview.
> Kind of odd that RH is undermining its own projects this way.
>
> [1] http://blog.widodh.nl/2015/04/rebuilding-libvirt-under-centos-7-1-with-rbd-storage-pool-support/

RHEL 7.1 supports rbd out of the box, and so should current CentOS 7. oVirt does not use libvirt storage pools, so I don't think you need to rebuild anything.

Also, we do not access rbd volumes via libvirt. When we run VMs using rbd: volumes, libvirt passes the volume URL to QEMU, and QEMU accesses the volume directly, so rbd support in libvirt itself may not be needed at all.

Did you try to use Ceph with oVirt 3.6 on CentOS?

Nir
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
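For illustration, this is roughly the kind of network-disk definition libvirt hands to QEMU for an rbd: volume — QEMU talks to the Ceph monitors itself, which is why the libvirt storage-pool driver isn't involved. This is not oVirt/VDSM code, and the pool, image, and monitor names are made up:

```python
# Sketch: build a libvirt <disk type='network'> element for an rbd volume.
# Pool/image/monitor names are invented for the example.
import xml.etree.ElementTree as ET

def rbd_disk_xml(pool, image, monitors, target_dev="vda"):
    """Return a <disk> XML snippet pointing QEMU straight at an rbd image."""
    disk = ET.Element("disk", type="network", device="disk")
    ET.SubElement(disk, "driver", name="qemu", type="raw")
    # QEMU receives "rbd:pool/image" and opens it via librbd directly.
    source = ET.SubElement(disk, "source", protocol="rbd",
                           name="%s/%s" % (pool, image))
    for host, port in monitors:
        ET.SubElement(source, "host", name=host, port=str(port))
    ET.SubElement(disk, "target", dev=target_dev, bus="virtio")
    return ET.tostring(disk).decode()

print(rbd_disk_xml("rbd", "vm-disk-01", [("mon1.example.com", 6789)]))
```

Since libvirt only forwards the URL and monitor list here, no rbd storage-pool support needs to be compiled into libvirt for this path to work.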
Re: [ovirt-users] engine.log is looping with Volume XXX contains a apparently corrupt brick(s).
On Sun, Oct 11, 2015 at 6:43 PM, Nico wrote:
> Recently, I built a small oVirt platform with 2 dedicated servers and
> GlusterFS to sync the VM storage.
> Bricks:
> Brick1: ovirt01:/gluster/ovirt
> Brick2: ovirt02:/gluster/ovirt

This looks like replica 2, which is not supported. You can use either replica 1 (testing) or replica 3 (production).

> But when I check /var/log/ovirt/engine.log on ovirt01, there are errors
> looping every 2 seconds:

To understand such errors we need to see the vdsm log.

Nir
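As a rough illustration of the replica-count check being suggested here, this sketch parses the "Number of Bricks" line that `gluster volume info` prints (the sample text below just mimics the two-brick setup described; the parsing is simplified and not how the engine actually does it):

```python
# Sketch: flag an unsupported GlusterFS replica count before using the
# volume as oVirt storage. Sample output mimics "gluster volume info".
import re

SUPPORTED_REPLICA_COUNTS = (1, 3)  # replica 2 is the unsupported case

def replica_count(volume_info):
    """Extract N from a 'Number of Bricks: 1 x N = M' style line."""
    m = re.search(r"Number of Bricks:\s*\d+\s*x\s*(\d+)\s*=", volume_info)
    return int(m.group(1)) if m else None

sample = """Volume Name: ovirt
Type: Replicate
Number of Bricks: 1 x 2 = 2
Brick1: ovirt01:/gluster/ovirt
Brick2: ovirt02:/gluster/ovirt"""

count = replica_count(sample)
print("replica %d, supported: %s" % (count, count in SUPPORTED_REPLICA_COUNTS))
```

For the setup in this thread the check yields replica 2, i.e. unsupported; moving to replica 3 means adding a third brick on a third host.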
Re: [ovirt-users] oVirt Engine redundant ?
On 2015-10-12 09:09, Barak Korren wrote:
>> What that procedure describes is backing up an existing engine,
>> installing a new one, and then restoring the backed-up data into it.
>> This was probably written to describe migration from a stand-alone
>> engine host, not an AllInOne setup.
>> Theoretically this should work for your setup, but I am not sure the
>> new hosted engine will be able to properly use the AllInOne node as a
>> hypervisor (it will probably depend on the copied configuration
>> containing enough detail for the engine to connect to it over the
>> network; you will probably at the very least have to shut down the
>> existing engine before starting up the hosted engine).
>> I would suggest taking as many backups as you can before starting, and
>> performing the hosted-engine setup on a host that wasn't used by the
>> existing engine; that way, if it fails, you can just shut it down and
>> bring your old engine back up.

I have a backup running at this moment, scping /gluster/ovirt/d44ee4b0-8d36-467a-9610-c682a618b698/images/ to a third device. Once that's done, I'll give it a shot on node2, which is running only VDSM as the host agent.

Hope all will be fine!
Re: [ovirt-users] oVirt Engine redundant ?
On 12 October 2015 at 09:52, Nico wrote:
> On 2015-10-12 05:37, Julian De Marchi wrote:
> > The oVirt hosted-engine will do what you want. Have a read of the
> > below.
> > http://www.ovirt.org/Migrate_to_Hosted_Engine
> > --julian
>
> Thanks for your quick reply.
>
> I'm going to follow the steps described in that page.
>
> The first action is to run:
>
> # hosted-engine --deploy
> [ INFO ] Stage: Initializing
> Continuing will configure this host for serving as hypervisor and
> create a VM where oVirt Engine will be installed afterwards.
> Are you sure you want to continue? (Yes, No)[Yes]:
>
> In my case, I already have an existing install on this node (AllInOne);
> will it be OK? Nothing will be broken or overridden?
>
> Thanks
>
> Regards
>
> Nico

What that procedure describes is backing up an existing engine, installing a new one, and then restoring the backed-up data into it. This was probably written to describe migration from a stand-alone engine host, not an AllInOne setup.

Theoretically this should work for your setup, but I am not sure the new hosted engine will be able to properly use the AllInOne node as a hypervisor (it will probably depend on the copied configuration containing enough detail for the engine to connect to it over the network; you will probably at the very least have to shut down the existing engine before starting up the hosted engine).

I would suggest taking as many backups as you can before starting, and performing the hosted-engine setup on a host that wasn't used by the existing engine; that way, if it fails, you can just shut it down and bring your old engine back up.

--
Barak Korren
bkor...@redhat.com
RHEV-CI Team
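For anyone following along: besides copying the storage domain, the migration page relies on an engine-side backup taken with the engine-backup tool. A minimal sketch of the invocation it wraps (the destination paths are placeholders; run the real command as root on the engine host):

```python
# Sketch: assemble the engine-backup command used before a hosted-engine
# migration. Paths are placeholders, not oVirt defaults.
import datetime

def build_backup_cmd(dest_dir="/var/backup"):
    """Return the engine-backup argv for a full (DB + config) backup."""
    stamp = datetime.date.today().isoformat()
    return [
        "engine-backup",
        "--mode=backup",
        "--scope=all",  # engine database plus configuration files
        "--file=%s/engine-%s.tar.bz2" % (dest_dir, stamp),
        "--log=%s/engine-backup-%s.log" % (dest_dir, stamp),
    ]

print(" ".join(build_backup_cmd()))
# To actually run it on the engine host:
#   subprocess.check_call(build_backup_cmd())
```

Copying the resulting tarball off the AllInOne node (alongside the scp of the images directory mentioned above) gives a fallback if the hosted-engine deploy goes wrong.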