Actually I have vdsm-gluster, that is why vdsm tries to find the gluster
packages. Is there a way I can run the vdsm gluster rpm search manually to
see what is going wrong?
[root@virt01a ~]# yum list installed |grep vdsm
vdsm.x86_64                 4.14.9-0.el6                 @ovirt-3.4-stable
vdsm-cli.noarch
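A rough way to redo that search by hand (a sketch, assuming an EL6 host with yum/rpm on the PATH; nothing here is a vdsm-specific tool — on another machine the commands just report that nothing was found):

```shell
# list everything vdsm- and gluster-related that is actually installed
yum list installed 2>/dev/null | grep -Ei 'vdsm|gluster' \
    || echo "no vdsm/gluster packages found"
# show what vdsm-gluster itself requires (a candidate for the failing lookup)
rpm -q --requires vdsm-gluster 2>/dev/null \
    || echo "vdsm-gluster not installed"
```

Comparing the two lists should show which required gluster package vdsm is failing to find.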
Thanks David!
I think I am gonna go for a combination of the two approaches: have the
client retrieve the IP of the guest and connect to a service running on the
guest and once the application running inside the guest closes, notify the
client to close the SPICE connection and log off the user.
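A minimal one-machine sketch of that notify-then-close flow (my own construction, not any oVirt or SPICE API; a fifo stands in for the TCP connection between guest and client, and `sleep` stands in for the application's lifetime):

```shell
# create a named pipe to play the role of the guest<->client socket
fifo=$(mktemp -u); mkfifo "$fifo"
# "guest" side: wait for the app to exit, then send the close signal
( sleep 0.2; echo "APP_CLOSED" > "$fifo" ) &
# "client" side: block until the signal arrives, then tear down the session
read -r msg < "$fifo"
wait
rm -f "$fifo"
[ "$msg" = "APP_CLOSED" ] && echo "close SPICE session and log off"
```

In the real setup the "guest" half would be a small TCP service inside the VM and the "client" half would run next to remote-viewer and terminate it on receipt of the message.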
Hi,
On Thu, 2014-06-19 at 23:41 +0300, cyber python wrote:
> Hello everyone!
>
>
>
> I currently have an MS Windows client running remote viewer which
> connects to an oVirt Linux VM using SPICE.
>
> My application is running on the VM.
>
>
> (Actually there are several clients each connecti
On 20/06/2014 15:27, Daniel Helgenberger wrote:
@Nicolas, keep in mind that engine performance affects not only the UI
but also various background processes.
Also, if you plan on using the VDI and self-deployment features ("User
Access"), a slow UI might be very counterproducti
I run oVirt Node in a VM with virt-manager in Fedora 20. Works perfectly for me
:)
Have you tried virt-manager?
Two tips:
1. enable nested kvm on the bare metal (L0)
2. pass through the bare metal's CPU flags to the VM.
See here for examples of these tips:
http://dustymabe.com/2013/10/21/neste
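A short sketch of those two tips on the L0 host (assumes an Intel machine; the kvm_amd module has the equivalent parameter on AMD):

```shell
# 1. check whether nested KVM is currently enabled on the bare metal
if [ -f /sys/module/kvm_intel/parameters/nested ]; then
    cat /sys/module/kvm_intel/parameters/nested   # Y or 1 means nested is on
else
    echo "kvm_intel not loaded on this machine"
fi
# to enable it persistently (then reload the module or reboot):
#   echo "options kvm_intel nested=1" > /etc/modprobe.d/kvm-nested.conf
# 2. pass the host CPU through to the VM; in virt-manager pick CPU model
#    "host-passthrough", or in the libvirt XML:
#   <cpu mode='host-passthrough'/>
```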
great!
This is an old bug that should have been fixed, so I suspect you are
running older versions of vdsm, libvirt, and qemu.
On 06/20/2014 02:45 PM, Alexandr Krivulya wrote:
Thanks, detaching CD solves this problem.
On 20.06.2014 16:34, Dafna Ron wrote:
The VM's qemu log shows an error on the destination:
qemu:
Thanks, detaching CD solves this problem.
On 20.06.2014 16:34, Dafna Ron wrote:
> The VM's qemu log shows an error on the destination:
>
> qemu: warning: error while loading state section id 3
> load of migration failed
>
> but I think that the issue is that the vm has a cd attached which no
> longer exists or avai
On 20.06.2014 15:32, Vinzenz Feenstra wrote:
> Isn't that section referring to HA VMs?
No.
--
Mit freundlichen Grüßen / Regards
Sven Kieske
Systemadministrator
Mittwald CM Service GmbH & Co. KG
Königsberger Straße 6
32339 Espelkamp
T: +49-5772-293-100
F: +49-5772-293-333
https://www.mittwald
Hello Ovirt team,
Reading this bulletin: https://access.redhat.com/site/solutions/117763
there is a reference to 'private Red Hat Bug # 523354' covering online
backups of VMs.
Can someone comment on this feature and a rough timeline? Is this a native
backup solution that will be included with Ovi
The VM's qemu log shows an error on the destination:
qemu: warning: error while loading state section id 3
load of migration failed
but I think the issue is that the VM has a CD attached which no
longer exists or is available to the VM.
can you please try to detach any attached disk, activate the iso doma
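If it helps anyone, at the libvirt level a stale CD can be ejected with virsh; "myvm" and "hdc" below are placeholders for your domain and CD target device, and inside oVirt you would normally use "Change CD" in the webadmin instead:

```shell
# placeholder names: "myvm" and "hdc" must match your domain / CD target
if command -v virsh >/dev/null 2>&1; then
    virsh domblklist myvm          # find the CD's target device first
    virsh change-media myvm hdc --eject \
        || echo "eject failed - check domain/device names"
else
    echo "virsh not available on this machine"
fi
```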
Hello Sven,
thanks:
On Fri, 2014-06-20 at 11:30 +, Sven Kieske wrote:
>
>
> > On 20.06.2014 13:12, Daniel Helgenberger wrote:
> > here free -m on my ovirtengine:
> >
> > ssh r...@ovirtengine.lab.mbox.loc free -m
> >              total       used       free     shared    buffers     cached
>
On 06/20/2014 02:52 PM, Sven Kieske wrote:
On 20.06.2014 14:19, Dan Kenigsberg wrote:
the host was not fenced, the VMs were fenced.
here is a link to the documentation which should explain what I mean:
https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Virtualization/3.3
On 20.6.2014 0:19, Alon Bar-Lev wrote:
- Original Message -
From: "Moti Asayag"
To: "Jiří Sléžka", "Alon Bar-Lev"
Cc: users@ovirt.org
Sent: Friday, June 20, 2014 1:12:58 AM
Subject: Re: [ovirt-users] host upgrade from ovirt manager and custom iptables rules
- Original Me
On 20.06.2014 14:19, Dan Kenigsberg wrote:
>> the host was not fenced, the VMs were fenced.
>>
>> here is a link to the documentation which should explain what I mean:
>>
>> https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Virtualization/3.3/html/Administration_Guide/Virtu
For those of you with memberships to Stack Overflow, the site is hosting free
vote-based advertisements for getting contributors for open source projects.
The program will feed live remnant ads on Stack Overflow if an ad gets enough
votes.
oVirt has had an ad created [1] (thanks to Tuomas Kuosm
On Fri, Jun 20, 2014 at 09:53:24AM +, Sven Kieske wrote:
>
>
> On 20.06.2014 11:34, Dan Kenigsberg wrote:
> >
> > I do not quite understand the problem you describe.
>
> See below, I hope this clears some things up.
>
> > Does the problem go away if you set your network to "non-required"
you might have a real problem and the migration got stuck; increasing the
timeout will not solve anything.
please attach the vdsm, libvirt, and qemu logs from both the source and destination hosts, plus the engine log
Thanks,
Dafna
On 06/20/2014 01:00 PM, Alexandr Krivulya wrote:
Hi!
How can I adjust the migration timeout? I see this error
Hi!
How can I adjust the migration timeout? I see this error in my vdsm.log when
I try to migrate one of my VMs:
The migration took 130 seconds which is exceeding the configured maximum
time for migrations of 128 seconds. The migration will be aborted.
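For reference, that 128-second limit is not a single knob: it is computed from vdsm's migration_max_time_per_gib_mem setting (seconds allowed per GiB of guest RAM, 64 by default, so a 2 GiB guest gets the 128 s seen in the log). A sketch of raising it, assuming your vdsm build reads the option from /etc/vdsm/vdsm.conf — though as noted elsewhere in the thread, a longer timeout only hides a stuck migration:

```ini
# /etc/vdsm/vdsm.conf -- option name is an assumption, verify it against
# your vdsm version before relying on it
[vars]
# seconds of migration time allowed per GiB of guest memory (default: 64)
migration_max_time_per_gib_mem = 256
```

Restart vdsmd afterwards (`service vdsmd restart`) for the change to take effect.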
On 20.06.2014 13:12, Daniel Helgenberger wrote:
> here free -m on my ovirtengine:
>
> ssh r...@ovirtengine.lab.mbox.loc free -m
>              total       used       free     shared    buffers     cached
> Mem:          3830       2689       1141          0        151        669
> -/+ buffers/
Hello Sven,
here free -m on my ovirtengine:
ssh r...@ovirtengine.lab.mbox.loc free -m
             total       used       free     shared    buffers     cached
Mem:          3830       2689       1141          0        151        669
-/+ buffers/cache:       1868       1961
Swap: 2047
On 20.06.2014 12:15, Daniel Helgenberger wrote:
> You should keep in mind the 4GB min memory requirement for oVirt engine.
> Better go for 8GB in a bare metal system if you like to have more
> headroom for future upgrades.
There is an ongoing discussion on this ML and in some BZs about why this
requireme
Hello Nicolas,
I still consider myself new to oVirt, but since I faced the same question
during my initial research, I'll try to answer.
Major Block: AFAIK there are no i686 builds of oVirt, and they would make no
sense anyway. So you will need a 64-bit machine, which rules out any of the
very old stuff.
Minor bl
Hello John,
thanks for your hint.
Just for understanding: "step by step installation" basically means a
minimal Red Hat / CentOS install, then adding the oVirt repo and running
>>yum install vdsm vdsm-cli<< as described here:
http://www.ovirt.org/Installing_VDSM_from_rpm
or?
Regards
Am 20.06.20
On 20.06.2014 11:34, Dan Kenigsberg wrote:
>
> I do not quite understand the problem you describe.
See below, I hope this clears some things up.
> Does the problem go away if you set your network to "non-required"? If
> your VM's app does not strictly require uninterrupted networking, just
>
Hi,
I'd like to setup another oVirt TEST datacenter, and amongst the
machines I may use, I'd like to dedicate the best ones to the hosts, and
the worst to the manager.
I don't want to use hosting.
I really don't mind if it takes me two hours of clicking in the Java/HTML
web GUI.
What could be
On Fri, Jun 20, 2014 at 08:41:21AM +, Sven Kieske wrote:
> Hi,
>
> please consider this bugreport:
>
> https://bugzilla.redhat.com/show_bug.cgi?id=520
>
> it is about the default fencing behaviour of ovirt
> in the special case of a single node cluster when
> a logical network marked as
On 17.06.2014 11:59, Dan Kenigsberg wrote:
> On Tue, Jun 17, 2014 at 09:40:08AM +0800, John Xue wrote:
>> Yes, it is possibly an agent bug.
>> #cat /etc/sudoers.d/50_ovirt-guest-agent
>> Cmnd_Alias OVIRTAGENT_SCRIPTS =\
>> /usr/share/ovirt-guest-agent/ovirt-hibernate-wrapper.sh *,\
>> /
On 20.6.2014 7:34, Moti Asayag wrote:
- Original Message -
From: "Alon Bar-Lev"
To: "Moti Asayag"
Cc: "Jiří Sléžka" , users@ovirt.org
Sent: Friday, June 20, 2014 1:19:25 AM
Subject: Re: [ovirt-users] host upgrade from ovirt manager and custom iptables rules
- Original Me
Hi,
please consider this bugreport:
https://bugzilla.redhat.com/show_bug.cgi?id=520
it is about the default fencing behaviour of ovirt
in the special case of a single node cluster when
a logical network marked as "required" is not available
on the single host in the cluster.
the default beh
Ok, I have listed my databases:
postgres-# \list
                    List of databases
   Name | Owner | Encoding | Collation | Ctype | Access privileges
--------+-------+----------+-----------+-------+------------------
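For reference, engine-setup normally creates the engine's database as "engine"; the same listing can be produced non-interactively (a sketch, typically run as the postgres user on the engine host):

```shell
# falls back to a message where psql is absent or the server is unreachable
if command -v psql >/dev/null 2>&1; then
    psql -l 2>/dev/null || echo "could not query PostgreSQL (run as the postgres user?)"
else
    echo "psql not available on this machine"
fi
```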