Hi
We use 3.1 on CentOS 6 and need to migrate the engine due to hardware
issues. Looking at
http://wiki.ovirt.org/Backup_engine_db
Are there any other files apart from the db that might be needed, or once the db
is migrated into a freshly installed engine with the same IP and hostname will
everything just work?
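A rough sketch of the move, assuming a default 3.1 engine-setup layout (DB name `engine`, certs under /etc/pki/ovirt-engine — verify against the wiki page above before relying on this). Apart from the db, the engine's CA and certificates are the main thing a fresh install regenerates differently, so they are worth carrying over:

```shell
# On the old engine (paths and DB name assumed from a default engine-setup):
service ovirt-engine stop
su - postgres -c "pg_dump -F c -f /tmp/engine.dump engine"
tar czf /tmp/engine-pki.tar.gz /etc/pki/ovirt-engine /etc/ovirt-engine

# On the freshly installed engine (same IP and hostname):
service ovirt-engine stop
su - postgres -c "dropdb engine; createdb -O engine engine"
su - postgres -c "pg_restore -d engine /tmp/engine.dump"
tar xzf /tmp/engine-pki.tar.gz -C /
service ovirt-engine start
```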
> Hello Jonathan,
>
> I believe you can use the Red Hat Documentation for this.
>
> https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Virtualization/3.1/html/Evaluation_Guide/Evaluation_Guide-VDI.html#Evaluation_Guide-Add_Active_Directory
>
> One of the "gotchas" that I ra
>>
>>
>> I believe this also answers my other thread, that is: is it ok to reboot the
>> management server at any time? (I have another thread about how sluggish
>> mine is running lately, and I'd like to get it rebooted soon before I go
>> on vacation.)
>>
don't reboot the whole server - just restart the ovirt-engine service
>
>>
>> I have a couple of old DL380 G5's and I am putting them into their own
>> cluster for testing various things out.
>> The install of 3.1 from dreyou goes fine onto them but when they try to
>> activate I get the following
>>
>> Host xxx.xxx.net.uk moved to Non-Operational state as host do
>>
>
> Hi,
> how about the kvm-ok tool result? Does it report:
>
> INFO: /dev/kvm exists
> KVM acceleration can be used
>
> Alex
thanks - but for the life of me I can't seem to find that on my filesystem, nor
is Google telling me which package in CentOS 6 it comes from -
do you have any idea?
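For what it's worth, kvm-ok comes from Ubuntu's cpu-checker package and isn't shipped in the CentOS 6 repos; the same information can be had by hand (a rough equivalent of the tool, not the tool itself):

```shell
# Count hardware-virt CPU flags; >0 means VT-x (vmx) or AMD-V (svm) is advertised
egrep -c '(vmx|svm)' /proc/cpuinfo || echo "no HW virt flags in /proc/cpuinfo"
# /dev/kvm appears once the kvm_intel/kvm_amd module is loaded
[ -c /dev/kvm ] && echo "INFO: /dev/kvm exists" || echo "no /dev/kvm"
lsmod | grep kvm || echo "kvm modules not loaded"
```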
Hi
I have a couple of old DL380 G5's and I am putting them into their own cluster
for testing various things out.
The install of 3.1 from dreyou goes fine onto them but when they try to
activate I get the following:
Host xxx.xxx.net.uk moved to Non-Operational state as host does not meet the
cluster's minimum CPU level.
>
> so I guess - either do not build it with -rhev (regular upstream/Fedora
> packages are without the suffix) or build vdsm with the corresponding
> modification in caps.py
>
>
yes - I am not sure why those packages are in dreyou's repo; they weren't there
a couple of weeks ago when the last
> Tom,
> you should have qemu-kvm on Fedora, CentOS and such.
> qemu-kvm-rhev on RHEL hosts is supposed to work with the downstream RHEV
> product. How did you get them there?
> You can modify that if you want in vdsm/caps.py, in the _getKeyPackages() function
http://www.dreyou.org/ovirt/vdsm/Packages/x86_64
non-working node:
node003 ~]# rpm -qa | grep kvm
qemu-kvm-rhev-0.12.1.2-2.295.el6.10.x86_64
qemu-kvm-rhev-tools-0.12.1.2-2.295.el6.10.x86_64
cheers
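The caps.py key-package check mentioned above effectively maps package names to installed NVRs; a quick way to see which qemu build a host actually has (the exact pattern here is my illustration, not vdsm's code):

```shell
# List installed qemu-kvm builds, with or without the -rhev suffix, while
# excluding sub-packages such as qemu-kvm-rhev-tools:
rpm -qa 2>/dev/null | egrep '^qemu-kvm(-rhev)?-[0-9]' || echo "no qemu-kvm package found"
```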
> Hi,
> so what is the OS on each server and what package is actually installed there?
> ie. the rpm name
>
> Thanks,
> michal
>
>
> You should add another network for this to the DC & attach it to the
> cluster(s) where the VMs needing it are defined.
> At this point you should be able to create virtual NICs in the VMs that use
> this network.
> Then you should set it up on the hosts that have this NIC.
>
> If not all
Hi
I am setting up another DC, that is managed by my sole management
node, and this DC will have a requirement that the VM's will need an
additional storage NIC. This NIC is for NFS/CIFS traffic and is
independent of the oVirt VM's disks.
I have cabled the additional physical NIC in the HV's as t
> I think that it's a typo and the right command is: vdsClient.
> The second command doesn't have the typo :).
>
Working node
node002 ~]# vdsClient -s 0 getVdsCaps && vdsClient -s 0 getVdsStats >
/tmp/node002.log
HBAInventory = {'iSCSI': [{'InitiatorName':
'iqn.1994-05.com.redhat:3
> please issue the following commands on both hv's:
>
> vdsClinet -s 0 getVdsCaps && vdsClient -s 0 getVdsStats
>
> I would like to make sure vdsm is indeed report them to the engine.
>
I appear not to have that command in my PATH on either node -
which package provides it?
node002 ~]# yu
Hi
I have just added another HV to a cluster and it's up and running fine. I can
run VM's on it and migrate from other HV's onto it. I do note however that in
the manager there is no KVM version listed as installed, whereas on the other
HV's in the cluster a version is present.
I see that the K
>>
>
> cdroms is a sub-collection of vm, you have to:
>
> 1. create vm
> 2. insert the CD into the cdrom (this is done by updating the cdrom)
> 3. run vm specifying CD as first boot device
>
based on my previous posts I don't suppose you have a working example, do you?
I am having great difficulty making
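Not a tested recipe, but the three steps quoted above could look roughly like this with curl (the engine URL, the VMID/CDID placeholders and the element names are my assumptions about the 3.1 REST API; check them against /api?schema):

```shell
E='https://engine:443/api'; AUTH='admin@internal:PASSWORD'
XML='Content-Type: application/xml'

# 1. create the VM
curl -k -u "$AUTH" -H "$XML" -X POST "$E/vms" -d \
  '<vm><name>testvm</name><cluster><name>Default</name></cluster><template><name>Blank</name></template></vm>'

# 2. insert the CD by updating the VM's cdrom sub-collection
#    (take VMID and CDID from the responses above)
curl -k -u "$AUTH" -H "$XML" -X PUT "$E/vms/VMID/cdroms/CDID" -d \
  '<cdrom><file id="openfiler.iso"/></cdrom>'

# 3. start the VM with the CD first in the boot order for this run only
curl -k -u "$AUTH" -H "$XML" -X POST "$E/vms/VMID/start" -d \
  '<action><vm><os><boot dev="cdrom"/><boot dev="hd"/></os></vm></action>'
```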
On 17 Jan 2013, at 11:31, Juan Jose wrote:
> Hello everybody,
>
> I have been able to solve my NFS problems and now I have configured an ISO
> resource and a data resource in the datacenter, but when I try to execute the
> command "engine-iso-uploader" I see the error below:
>
> [root@ovirt-engine ~]
>>
>
> thank you - I am however finding it hard to attach an iso image to a VM - I
> have the console etc sorted, however what would be required to do the same
> with an iso image?
>
> The below is working apart from the iso
>
> thanks
>
> vm_template = """
> %s
>
> Default
>
>
> Blank
>
> 1. you can find elements names in api schema (download it from [1] on your
> rhevm server).
> 2. vm creation details/examples can be found at [2].
>
> [1] http://server:[port]/api?schema
> [2]
> https://access.redhat.com/knowledge/docs/en-US/Red_Hat_Enterprise_Virtualization/3.1/pdf/Develo
>
>>
>>> true. until we support mixed storage.
>>
>> is that an item on the roadmap?
>>
>
> yes. usually dubbed "SDM"[1], but it has several milestones needed before we
> get there.
> [1] granularity of storage domain, rather than SPM - pool-level granularity.
>
Great to hear - any clues on a
> true. until we support mixed storage.
is that an item on the roadmap?
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
>
> the name of the element is (not ).
>
>
Many thanks - got it
Can I also ask: is it possible to set an ISO to boot from, the boot order, and
also the console type, and if so what are those elements called?
cheers
> you still can't mix different types of storage (local/iscsi) in same DC, so
> you need to add another DC for the hosts which will use the iscsi storage.
I can understand this for a cluster but not for a DC - what is the
reason for not allowing different cluster types within a DC?
thanks
Trying to get going with adding VM's via the API, and so far I have managed to
get quite far - I am however facing this:
vm_template = """
%s
Default
Blank
server
536870912
"""
The VM is created but the type ends up being a desktop and not a server -
what did I do wrong?
thanks
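For what it's worth, in the 3.1 API the server/desktop choice is a top-level <type> element inside <vm>; a payload along these lines (a sketch assuming the schema at /api?schema, not a verified template) should come out as a server:

```shell
# Hypothetical corrected body for POST /api/vms; <type>server</type> is the
# element that selects server vs desktop:
vm_body='<vm>
  <name>testvm</name>
  <cluster><name>Default</name></cluster>
  <template><name>Blank</name></template>
  <type>server</type>
  <memory>536870912</memory>
</vm>'
printf '%s\n' "$vm_body"
```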
Hi - just enabled mail out from my HV's and I am seeing this:
/etc/cron.hourly/vdsm-logrotate:
error: /etc/logrotate.d/vdsm:18 unknown option 'su' -- ignoring line
error: /etc/logrotate.d/vdsm:18 unexpected text
I use dreyou, however I doubt he created this file.
[root@ovirt-node002 ~]# rpm
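For reference, the 'su' directive only exists in logrotate >= 3.8, and CentOS 6 ships an older version, so the line is safe to comment out there (this assumes line 18 of /etc/logrotate.d/vdsm is that directive; check before editing):

```shell
# The file to fix (fall back to a sample copy if it isn't present, e.g. on a
# non-vdsm box):
f=/etc/logrotate.d/vdsm
[ -f "$f" ] || { f=/tmp/vdsm.logrotate; printf 'su root root\nrotate 4\n' > "$f"; }
# Comment out the 'su' directive that the older logrotate doesn't understand,
# keeping a backup of the original:
sed -i.bak 's/^[[:space:]]*su[[:space:]]/# su /' "$f"
grep '^# su' "$f"
```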
>> libvirtError: internal error Process exited while reading console log output
> could this be related to selinux? can you try disabling it and see if
> migration succeeds?
It was indeed the case! My src node was set to disabled and my destination node
was enforcing; this was due to the destin
> File "/usr/share/vdsm/vm.py", line 245, in run
>   self._startUnderlyingMigration()
> File "/usr/share/vdsm/libvirtvm.py", line 474, in _startUnderlyingMigration
>   None, maxBandwidth)
> File "/usr/share/vdsm/libvirtvm.py", line 510, in f
>   ret = attr(*args, **kwargs)
>
>
>
> I have in the past imported iso's without issue however since returning from
> the holiday break I tried today and got this
>
> [root@ovirt-manager ~]# engine-iso-uploader -v -i iso upload openfiler.iso
> Please provide the REST API password for the admin@internal oVirt Engine user
Hi
I have in the past imported iso's without issue, however since returning from
the holiday break I tried today and got this:
[root@ovirt-manager ~]# engine-iso-uploader -v -i iso upload openfiler.iso
Please provide the REST API password for the admin@internal oVirt Engine user
(CTRL+D to abort):
> - Original Message -
>> From: "Tom Brown"
>> To: "users@oVirt.org"
>> Sent: Thursday, January 3, 2013 7:47:00 PM
>> Subject: Re: [Users] What do you want to see in oVirt next?
>>
>> I'd like to see a spice client for
>>
>> For me, I'd like to see official rpms for RHEL6/CentOS6. According to
>> the traffic on this list quite a lot are using Dreyou's packages.
>
> I'm going to second this strongly! Official support would be very much
> appreciated. Bonus points for supporting a migration from the dreyou
> p
> I am running fedora 17 and was able to get ovirt running on the machine but I
> am unable to add that same machine as a 'host'
>
> I am sure I am missing something simple...
>
>
> 2013-01-03 08:05:15,821 INFO [org.ovirt.engine.core.bll.VdsInstaller]
> (pool-3-thread-15) [435d4bf] Instal
> interesting, please search for the migrationCreate command on the destination
> host and search for ERROR afterwards, what do you see?
>
> - Original Message -
>> From: "Tom Brown"
>> To: users@ovirt.org
>> Sent: Thursday, January 3, 2013 4:12:
Hi
I seem to have an issue with a single VM and migration; other VM's can migrate
OK - when migrating from the GUI it appears to just hang, but in the engine.log
I see the following:
2013-01-03 14:03:10,359 INFO [org.ovirt.engine.core.bll.VdsSelector]
(ajp--0.0.0.0-8009-59) Checking for a spec