Re: [Users] Keepalived on oVirt Hosts has engine networking issues

2013-12-01 Thread Assaf Muller
Could you please attach the output of:
vdsClient -s 0 getVdsCaps
(Or without the -s, whichever works)
And:
ip a

On both hosts?
You seem to have made changes since the documentation on the link you provided, 
like separating the management and storage via VLANs on eth0. Any other changes?


Assaf Muller, Cloud Networking Engineer 
Red Hat 

- Original Message -
From: Andrew Lau and...@andrewklau.com
To: users users@ovirt.org
Sent: Sunday, December 1, 2013 4:55:32 AM
Subject: [Users] Keepalived on oVirt Hosts has engine networking issues

Hi, 

I have a scenario where the gluster and oVirt hosts live on the same boxes. To 
keep the gluster volumes highly available in case a box drops, I'm running 
keepalived across the boxes and using its floating IP as the address for the 
storage domain. I documented my setup here in case anyone needs a little more 
info: http://www.andrewklau.com/returning-to-glusterized-ovirt-3-3/ 
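
For anyone replicating this, the keepalived side is just a plain VRRP instance. 
A minimal sketch (the interface name, router id and VIP below are placeholders 
matching the description further down, adjust them to your own setup):

    vrrp_instance gluster_vip {
        # both nodes BACKUP, let priority decide who holds the VIP
        state BACKUP
        # storage VLAN interface (placeholder)
        interface eth0.3
        # any id, identical on both hosts (placeholder)
        virtual_router_id 51
        # use a lower value, e.g. 90, on the peer host
        priority 100
        advert_int 1
        virtual_ipaddress {
            # floating IP used as the storage domain address
            172.16.1.5/32
        }
    }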

However, the engine seems to be picking up the floating IP assigned to 
keepalived as the interface address and interfering with the ovirtmgmt 
migration network. Migrations are failing because the floating IP gets assigned 
to the ovirtmgmt bridge in the engine even though it isn't actually present on 
most hosts (all except one), so vdsm reports destination same as source. 

I've since created a new VLAN interface just for storage to avoid the ovirtmgmt 
conflict, but the engine will still pick up the wrong IP on the storage VLAN 
because of keepalived. This means I can't use the save network feature within 
the engine, as it'll save the floating IP rather than the one already there. Is 
this a bug, or just the way it's designed? 

eth0.2 - ovirtmgmt (172.16.0.11) - management and migration network - engine 
sees, sets and saves 172.16.0.11 
eth0.3 - storagenetwork (172.16.1.11) - gluster network - engine sees, sets 
and saves 172.16.1.5 (my floating IP) 

I hope this makes sense. 

P.S. Can anyone also confirm whether gluster supports multipathing by default? 
If I'm using this keepalived method, am I bottlenecking myself to one host? 

Thanks, 
Andrew 

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Keepalived on oVirt Hosts has engine networking issues

2013-12-01 Thread Samuli Heinonen

Andrew Lau and...@andrewklau.com wrote on 1.12.2013 at 4:55:

 P.S. Can anyone also confirm whether gluster supports multipathing by default?
 If I'm using this keepalived method, am I bottlenecking myself to one host?

If you are using native GlusterFS, either FUSE or libgfapi, the client only 
fetches the volume information from the server at the floating IP. It then 
connects directly to the servers listed in the volume specification. If you are 
running the Gluster daemon on every oVirt node, you could also use 
localhost:/glusterVol to mount your GlusterFS volume. Of course this is not a 
good way to go if you ever plan to add a node without the Gluster daemon.

So, the answer is: you are not bottlenecked to one host if you are using 
native GlusterFS. With NFS, the client connects to one host only.
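
For example, a native mount looks like this (the volume name and mount point 
are placeholders), and the client will still talk to all of the bricks listed 
in the volume info, not just the server it fetched the layout from:

    # fetch the volume layout from one server, then connect to all bricks
    mount -t glusterfs localhost:/glusterVol /mnt/glusterVol

    # or the equivalent /etc/fstab entry
    localhost:/glusterVol  /mnt/glusterVol  glusterfs  defaults,_netdev  0 0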

-samuli

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] ovirtmgmt not installed

2013-12-01 Thread Pascal Jakobi
Mike

Here you go. However, please note that I must investigate the connection
issue that Alon saw. I will do it tomorrow.
Many thanks to you folks.
P

[root@lab2 ~]# vdsClient -s 0 getVdsCaps
HBAInventory = {'FC': [], 'iSCSI': [{'InitiatorName':
'iqn.1994-05.com.redhat:eea139a8'}]}
ISCSIInitiatorName = 'iqn.1994-05.com.redhat:eea139a8'
bondings = {'bond0': {'addr': '',
  'cfg': {},
  'hwaddr': 'fa:7e:79:56:5a:c2',
  'ipv6addrs': [],
  'mtu': '1500',
  'netmask': '',
  'slaves': []}}
bridges = {}
clusterLevels = ['3.0', '3.1', '3.2', '3.3']
cpuCores = '4'
cpuFlags =
'fpu,vme,de,pse,tsc,msr,pae,mce,cx8,apic,sep,mtrr,pge,mca,cmov,pat,pse36,clflush,dts,acpi,mmx,fxsr,sse,sse2,ss,ht,tm,pbe,syscall,nx,pdpe1gb,rdtscp,lm,constant_tsc,arch_perfmon,pebs,bts,rep_good,nopl,xtopology,nonstop_tsc,aperfmperf,eagerfpu,pni,pclmulqdq,dtes64,monitor,ds_cpl,vmx,smx,est,tm2,ssse3,cx16,xtpr,pdcm,pcid,dca,sse4_1,sse4_2,x2apic,popcnt,tsc_deadline_timer,aes,xsave,avx,lahf_lm,ida,arat,epb,xsaveopt,pln,pts,dtherm,tpr_shadow,vnmi,flexpriority,ept,vpid,model_Nehalem,model_Conroe,model_coreduo,model_core2duo,model_Penryn,model_Westmere,model_n270,model_SandyBridge'
cpuModel = 'Intel(R) Xeon(R) CPU E5-1620 0 @ 3.60GHz'
cpuSockets = '1'
cpuSpeed = '3744.000'
cpuThreads = '8'
emulatedMachines = ['pc',
'q35',
'isapc',
'pc-0.10',
'pc-0.11',
'pc-0.12',
'pc-0.13',
'pc-0.14',
'pc-0.15',
'pc-1.0',
'pc-1.1',
'pc-1.2',
'pc-1.3',
'none']
guestOverhead = '65'
hooks = {}
kvmEnabled = 'false'
lastClient = '192.168.1.41'
lastClientIface = 'em1'
management_ip = '0.0.0.0'
memSize = '16001'
netConfigDirty = 'False'
networks = {}
nics = {'em1': {'addr': '192.168.1.42',
'cfg': {},
'hwaddr': '00:1a:6b:51:de:b4',
'ipv6addrs': ['fe80::21a:6bff:fe51:deb4/64'],
'mtu': '1500',
'netmask': '255.255.255.0',
'speed': 100}}
operatingSystem = {'name': 'Fedora', 'release': '2', 'version': '19'}
packages2 = {'kernel': {'buildtime': 1384978944.0,
'release': '200.fc19.x86_64',
'version': '3.11.9'},
 'libvirt': {'buildtime': 1384730741,
 'release': '2.fc19',
 'version': '1.0.5.7'},
 'mom': {'buildtime': 1375215820, 'release': '3.fc19',
'version': '0.3.2'},
 'qemu-img': {'buildtime': 1383700301,
  'release': '13.fc19',
  'version': '1.4.2'},
 'qemu-kvm': {'buildtime': 1383700301,
  'release': '13.fc19',
  'version': '1.4.2'},
 'spice-server': {'buildtime': 1383130020,
  'release': '3.fc19',
  'version': '0.12.4'},
 'vdsm': {'buildtime': 1384274283, 'release': '11.fc19',
'version': '4.13.0'}}
reservedMem = '321'
software_revision = '11'
software_version = '4.13'
supportedENGINEs = ['3.0', '3.1', '3.2', '3.3']
supportedProtocols = ['2.2', '2.3']
uuid = '0A583269-811F-E211-AA06-001A6B51DEB4'
version_name = 'Snow Man'
vlans = {}
vmTypes = ['kvm']



2013/12/1 Mike Kolesnik mkole...@redhat.com

 --

 Hi there


 Hi Pascal,




 I installed a console on F19, then a F19 host (time 11:09 today).

 Everything works fine, apart from the installation of the mgmt network at
 the end.
 Can someone tell me what's going wrong?


 Can you please send the output of vdsCaps from the host (vdsClient -s 0
 getVdsCaps)?


 Thanks in advance

 Pascal





-- 
Pascal Jakobi
116 rue de Stalingrad
93100 Montreuil, France

+33 6 87 47 58 19
pascal.jak...@gmail.com
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] ovirtmgmt not installed

2013-12-01 Thread Assaf Muller
Did you install VDSM from nightly or stable?

Assaf Muller, Cloud Networking Engineer 
Red Hat 

- Original Message -
From: Pascal Jakobi pascal.jak...@gmail.com
To: Mike Kolesnik mkole...@redhat.com
Cc: users@ovirt.org
Sent: Sunday, December 1, 2013 10:58:40 AM
Subject: Re: [Users] ovirtmgmt not installed

Mike 

Here you go. However, please note that I must investigate the connection issue 
that Alon saw. I will do it tomorrow. 
Many thanks to you folks. 
P 

[root@lab2 ~]# vdsClient -s 0 getVdsCaps 
HBAInventory = {'FC': [], 'iSCSI': [{'InitiatorName': 
'iqn.1994-05.com.redhat:eea139a8'}]} 
ISCSIInitiatorName = 'iqn.1994-05.com.redhat:eea139a8' 
bondings = {'bond0': {'addr': '', 
'cfg': {}, 
'hwaddr': 'fa:7e:79:56:5a:c2', 
'ipv6addrs': [], 
'mtu': '1500', 
'netmask': '', 
'slaves': []}} 
bridges = {} 
clusterLevels = ['3.0', '3.1', '3.2', '3.3'] 
cpuCores = '4' 
cpuFlags = 
'fpu,vme,de,pse,tsc,msr,pae,mce,cx8,apic,sep,mtrr,pge,mca,cmov,pat,pse36,clflush,dts,acpi,mmx,fxsr,sse,sse2,ss,ht,tm,pbe,syscall,nx,pdpe1gb,rdtscp,lm,constant_tsc,arch_perfmon,pebs,bts,rep_good,nopl,xtopology,nonstop_tsc,aperfmperf,eagerfpu,pni,pclmulqdq,dtes64,monitor,ds_cpl,vmx,smx,est,tm2,ssse3,cx16,xtpr,pdcm,pcid,dca,sse4_1,sse4_2,x2apic,popcnt,tsc_deadline_timer,aes,xsave,avx,lahf_lm,ida,arat,epb,xsaveopt,pln,pts,dtherm,tpr_shadow,vnmi,flexpriority,ept,vpid,model_Nehalem,model_Conroe,model_coreduo,model_core2duo,model_Penryn,model_Westmere,model_n270,model_SandyBridge'
 
cpuModel = 'Intel(R) Xeon(R) CPU E5-1620 0 @ 3.60GHz' 
cpuSockets = '1' 
cpuSpeed = '3744.000' 
cpuThreads = '8' 
emulatedMachines = ['pc', 
'q35', 
'isapc', 
'pc-0.10', 
'pc-0.11', 
'pc-0.12', 
'pc-0.13', 
'pc-0.14', 
'pc-0.15', 
'pc-1.0', 
'pc-1.1', 
'pc-1.2', 
'pc-1.3', 
'none'] 
guestOverhead = '65' 
hooks = {} 
kvmEnabled = 'false' 
lastClient = '192.168.1.41' 
lastClientIface = 'em1' 
management_ip = '0.0.0.0' 
memSize = '16001' 
netConfigDirty = 'False' 
networks = {} 
nics = {'em1': {'addr': '192.168.1.42', 
'cfg': {}, 
'hwaddr': '00:1a:6b:51:de:b4', 
'ipv6addrs': ['fe80::21a:6bff:fe51:deb4/64'], 
'mtu': '1500', 
'netmask': '255.255.255.0', 
'speed': 100}} 
operatingSystem = {'name': 'Fedora', 'release': '2', 'version': '19'} 
packages2 = {'kernel': {'buildtime': 1384978944.0, 
'release': '200.fc19.x86_64', 
'version': '3.11.9'}, 
'libvirt': {'buildtime': 1384730741, 
'release': '2.fc19', 
'version': '1.0.5.7'}, 
'mom': {'buildtime': 1375215820, 'release': '3.fc19', 'version': '0.3.2'}, 
'qemu-img': {'buildtime': 1383700301, 
'release': '13.fc19', 
'version': '1.4.2'}, 
'qemu-kvm': {'buildtime': 1383700301, 
'release': '13.fc19', 
'version': '1.4.2'}, 
'spice-server': {'buildtime': 1383130020, 
'release': '3.fc19', 
'version': '0.12.4'}, 
'vdsm': {'buildtime': 1384274283, 'release': '11.fc19', 'version': '4.13.0'}} 
reservedMem = '321' 
software_revision = '11' 
software_version = '4.13' 
supportedENGINEs = ['3.0', '3.1', '3.2', '3.3'] 
supportedProtocols = ['2.2', '2.3'] 
uuid = '0A583269-811F-E211-AA06-001A6B51DEB4' 
version_name = 'Snow Man' 
vlans = {} 
vmTypes = ['kvm'] 



2013/12/1 Mike Kolesnik  mkole...@redhat.com  







Hi there 

Hi Pascal, 






I installed a console on F19, then a F19 host (time 11:09 today). 
Everything works fine, apart from the installation of the mgmt network at the 
end. 
Can someone tell me what's going wrong ? 

Can you please send the output of vdsCaps from the host (vdsClient -s 0 
getVdsCaps)? 




Thxs in advance 
Pascal 




-- 
Pascal Jakobi 
116 rue de Stalingrad 
93100 Montreuil, France 
+33 6 87 47 58 19 
pascal.jak...@gmail.com 

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Keepalived on oVirt Hosts has engine networking issues

2013-12-01 Thread Andrew Lau
I put the management and storage on separate VLANs to temporarily avoid the
floating IP address issue. I also bonded the two NICs, but I don't think that
should matter.

The other server was brought down the other day for some maintenance; I hope
to get it back up in a few days. But I can tell you a few things I noticed:

ip a - it'll list the floating IP on both servers even if it's only active on
one.

I've got about 10 other networks so I've snipped out quite a bit.

# ip a
snip
130: bond0.2@bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc
noqueue state UP
link/ether 00:10:18:2e:6a:cb brd ff:ff:ff:ff:ff:ff
inet 172.16.0.11/24 brd 172.16.0.255 scope global bond0.2
inet6 fe80::210:18ff:fe2e:6acb/64 scope link
   valid_lft forever preferred_lft forever
131: bond0.3@bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc
noqueue state UP
link/ether 00:10:18:2e:6a:cb brd ff:ff:ff:ff:ff:ff
inet 172.16.1.11/24 brd 172.16.1.255 scope global bond0.3
inet 172.16.1.5/32 scope global bond0.3
inet6 fe80::210:18ff:fe2e:6acb/64 scope link
   valid_lft forever preferred_lft forever
/snip


# vdsClient -s 0 getVdsCaps
snip
'storage_network': {'addr': '172.16.1.5',
'bridged': False,
'gateway': '172.16.1.1',
'iface': 'bond0.3',
'interface': 'bond0.3',
'ipv6addrs':
['fe80::210:18ff:fe2e:6acb/64'],
'ipv6gateway': '::',
'mtu': '1500',
'netmask': '255.255.255.255',
'qosInbound': '',
'qosOutbound': ''},
snip
vlans = {'bond0.2': {'addr': '172.16.0.11',
 'cfg': {'BOOTPROTO': 'none',
 'DEFROUTE': 'yes',
 'DEVICE': 'bond0.2',
 'GATEWAY': '172.16.0.1',
 'IPADDR': '172.16.0.11',
 'NETMASK': '255.255.255.0',
 'NM_CONTROLLED': 'no',
 'ONBOOT': 'yes',
 'VLAN': 'yes'},
 'iface': 'bond0',
 'ipv6addrs': ['fe80::210:18ff:fe2e:6acb/64'],
 'mtu': '1500',
 'netmask': '255.255.255.0',
 'vlanid': 2},
 'bond0.3': {'addr': '172.16.1.5',
 'cfg': {'BOOTPROTO': 'none',
 'DEFROUTE': 'no',
 'DEVICE': 'bond0.3',
 'IPADDR': '172.16.1.11',
 'NETMASK': '255.255.255.0',
 'NM_CONTROLLED': 'no',
 'ONBOOT': 'yes',
 'VLAN': 'yes'},
 'iface': 'bond0',
 'ipv6addrs': ['fe80::210:18ff:fe2e:6acb/64'],
 'mtu': '1500',
 'netmask': '255.255.255.255',
 'vlanid': 3},

I hope that's enough info; if not, I'll post the full config for both when I
can bring it back up.

Cheers,
Andrew.


On Sun, Dec 1, 2013 at 7:15 PM, Assaf Muller amul...@redhat.com wrote:

 Could you please attach the output of:
 vdsClient -s 0 getVdsCaps
 (Or without the -s, whichever works)
 And:
 ip a

 On both hosts?
 You seem to have made changes since the documentation on the link you
 provided, like separating the management and storage via VLANs on eth0. Any
 other changes?


 Assaf Muller, Cloud Networking Engineer
 Red Hat

 - Original Message -
 From: Andrew Lau and...@andrewklau.com
 To: users users@ovirt.org
 Sent: Sunday, December 1, 2013 4:55:32 AM
 Subject: [Users] Keepalived on oVirt Hosts has engine networking issues

 Hi,

 I have a scenario where the gluster and oVirt hosts live on the same boxes.
 To keep the gluster volumes highly available in case a box drops, I'm running
 keepalived across the boxes and using its floating IP as the address for the
 storage domain. I documented my setup here in case anyone needs a little more
 info: http://www.andrewklau.com/returning-to-glusterized-ovirt-3-3/

 However, the engine seems to be picking up the floating IP assigned to
 keepalived as the interface address and interfering with the ovirtmgmt
 migration network. Migrations are failing because the floating IP gets
 assigned to the ovirtmgmt bridge in the engine even though it isn't actually
 present on most hosts (all except one), so vdsm reports destination same as
 source.

 I've since created a new VLAN interface just for storage to avoid the
 ovirtmgmt conflict, but the engine will still pick up the wrong IP on the
 storage VLAN because of keepalived. This means I can't use the save network
 feature within the engine, as it'll save the floating IP rather than the one
 already there.

Re: [Users] ovirtmgmt not installed

2013-12-01 Thread Moti Asayag
Hi Pascal,

- Original Message -
 From: Pascal Jakobi pascal.jak...@gmail.com
 To: Mike Kolesnik mkole...@redhat.com
 Cc: users@ovirt.org, masa...@redhat.com
 Sent: Sunday, December 1, 2013 10:58:40 AM
 Subject: Re: ovirtmgmt not installed
 
 Mike
 
 Here you go. However, please note that I must investigate the connection
 issue that Alon saw. will do it tomorrow.
 Many thanks to you folks.
 P

According to the 'nics' element in the output, the 'em1' device is missing its
default gateway entry under the 'cfg' element.

One possible reason is that there is no
'/etc/sysconfig/network-scripts/ifcfg-em1' file, so vdsm fails to obtain the
default gateway for it.

Could you create this file yourself and retry installing the host?
After creating the file (make sure it contains NM_CONTROLLED=no), restart the
network service and run 'vdsClient -s 0 getVdsCaps' to check that the 'em1'
output contains the data in its 'cfg' sub-element. A sample ifcfg file is
sketched below.
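
Something along these lines should do; the IP and netmask are taken from your
getVdsCaps output, while the GATEWAY value is only a guess based on your
subnet, so adjust it to your actual router:

    # /etc/sysconfig/network-scripts/ifcfg-em1
    DEVICE=em1
    ONBOOT=yes
    BOOTPROTO=none
    IPADDR=192.168.1.42
    NETMASK=255.255.255.0
    # GATEWAY below is assumed - use your real default gateway
    GATEWAY=192.168.1.1
    NM_CONTROLLED=no

Then 'service network restart' and re-run getVdsCaps.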

 
 [root@lab2 ~]# vdsClient -s 0 getVdsCaps
 HBAInventory = {'FC': [], 'iSCSI': [{'InitiatorName':
 'iqn.1994-05.com.redhat:eea139a8'}]}
 ISCSIInitiatorName = 'iqn.1994-05.com.redhat:eea139a8'
 bondings = {'bond0': {'addr': '',
   'cfg': {},
   'hwaddr': 'fa:7e:79:56:5a:c2',
   'ipv6addrs': [],
   'mtu': '1500',
   'netmask': '',
   'slaves': []}}
 bridges = {}
 clusterLevels = ['3.0', '3.1', '3.2', '3.3']
 cpuCores = '4'
 cpuFlags =
 'fpu,vme,de,pse,tsc,msr,pae,mce,cx8,apic,sep,mtrr,pge,mca,cmov,pat,pse36,clflush,dts,acpi,mmx,fxsr,sse,sse2,ss,ht,tm,pbe,syscall,nx,pdpe1gb,rdtscp,lm,constant_tsc,arch_perfmon,pebs,bts,rep_good,nopl,xtopology,nonstop_tsc,aperfmperf,eagerfpu,pni,pclmulqdq,dtes64,monitor,ds_cpl,vmx,smx,est,tm2,ssse3,cx16,xtpr,pdcm,pcid,dca,sse4_1,sse4_2,x2apic,popcnt,tsc_deadline_timer,aes,xsave,avx,lahf_lm,ida,arat,epb,xsaveopt,pln,pts,dtherm,tpr_shadow,vnmi,flexpriority,ept,vpid,model_Nehalem,model_Conroe,model_coreduo,model_core2duo,model_Penryn,model_Westmere,model_n270,model_SandyBridge'
 cpuModel = 'Intel(R) Xeon(R) CPU E5-1620 0 @ 3.60GHz'
 cpuSockets = '1'
 cpuSpeed = '3744.000'
 cpuThreads = '8'
 emulatedMachines = ['pc',
 'q35',
 'isapc',
 'pc-0.10',
 'pc-0.11',
 'pc-0.12',
 'pc-0.13',
 'pc-0.14',
 'pc-0.15',
 'pc-1.0',
 'pc-1.1',
 'pc-1.2',
 'pc-1.3',
 'none']
 guestOverhead = '65'
 hooks = {}
 kvmEnabled = 'false'
 lastClient = '192.168.1.41'
 lastClientIface = 'em1'
 management_ip = '0.0.0.0'
 memSize = '16001'
 netConfigDirty = 'False'
 networks = {}
 nics = {'em1': {'addr': '192.168.1.42',
 'cfg': {},
 'hwaddr': '00:1a:6b:51:de:b4',
 'ipv6addrs': ['fe80::21a:6bff:fe51:deb4/64'],
 'mtu': '1500',
 'netmask': '255.255.255.0',
 'speed': 100}}
 operatingSystem = {'name': 'Fedora', 'release': '2', 'version': '19'}
 packages2 = {'kernel': {'buildtime': 1384978944.0,
 'release': '200.fc19.x86_64',
 'version': '3.11.9'},
  'libvirt': {'buildtime': 1384730741,
  'release': '2.fc19',
  'version': '1.0.5.7'},
  'mom': {'buildtime': 1375215820, 'release': '3.fc19',
 'version': '0.3.2'},
  'qemu-img': {'buildtime': 1383700301,
   'release': '13.fc19',
   'version': '1.4.2'},
  'qemu-kvm': {'buildtime': 1383700301,
   'release': '13.fc19',
   'version': '1.4.2'},
  'spice-server': {'buildtime': 1383130020,
   'release': '3.fc19',
   'version': '0.12.4'},
  'vdsm': {'buildtime': 1384274283, 'release': '11.fc19',
 'version': '4.13.0'}}
 reservedMem = '321'
 software_revision = '11'
 software_version = '4.13'
 supportedENGINEs = ['3.0', '3.1', '3.2', '3.3']
 supportedProtocols = ['2.2', '2.3']
 uuid = '0A583269-811F-E211-AA06-001A6B51DEB4'
 version_name = 'Snow Man'
 vlans = {}
 vmTypes = ['kvm']
 
 
 
 2013/12/1 Mike Kolesnik mkole...@redhat.com
 
  --
 
  *Hi there *
 
 
  Hi Pascal,
 
 
 
 
  *I installed a console on F19, then a F19 host (time 11:09 today).*
 
  *Everything works fine, apart from the installation of the mgmt network at
  the end. *
  *Can 

Re: [Users] Is it possible to limit migration speed and number of concurrent migrations?

2013-12-01 Thread Dan Kenigsberg
On Sat, Nov 30, 2013 at 12:25:31AM +0200, Itamar Heim wrote:
 On 11/29/2013 02:08 PM, Dan Kenigsberg wrote:
 On Fri, Nov 29, 2013 at 11:49:05AM +0100, Ernest Beinrohr wrote:
 I want to put some of my hvs into maintenance, but that causes a
 migration storm, which causes a temporary unavailability of the hv
 and ovirt fences it while migrations are still running.
 
 So I have to migrate one-by-one manually and then put the hv to maintenance.
 
 Is it possible to limit migration speed and number of concurrent migrations?
 
 You could set max_outgoing_migrations to 1 (in each
 /etc/vdsm/vdsm.conf), but even a single VM migrating may choke your
 connection (depends which wins, the CPU running qemu, or your
 bandwidth).
 
 Currently, your only option is to define a migration network for your
 cluster (say, over a different nic, or over a vlan), and use tools
 external to ovirt to throttle the bandwidth on it (virsh net-edit
 <network name> and http://libvirt.org/formatnetwork.html#elementQoS can
 come in handy). ovirt-3.4 should expose the ability to set QoS limits on
 migration networks.
 
 
 I thought we can control both the number of concurrent migrations
 and the bandwidth per migration which defaults to 30MB/s?

You are perfectly right: I forgot about 'migration_max_bandwidth', which
vdsm sets to 32MiBps (per single migration).

Sorry for misleading the list and its archives.
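
For the archives, a rough sketch of the knobs mentioned above (the values are
examples, not recommendations):

    # /etc/vdsm/vdsm.conf (restart vdsmd after editing)
    [vars]
    # MiB/s per single outgoing migration (vdsm's default is 32)
    migration_max_bandwidth = 32
    # number of concurrent outgoing migrations per host
    max_outgoing_migrations = 1

and the libvirt network QoS element referenced above looks roughly like:

    <bandwidth>
      <inbound average='30000' peak='40000' burst='5120'/>
      <outbound average='30000' peak='40000' burst='5120'/>
    </bandwidth>

(average/peak are in KiB/s, burst in KiB).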
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[Users] VM wont start

2013-12-01 Thread Maurice James
I get the following error when trying to start a new VM

 

VM CentOS is down. Exit message: internal error ifname vnet9 not in key map

 

 

I do not have a vnet0 configured anywhere. Why is it looking for it? I'm at a
loss. My VMs won't start because of this error.

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] oVirt Node 3.0.3-1 for oVirt 3.3 release

2013-12-01 Thread Paul Jansen
Replying to my own message here.
I've just tried to install this latest oVirt node image again and made notes 
where it fell over.

At the 'Keyboard layout selection' screen I press enter to select the default 
'US English' option.
After a few seconds delay I then get this error screen:

An exception occurred
'other'
close

After pressing close I'm then back at the 'Keyboard layout selection' screen.
If I then press enter on 'US International' and then tab down to the option to 
continue, I can make my way to the screen where I have to enter the password for 
the admin user. After continuing from there I immediately get an error on the 
install progress screen, at the 40% point.
The error reads:

Exception:
ValueError: invalid literal for int() with base 10: ''
reboot



I just tried another install and this time selected 'US International' on the 
first attempt at the 'Keyboard layout selection' screen.



On Friday, 29 November 2013 10:57 PM, Paul Jansen vla...@yahoo.com.au wrote:
 
Fabian,
I've downloaded the 
ovirt-node-iso-3.0.3-1.1.vdsm.fc19.iso image and have tried installing it on 
two separate machines.
Whether I do a regular install (default boot screen option) or choose 
'reinstall' from the troubleshooting menu I get tripped up at the same spot.
When it comes time to select a keyboard layout (I think this is correct) US 
English is the default.  I press enter at that point and after a short delay 
there is a very uninformative error message that appears.  I can't remember 
what it is off the top of my head and the systems are at work (I'm at home now).
After this point I can continue but the install fails immediately after I get 
past the install target selection screen.
Has anyone successfully installed a node using this iso image?
I even tried removing partitions from the drive beforehand, to the point of 
dd'ing the first 512 bytes to make sure there was no partition table.
This didn't help - I still ran into the install error described above.

I particularly wanted to try an FC19 based node to try out live storage 
migration on NFS, as I was having problems with the EL6.4 node with this.
Is there a chance that that issue might be resolved for EL6 based nodes by an 
upcoming  new EL6.5 based node spin?

Sorry I cannot provide the exact error details for the install issue now, but I 
can get the error detail in the next couple of days if required.

Thanks,
Paul


   This time the install seems to work.
Unfortunately, when it comes up the SSH service doesn't seem to be working.
When I try to ssh to the machine I get the following:
Read from socket failed: Connection reset by peer

The above was after configuring the node for an Ovirt engine, accepting the 
certificate and entering a password.
The ovirt web GUI shows this node as non responsive.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users