Re: [ovirt-users] [ovirt-devel] Networking fails for VM running on CentOS 6.7, works on CentOS 6.5

2015-11-29 Thread mad Engineer
Each VM has 4 NICs connected to 4 different bridges, each of which is
connected to a different physical interface in VLAN access mode. There is no
bonding between interfaces. I have an IP assigned on the "ovirtmgmt" bridge,
which is used for storage, migration, console and VM internet access. All
interfaces are 10G with MTU 9000.
During my test the only change I made was booting into the CentOS 6.5 kernel.
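
For reference, a rough way to capture the bridge/NIC state under each kernel
so the two can be diffed (the bridge and interface names below are
placeholders for one of the four pairs; the exact names differ per setup):

  # run once under the 6.5 kernel and once under the 6.7 kernel, then diff
  brctl show ovirtmgmt          # bridge membership
  ip -d link show eth0          # MTU and link details as the kernel sees them
  ethtool -k eth0               # offload settings (gro/lro/tso), which can differ per kernel/driver
  ethtool -i eth0               # NIC driver and firmware version
  cat /proc/net/vlan/config     # VLAN devices, if the 8021q module is loaded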

I am also facing an issue with PXE booting guest machines; switching to the
old kernel fixes this too.

On Sun, Nov 29, 2015 at 1:29 PM, Dan Kenigsberg  wrote:
> On Sat, Nov 28, 2015 at 08:10:06PM +0530, mad Engineer wrote:
>> Hello all, I am having a strange network issue with VMs that are running on
>> CentOS 6.7 oVirt nodes.
>>
>> I recently added one more oVirt node running CentOS 6.7 and upgraded all
>> other nodes from CentOS 6.5 to CentOS 6.7.
>>
>> All VMs running on nodes with CentOS 6.7 as the host operating system fail to
>> reach the network gateway, but if I reboot the same host into the CentOS 6.5
>> kernel everything works fine (without changing any network configuration).
>>
>> Initially I thought it was a configuration issue, but it is there on all nodes;
>> if I reboot into the old kernel everything works.
>>
>> I am aware of the ghost vlan0 issue in the CentOS 6.6 kernel, but not of any
>> issue in CentOS 6.7. Also, all my servers are up to date.
>>
>>
>> All physical interfaces are in VLAN access mode, connected to Nexus 5k
>> switches.
>>
>>
>> working kernel- 2.6.32-431.20.3.el6.x86_64
>>
>> non working kernel- 2.6.32-573.8.1.el6.x86_64
>
> Can you provide the topology of your VM network config (vlan, bond, bond
> options, bridge options)? Do you have an IP address on the bridge?
>
> (I have not seen this happen myself)
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Networking fails for VM running on CentOS 6.7, works on CentOS 6.5

2015-11-28 Thread mad Engineer
Hello all, I am having a strange network issue with VMs that are running on
CentOS 6.7 oVirt nodes.

I recently added one more oVirt node running CentOS 6.7 and upgraded all
other nodes from CentOS 6.5 to CentOS 6.7.

All VMs running on nodes with CentOS 6.7 as the host operating system fail to
reach the network gateway, but if I reboot the same host into the CentOS 6.5
kernel everything works fine (without changing any network configuration).

Initially I thought it was a configuration issue, but it is there on all nodes;
if I reboot into the old kernel everything works.

I am aware of the ghost vlan0 issue in the CentOS 6.6 kernel, but not of any
issue in CentOS 6.7. Also, all my servers are up to date.


All physical interfaces are in VLAN access mode, connected to Nexus 5k
switches.


working kernel- 2.6.32-431.20.3.el6.x86_64

non working kernel- 2.6.32-573.8.1.el6.x86_64

Any idea?
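
A quick way to narrow down where packets stop under the non-working kernel
might be to capture on the VM's bridge and on the physical NIC while a guest
pings its gateway (bridge/NIC names below are placeholders):

  tcpdump -i ovirtmgmt -n -e icmp or arp   # do requests leave the bridge, with the expected MACs?
  tcpdump -i eth0 -n -e icmp or arp        # do they hit the physical NIC, and do replies come back?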
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] engine-setup fails with FATAL: Cannot execute sql command:

2015-03-31 Thread mad Engineer
Hello all,
   I am trying oVirt on CentOS 6.5, but engine-setup fails at the PostgreSQL
configuration stage with this error:

[ ERROR ] Failed to execute stage 'Misc configuration': Command
'/usr/share/ovirt-engine/dbscripts/create_schema.sh' failed to execute

The log shows this:

2015-03-31 17:47:15 DEBUG
otopi.plugins.ovirt_engine_setup.ovirt_engine.db.schema
plugin.execute:866 execute-output:
['/usr/share/ovirt-engine/dbscripts/schema.sh', '-s', 'localhost',
'-p', '5432', '-u', 'engine', '-d', 'engine', '-l',
'/var/log/ovirt-engine/setup/ovirt-engine-setup-20150331174256-pehvs7.log',
'-c', 'apply'] stderr:
psql:/usr/share/ovirt-engine/dbscripts/upgrade/03_05_0210_change_group_ids.sql:59:
ERROR:  could not open relation with OID 32878
CONTEXT:  SQL statement "CREATE temp TABLE tmp_users_groups ON COMMIT
DROP AS SELECT fnsplitteruuid(group_ids) AS group_id, user_id FROM
users"
PL/pgSQL function "__temp_change_group_ids_03_05_0210" line 35 at SQL statement
FATAL: Cannot execute sql command:
--file=/usr/share/ovirt-engine/dbscripts/upgrade/03_05_0210_change_group_ids.sql

2015-03-31 17:47:15 DEBUG otopi.context context._executeMethod:152
method exception
Traceback (most recent call last):
  File "/usr/lib/python2.6/site-packages/otopi/context.py", line 142,
in _executeMethod
method['method']()
  File 
"/usr/share/ovirt-engine/setup/bin/../plugins/ovirt-engine-setup/ovirt-engine/db/schema.py",
line 291, in _misc
oenginecons.EngineDBEnv.PGPASS_FILE
  File "/usr/lib/python2.6/site-packages/otopi/plugin.py", line 871, in execute
command=args[0],
RuntimeError: Command '/usr/share/ovirt-engine/dbscripts/schema.sh'
failed to execute
2015-03-31 17:47:15 ERROR otopi.context context._executeMethod:161
Failed to execute stage 'Misc configuration': Command
'/usr/share/ovirt-engine/dbscripts/schema.sh' failed to execute


Can someone please help me with this?
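
In case it helps, one recovery path that is often suggested for a failed first
setup (a sketch only; wiping the partial database is an assumption, so take a
backup first if anything on the machine matters):

  engine-cleanup                          # remove the partially configured engine
  su - postgres -c "psql -l"              # check whether a partial 'engine' database is left behind
  su - postgres -c "dropdb engine"        # if so, drop it (destroys any previous engine data)
  engine-setup                            # run setup again from scratch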
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Adding multiple NFS storage and migrate volumes to other

2015-02-01 Thread mad Engineer
Thank you all for your quick response.
   I have one more query: if I pause all VMs and put the hosts into
maintenance, can I then, after adding the new disks in the NAS, copy the data
from share1 (with the older disks) to share2 (new disks), remove share1 and
rename share2 to share1, and finally resume the hosts and VMs? Can this
method work?
There is no change in the NFS IP address or in the share name; the only
change is in the hard disk type.

This is our fallback approach if we cannot find one more NAS.

Do you see any issue with this approach?
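
A rough sketch of that copy step, run on the NAS while the VMs are down and
the hosts are in maintenance (export paths are placeholders; the key
assumptions are that ownership 36:36 (vdsm:kvm), permissions and sparse files
are preserved, and that nothing writes to the share during the copy):

  rsync -aHAX --sparse /exports/share1/ /exports/share2/   # copy the domain contents to the new disks
  mv /exports/share1 /exports/share1.old                   # swap the shares so the NFS path stays the same
  mv /exports/share2 /exports/share1
  exportfs -ra                                             # re-export (assumes /etc/exports points at /exports/share1)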



On Mon, Feb 2, 2015 at 5:38 AM, Dan Yasny  wrote:
> Sure, create and activate a second NFS-based storage domain. Move the VMs
> over (right-click the VM -> Move).
> To deactivate the first SD, when it's empty, first put it in maintenance, in
> the DC > Storage tab.
>
> On Sun, Feb 1, 2015 at 10:27 AM, mad Engineer 
> wrote:
>>
>> Hello all,
>>  We are using NFS shared storage between hosts and have
>> running VMs.
>> Now we are planning to remove the existing disks and replace them with
>> different disks.
>> Is there any way we can add one more NFS share, say NFS2, and migrate all
>> volumes to NFS2 while we upgrade the primary NFS? Is it supported?
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Adding multiple NFS storage and migrate volumes to other

2015-02-01 Thread mad Engineer
Hello all,
 We are using NFS shared storage between hosts and have running VMs.
Now we are planning to remove the existing disks and replace them with
different disks.
Is there any way we can add one more NFS share, say NFS2, and migrate all
volumes to NFS2 while we upgrade the primary NFS? Is it supported?
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Network performance drop when compared to other hypervisor with vhost_net on for UDP

2015-01-14 Thread mad Engineer
Thanks Martin,
  How can we see the changes made by tuned?
For "virtual-guest" I see it changes the I/O scheduler to deadline. Is there
any way to see which parameters each profile is going to change?
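
For what it's worth, on EL6 the tuned profiles appear to be plain scripts on
disk, so one way to see what a profile will change might be (paths assume a
default RHEL/CentOS 6 tuned install):

  tuned-adm list                                        # available profiles
  ls /etc/tune-profiles/virtual-guest/                  # the scripts that make up one profile
  cat /etc/tune-profiles/virtual-guest/ktune.sh         # scheduler, readahead and similar tweaks
  cat /etc/tune-profiles/virtual-guest/sysctl.ktune     # sysctl values the profile applies
  diff -r /etc/tune-profiles/default /etc/tune-profiles/virtual-guest   # what differs from the default profile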

Thanks

On Wed, Jan 14, 2015 at 6:14 PM, Martin PavlĂ­k  wrote:
> Hi,
>
> Off the top of my head, you could try to play with tuned on both the guest
> and the host.
>
> ###Install###
>  yum install tuned
>  /etc/init.d/tuned start
>  chkconfig tuned on
>
> ###usage###
> list the profile:
>  tuned-adm list
>
> change your profile:
> tuned-adm profile throughput-performance
>
> maybe try to experiment with other profiles.
>
> HTH
>
> Martin Pavlik
> RHEV QE
>
>> On 14 Jan 2015, at 12:06, mad Engineer  wrote:
>>
>> I am running RHEL 6.5 as both host and guest on an HP server.
>> The server has 128G RAM and 48 cores (with HT enabled).
>>
>> 3 VMs are running; 2 are pinned to the first 24 pCPUs with proper NUMA pinning.
>>
>> Guests:
>>
>> VM1:
>> 6 vCPUs pinned to 6 pCPUs on NUMA node 1, with 16G RAM
>>
>> VM2:
>> 6 vCPUs pinned to 6 pCPUs on NUMA node 0, with 16G RAM
>>
>> VM3:
>> 2 vCPUs, no pinning, 4G RAM
>>
>> HOST:
>> The host has 10 free cores + 24 HT threads which are not allocated and are available.
>> The host also runs a small single-threaded application that uses ~4G RAM.
>>
>> Total resources left to the host are 10 cores + 24 HT threads = 34, and 92G of
>> unallocated RAM (the VMs don't even use 70% of their allocated RAM); also, KSM
>> is not running.
>>
>> Networking:
>> A Linux bridge connected to 1Gbps eth0, with the IP assigned on eth0
>> (this IP is used to access the application running on the host).
>> All VMs use virtio and vhost-net is on.
>>
>> Traffic on the virtual machines is ~3MB/s and the combined traffic on the host
>> is ~14MB/s.
>>
>> The per-guest "vhost-PID" kernel thread (named after the qemu process) sometimes uses ~35% CPU.
>>
>>
>> There is no packet loss, drops or latency, but the issue is that the same
>> setup on VMware, with the same sizing of virtual machines (the only
>> difference being that the application running on the host has moved to a
>> fourth VM, so on VMware there are 4 VMs), gives better numbers: on KVM the
>> application's figure is 310 and on VMware it is 570. The application uses UDP
>> to communicate.
>>
>> I tried removing vhost-net and the value is still the same. (I hope the
>> vhost-net UDP issue has been solved.)
>>
>> Thanks for any help
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Network performance drop when compared to other hypervisor with vhost_net on for UDP

2015-01-14 Thread mad Engineer
I am running RHEL 6.5 as both host and guest on an HP server.
The server has 128G RAM and 48 cores (with HT enabled).

3 VMs are running; 2 are pinned to the first 24 pCPUs with proper NUMA pinning.

Guests:

VM1:
6 vCPUs pinned to 6 pCPUs on NUMA node 1, with 16G RAM

VM2:
6 vCPUs pinned to 6 pCPUs on NUMA node 0, with 16G RAM

VM3:
2 vCPUs, no pinning, 4G RAM

HOST:
The host has 10 free cores + 24 HT threads which are not allocated and are available.
The host also runs a small single-threaded application that uses ~4G RAM.

Total resources left to the host are 10 cores + 24 HT threads = 34, and 92G of
unallocated RAM (the VMs don't even use 70% of their allocated RAM); also, KSM
is not running.

Networking:
A Linux bridge connected to 1Gbps eth0, with the IP assigned on eth0
(this IP is used to access the application running on the host).
All VMs use virtio and vhost-net is on.

Traffic on the virtual machines is ~3MB/s and the combined traffic on the host
is ~14MB/s.

The per-guest "vhost-PID" kernel thread (named after the qemu process) sometimes uses ~35% CPU.


There is no packet loss, drops or latency, but the issue is that the same
setup on VMware, with the same sizing of virtual machines (the only difference
being that the application running on the host has moved to a fourth VM, so on
VMware there are 4 VMs), gives better numbers: on KVM the application's figure
is 310 and on VMware it is 570. The application uses UDP to communicate.

I tried removing vhost-net and the value is still the same. (I hope the
vhost-net UDP issue has been solved.)
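
For completeness, a rough way to confirm whether vhost-net is actually in use
for the guests' NICs (nothing here is specific to this setup; adjust names as
needed):

  lsmod | grep vhost_net                                   # module loaded?
  ps -ef | grep '[q]emu-kvm' | tr ' ' '\n' | grep vhost    # the -netdev args should show vhost=on
  ps -ef | grep '\[vhost-'                                 # one vhost kernel thread per guest using it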

Thanks for any help
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] ksmd high CPU usage for almost a week with just one VM running

2014-12-07 Thread mad Engineer
Thanks Doron,
 Can you confirm this behaviour? I changed (defvar
ksm_free_percent 0.10) and restarted vdsm, and now ksmd is behaving
normally. But I don't understand why it was behaving aggressively when the
free RAM percentage was 22%, even though that is close to 20 :)
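
For anyone else tuning this, a sketch of the per-host change (the policy path
and the 0.20 default are assumptions based on the vdsm packaging, and the file
may be replaced on vdsm upgrades):

  grep ksm_free_percent /etc/vdsm/mom.d/03-ksm.policy
  # lower the threshold so ksm only kicks in below 10% free memory
  sed -i 's/(defvar ksm_free_percent 0.20)/(defvar ksm_free_percent 0.10)/' /etc/vdsm/mom.d/03-ksm.policy
  service vdsmd restart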

Regards

On Sun, Dec 7, 2014 at 6:13 PM, Doron Fediuck  wrote:
> Hi,
> you can see all the relevant definitions in mom's ksm policy.
> The trigger to run ksm is defined here:
> https://github.com/oVirt/vdsm/blob/master/vdsm/mom.d/03-ksm.policy#L23
>
> You can change this setting (on each host you have) to something that
> suits your load. We have the bug below open and will hopefully handle it
> in one of the next versions.
> https://bugzilla.redhat.com/show_bug.cgi?id=1026294
>
> Doron
>
> ----- Original Message -
>> From: "mad Engineer" 
>> To: "Markus Stockhausen" 
>> Cc: "users" 
>> Sent: Saturday, December 6, 2014 6:13:20 PM
>> Subject: Re: [ovirt-users] ksmd high cpu usage from almost a week with just 
>> one vm running
>>
>> Thanks for the info Markus
>>
>> free -g
>>  total   used   free sharedbuffers cached
>> Mem:47 32 14  0   0 0
>>
>> % of usage is 68.
>>
>> Why is ksmd not sleeping? Do you have any idea?
>> Regards,
>>
>>
>> On Sat, Dec 6, 2014 at 8:45 PM, Markus Stockhausen
>>  wrote:
>> > Memory usage > 80%: ksm kicks in. There it will run at full speed until
>> > usage is below 80%. There is an open BZ from me. Bad behaviour is
>> > controlled
>> > by mom.
>> >
>> > Markus
>> >
>> > On 06.12.2014 at 15:58, mad Engineer wrote:
>> >
>> > Hello All,
>> >  I am using CentOS 6.5 x64 on a server with 48G RAM and 8
>> > cores, managed by oVirt.
>> > There is only one running VM, with 34G RAM and 6 vCPUs (pinned to
>> > the proper NUMA nodes).
>> >
>> > from top
>> >
>> > top - 06:42:48 up 67 days, 20:05,  1 user,  load average: 0.26, 0.20, 0.17
>> > Tasks: 285 total,   2 running, 282 sleeping,   0 stopped,   1 zombie
>> > Cpu(s):  1.0%us,  1.4%sy,  0.0%ni, 97.5%id,  0.1%wa,  0.0%hi,  0.0%si,
>> > 0.0%st
>> > Mem:  49356468k total, 33977684k used, 15378784k free,   142812k buffers
>> > Swap: 12337144k total,0k used, 12337144k free,   343052k cached
>> >
>> >   PID USER  PR  NI  VIRT  RES  SHR S %CPU %MEMTIME+  COMMAND
>> >   101 root  25   5 000 R 27.4  0.0   5650:04 [ksmd]
>> > 26004 vdsm   0 -20 3371m  64m 9400 S  9.8  0.1   1653:27
>> > /usr/bin/python /usr/share/vdsm/vdsm --pidfile /var/run/vdsm/vdsmd.pid
>> > 20963 qemu  20   0 38.5g  33g 6792 S  3.9 71.6   5225:43
>> > /usr/libexec/qemu-kvm -name Cinder -S -M rhel6.5.0 -cpu Nehalem
>> > -enable-kvm -m 34096 -realtime mlock=off -smp
>> > 6,maxcpus=160,sockets=80,c
>> >
>> > from /sys/kernel/mm/ksm
>> > pages_unshared  7602322
>> > pages_shared 207023
>> > pages_to_scan   64
>> > pages_volatile31678
>> >
>> > Any idea why ksmd is not coming back to normal CPU usage? On a different
>> > server ksmd was disabled; for testing, when I enabled it, CPU usage was
>> > initially high but later settled down to 3%. On that host I have 4
>> > VMs running.
>> >
>> > Before turning off ksmd, can anyone help me find out why it is
>> > behaving like this? Initially this host had 2 virtual machines; because of
>> > the high CPU utilization of this guest, the other was migrated to another host.
>> >
>> > Thanks
>> > ___
>> > Users mailing list
>> > Users@ovirt.org
>> > http://lists.ovirt.org/mailman/listinfo/users
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] ksmd high CPU usage for almost a week with just one VM running

2014-12-06 Thread mad Engineer
Thanks for the info Markus

free -g
 total   used   free sharedbuffers cached
Mem:47 32 14  0   0 0

% of usage is 68.

Why is ksmd not sleeping? Do you have any idea?
Regards,


On Sat, Dec 6, 2014 at 8:45 PM, Markus Stockhausen
 wrote:
> Memory usage > 80%: ksm kicks in. There it will run at full speed until
> usage is below 80%. There is an open BZ from me. Bad behaviour is controlled
> by mom.
>
> Markus
>
> On 06.12.2014 at 15:58, mad Engineer wrote:
>
> Hello All,
>  I am using CentOS 6.5 x64 on a server with 48G RAM and 8
> cores, managed by oVirt.
> There is only one running VM, with 34G RAM and 6 vCPUs (pinned to
> the proper NUMA nodes).
>
> from top
>
> top - 06:42:48 up 67 days, 20:05,  1 user,  load average: 0.26, 0.20, 0.17
> Tasks: 285 total,   2 running, 282 sleeping,   0 stopped,   1 zombie
> Cpu(s):  1.0%us,  1.4%sy,  0.0%ni, 97.5%id,  0.1%wa,  0.0%hi,  0.0%si,
> 0.0%st
> Mem:  49356468k total, 33977684k used, 15378784k free,   142812k buffers
> Swap: 12337144k total,0k used, 12337144k free,   343052k cached
>
>   PID USER  PR  NI  VIRT  RES  SHR S %CPU %MEMTIME+  COMMAND
>   101 root  25   5 000 R 27.4  0.0   5650:04 [ksmd]
> 26004 vdsm   0 -20 3371m  64m 9400 S  9.8  0.1   1653:27
> /usr/bin/python /usr/share/vdsm/vdsm --pidfile /var/run/vdsm/vdsmd.pid
> 20963 qemu  20   0 38.5g  33g 6792 S  3.9 71.6   5225:43
> /usr/libexec/qemu-kvm -name Cinder -S -M rhel6.5.0 -cpu Nehalem
> -enable-kvm -m 34096 -realtime mlock=off -smp
> 6,maxcpus=160,sockets=80,c
>
> from /sys/kernel/mm/ksm
> pages_unshared  7602322
> pages_shared 207023
> pages_to_scan   64
> pages_volatile31678
>
> Any idea why ksmd is not coming back to normal CPU usage? On a different
> server ksmd was disabled; for testing, when I enabled it, CPU usage was
> initially high but later settled down to 3%. On that host I have 4
> VMs running.
>
> Before turning off ksmd, can anyone help me find out why it is
> behaving like this? Initially this host had 2 virtual machines; because of
> the high CPU utilization of this guest, the other was migrated to another host.
>
> Thanks
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] ksmd high CPU usage for almost a week with just one VM running

2014-12-06 Thread mad Engineer
Hello All,
 I am using CentOS 6.5 x64 on a server with 48G RAM and 8
cores, managed by oVirt.
There is only one running VM, with 34G RAM and 6 vCPUs (pinned to
the proper NUMA nodes).

from top

top - 06:42:48 up 67 days, 20:05,  1 user,  load average: 0.26, 0.20, 0.17
Tasks: 285 total,   2 running, 282 sleeping,   0 stopped,   1 zombie
Cpu(s):  1.0%us,  1.4%sy,  0.0%ni, 97.5%id,  0.1%wa,  0.0%hi,  0.0%si,  0.0%st
Mem:  49356468k total, 33977684k used, 15378784k free,   142812k buffers
Swap: 12337144k total,0k used, 12337144k free,   343052k cached

  PID USER  PR  NI  VIRT  RES  SHR S %CPU %MEMTIME+  COMMAND
  101 root  25   5 000 R 27.4  0.0   5650:04 [ksmd]
26004 vdsm   0 -20 3371m  64m 9400 S  9.8  0.1   1653:27
/usr/bin/python /usr/share/vdsm/vdsm --pidfile /var/run/vdsm/vdsmd.pid
20963 qemu  20   0 38.5g  33g 6792 S  3.9 71.6   5225:43
/usr/libexec/qemu-kvm -name Cinder -S -M rhel6.5.0 -cpu Nehalem
-enable-kvm -m 34096 -realtime mlock=off -smp
6,maxcpus=160,sockets=80,c

from /sys/kernel/mm/ksm
pages_unshared  7602322
pages_shared 207023
pages_to_scan   64
pages_volatile31678

Any idea why ksmd is not coming back to normal CPU usage? On a different
server ksmd was disabled; for testing, when I enabled it, CPU usage was
initially high but later settled down to 3%. On that host I have 4
VMs running.

Before turning off ksmd, can anyone help me find out why it is
behaving like this? Initially this host had 2 virtual machines; because of
the high CPU utilization of this guest, the other was migrated to another host.

Thanks
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] shared storage with iscsi

2014-12-04 Thread mad Engineer
Sorry, I am wrong; it is the data domain that stores virtual disk images. I
have no idea how oVirt shares a block device across hosts. It looks like I
need to try it to understand how it is implemented. In the case of VMware,
they use the VMFS file system for sharing a block device among hosts. I
currently have no issues with my NFS shared storage, so I don't have any plan
to use iSCSI, but I am just curious how the implementation works. In my
existing non-oVirt environment, SAN block devices are exported and shared
using GFS. Is oVirt using any similar method to achieve a shared block
device?
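
As far as I understand, oVirt does not put a cluster filesystem on iSCSI/FC
LUNs at all: a block storage domain is an LVM volume group on the LUN, each
disk image is a logical volume, and vdsm (with the SPM host) coordinates who
may change the LVM metadata, so no GFS2/VMFS equivalent is needed. A rough way
to see this from any host that has such a domain connected:

  pvs    # the iSCSI LUN shows up as a physical volume
  vgs    # one volume group per block storage domain (named by its UUID)
  lvs    # one logical volume per disk image/snapshot, plus a few metadata LVs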



On Thu, Dec 4, 2014 at 9:10 PM, mad Engineer  wrote:
> Thanks Gianluca. It says only NFS is supported as an export domain. I
> believe the export domain is the one I am currently using for live-migrating
> VMs across hosts, so iSCSI is not supported; please correct me
> if I am wrong. Thanks for your help.
>
> On Thu, Dec 4, 2014 at 9:00 PM, Gianluca Cecchi
>  wrote:
>> On Thu, Dec 4, 2014 at 4:23 PM, mad Engineer 
>> wrote:
>>>
>>> Thanks Maor. In a non-oVirt KVM setup I use GFS2 to make the block device
>>> shareable across multiple hosts. Could you tell me how this is achieved
>>> in oVirt?
>>>
>>> Regards
>>>
>>
>> First step the admin guide:
>> http://www.ovirt.org/OVirt_Administration_Guide
>>
>> and in particular the Storage chapter and the related iSCSI part:
>> http://www.ovirt.org/OVirt_Administration_Guide#.E2.81.A0Storage
>>
>> Also, for multipath in case you need it:
>> http://www.ovirt.org/Feature/iSCSI-Multipath
>>
>> HIH,
>> Gianluca
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] shared storage with iscsi

2014-12-04 Thread mad Engineer
Thanks Gianluca. It says only NFS is supported as an export domain. I
believe the export domain is the one I am currently using for live-migrating
VMs across hosts, so iSCSI is not supported; please correct me
if I am wrong. Thanks for your help.

On Thu, Dec 4, 2014 at 9:00 PM, Gianluca Cecchi
 wrote:
> On Thu, Dec 4, 2014 at 4:23 PM, mad Engineer 
> wrote:
>>
>> Thanks Maor. In a non-oVirt KVM setup I use GFS2 to make the block device
>> shareable across multiple hosts. Could you tell me how this is achieved
>> in oVirt?
>>
>> Regards
>>
>
> First step the admin guide:
> http://www.ovirt.org/OVirt_Administration_Guide
>
> and in particular the Storage chapter and the related iSCSI part:
> http://www.ovirt.org/OVirt_Administration_Guide#.E2.81.A0Storage
>
> Also, for multipath in case you need it:
> http://www.ovirt.org/Feature/iSCSI-Multipath
>
> HIH,
> Gianluca
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] shared storage with iscsi

2014-12-04 Thread mad Engineer
Thanks Maor. In a non-oVirt KVM setup I use GFS2 to make the block device
shareable across multiple hosts. Could you tell me how this is achieved
in oVirt?

Regards

On Thu, Dec 4, 2014 at 5:31 PM, Maor Lipchuk  wrote:
>
>
> - Original Message -
>> From: "mad Engineer" 
>> To: users@ovirt.org
>> Sent: Thursday, December 4, 2014 11:26:32 AM
>> Subject: [ovirt-users] shared storage with iscsi
>>
>> Hello all,
>> I am using NFS as shared storage and it is working fine; I am able
>> to migrate instances across nodes.
>> Is it possible to use an iSCSI backend and achieve the same, i.e. shared
>> iSCSI? (I am not able to find a way to do shared iSCSI across hosts.)
>> Can someone help with shared iSCSI storage for live migration of VMs
>> across hosts?
>>
>> Thanks
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>
> Hi,
>
> What do you mean by shared iSCSI?
> If you created a new iSCSI storage domain in the data center and it is active,
> then all the hosts should see it while it is active.
>
> Regards,
> Maor
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] shared storage with iscsi

2014-12-04 Thread mad Engineer
Hello all,
I am using NFS as shared storage and it is working fine; I am able
to migrate instances across nodes.
Is it possible to use an iSCSI backend and achieve the same, i.e. shared
iSCSI? (I am not able to find a way to do shared iSCSI across hosts.)
Can someone help with shared iSCSI storage for live migration of VMs
across hosts?

Thanks
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Auto start VM after host boots

2014-11-20 Thread mad Engineer
Hi all, I am trying oVirt 3.4.
How can I auto-start all VMs once the host is up, similar to setting
symlinks in /etc/libvirt/autostart?
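
Since vdsm-managed VMs are defined in the engine rather than as persistent
libvirt domains, there is no autostart directory to symlink into. One rough
workaround is to ask the engine to start the VMs from a boot-time script; a
sketch only, where the engine address, credentials and VM UUID are all
placeholders (URL layout as in the oVirt 3.4-era REST API):

  curl -k -u 'admin@internal:PASSWORD' \
       -H 'Content-Type: application/xml' \
       -d '<action/>' \
       'https://engine.example.com/api/vms/VM_UUID/start'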

Thanks
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] HP ILO2, fence not working with SSH port specified, a bug?

2014-06-29 Thread mad Engineer
:)


On Mon, Jun 30, 2014 at 12:12 PM, combuster  wrote:

>  Well if it's a bug then it would be resolved by now :)
>
> https://bugzilla.redhat.com/show_bug.cgi?id=1026662
>
> Had the same doubts as you did. I really don't know why it wouldn't
> connect to iLO if the default port is specified, but I'm glad that you
> found a workaround.
>
> Ivan
>
>
> On 06/30/2014 08:36 AM, mad Engineer wrote:
>
> Hi, I have an old HP server with iLO2.
>
> On the manager I configured power management and set the SSH port to use
> for iLO2.
>
> To check SSH, I manually SSH to the iLO and it works fine,
>
> but the power management test always fails with "Unable to connect/login to
> fencing device".
>
>
> The log shows it is using fence_ilo instead of fence_ilo2:
>
>  Thread-18::DEBUG::2014-06-30 08:23:14,106::API::1133::vds::(fenceNode)
> fenceNode(addr=,port=,*agent=ilo*
> ,user=Administrator,passwd=,action=status,secure=,options=ipport=22
> ssl=no)
> Thread-18::DEBUG::2014-06-30 08:23:14,741::API::1159::vds::(fenceNode) rc
> 1 in agent=*fence_ilo*
> ipaddr=xx
> login=Administrator
> action=status
> passwd=
>  ipport=22
> ssl=no out  err *Unable to connect/login to fencing device*
>
>
>
>
> Manually testing:
>
>  fence_ilo -a xx  -l Administrator -p x -o status
>  Status: ON
>
> but with the SSH port specified (i.e. -u):
>
>  fence_ilo -a xx  -l Administrator -p x -o status  -u 22
>  Unable to connect/login to fencing device
>
> So when we specify the SSH port it fails, and without the SSH port it works.
>
> This is the case with iLO2 as well.
>
> For iLO3 and iLO4, since they do not ask for an SSH port, it works.
>
> Is this a bug?
>
>  Thanks,
>
>
>
> ___
> Users mailing listUsers@ovirt.orghttp://lists.ovirt.org/mailman/listinfo/users
>
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] HP ILO2, fence not working with SSH port specified, a bug?

2014-06-29 Thread mad Engineer
Hi, I have an old HP server with iLO2.

On the manager I configured power management and set the SSH port to use for
iLO2.

To check SSH, I manually SSH to the iLO and it works fine,

but the power management test always fails with "Unable to connect/login to
fencing device".


The log shows it is using fence_ilo instead of fence_ilo2:

Thread-18::DEBUG::2014-06-30 08:23:14,106::API::1133::vds::(fenceNode)
fenceNode(addr=,port=,*agent=ilo*
,user=Administrator,passwd=,action=status,secure=,options=ipport=22
ssl=no)
Thread-18::DEBUG::2014-06-30 08:23:14,741::API::1159::vds::(fenceNode) rc 1
in agent=*fence_ilo*
ipaddr=xx
login=Administrator
action=status
passwd=
ipport=22
ssl=no out  err *Unable to connect/login to fencing device*




Manually testing:

fence_ilo -a xx  -l Administrator -p x -o status
Status: ON

but with the SSH port specified (i.e. -u):

fence_ilo -a xx  -l Administrator -p x -o status  -u 22
Unable to connect/login to fencing device

So when we specify the SSH port it fails, and without the SSH port it works.

This is the case with iLO2 as well.

For iLO3 and iLO4, since they do not ask for an SSH port, it works.

Is this a bug?
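
One possible explanation (an assumption, not verified against the agent
source) is that fence_ilo talks to the iLO's web interface (RIBCL over
HTTPS), so -u sets that TCP port rather than an SSH port, which is why
pointing it at 22 fails. A quick manual check along those lines:

  fence_ilo -a ILO_ADDRESS -l Administrator -p PASSWORD -o status          # default port
  fence_ilo -a ILO_ADDRESS -l Administrator -p PASSWORD -o status -u 443   # explicit HTTPS port
  fence_ilo -a ILO_ADDRESS -l Administrator -p PASSWORD -o status -u 22    # SSH port, expected to fail

If that holds, leaving the port field empty in the power management dialog,
so the agent uses its default, is the simplest workaround.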

Thanks,
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Tune HA timing, is it possible?

2014-06-27 Thread mad Engineer
Hi,
 Is it possible to tune the time required to reboot a VM in case of host
failure?

In our test we powered off one of the hosts.

The manager was still showing both the guest and the host as up for around 5
minutes, and after that it took another 4-5 minutes to start the VM.

Is there any tunable parameter that can be modified to bring down the time
required to recognize host failures?
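
Most of the host-monitoring and fencing timeouts are engine-side settings, so
a starting point might be the engine-config tool (key names vary between
versions; the grep and the vdsTimeout example below are assumptions to verify
against your own engine):

  engine-config --all | grep -iE 'timeout|vds|fence'   # list candidate keys and current values
  engine-config -g vdsTimeout                          # read one value
  engine-config -s vdsTimeout=60                       # change it
  service ovirt-engine restart                         # needed for the change to take effect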

Thanks
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Node down, but oVirt still shows VM as up

2014-06-27 Thread mad Engineer
Hi,
 I am using a Cisco UCS C200 M2 as the host, running CentOS 6.5 and KVM.


Power management is not working properly, hence even with the node down oVirt
shows the VM as still up, with the VM's uptime increasing (on the manager).


If I continue and save the changes, it causes problems:

1. HA is not working (node status changed to Non Responsive, but the VM status
is still up!!)
2. Restarting the host gives wrong information: it shows the host as rebooting,
but actually nothing happens on the host!!

While configuring power management:

On the oVirt manager I edited power management and chose cisco_ucs with proper
authentication to the CIMC, but when I clicked on test, it showed:


"Test Failed, Failed: You have to enter plug number Please use '-h' for
usage"

What is this plug? I couldn't find anything in the CIMC.
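
From the fence agent's side, the "plug" is the generic port/plug argument
(-n/--plug), which for fence_cisco_ucs is the service profile name as known to
UCS Manager; a standalone C-series that only exposes CIMC may be better served
by fence_ipmilan against the CIMC's IPMI-over-LAN interface. A rough manual
test of both (addresses, credentials and names are placeholders, and the
agent choice is an assumption to verify):

  fence_cisco_ucs -a UCSM_ADDRESS -l USER -p PASSWORD --plug=SERVICE_PROFILE -o status
  fence_ipmilan -a CIMC_ADDRESS -l USER -p PASSWORD -P -o status   # -P = lanplus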

This could be the reason for the failure of HA and restart.

Can someone please help me fix this

Thanks
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Host not restarting through manager

2014-06-27 Thread mad Engineer
Hi,
 I have two UCS servers, both configured for power management, but
when I put a host into maintenance and restart it by selecting Power
Management > Restart in the oVirt manager, it shows as rebooting but the host
is not really rebooting; I had the same issue with HP servers and iLO2.
What should I do to make it work?

Also, under Host > Power Management > Test, it fails with "Test Failed,
Failed: You have to enter plug number Please use '-h' for usage".

Thanks for any help
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] [ovirt-devel] HA not working 3.4.2

2014-06-25 Thread mad Engineer
Thanks,
   I tried that and now the "!" is gone. From the control panel I
restarted that server and it rebooted, but the reboot happened only in the
oVirt control panel; the actual server hasn't rebooted or shut down, and its
uptime still hasn't changed.
Any idea what's wrong?

Thanks


On Wed, Jun 25, 2014 at 12:46 AM, Joop  wrote:

>  On 24-6-2014 21:02, mad Engineer wrote:
>
> Thanks Arik,
>   I think that's the issue; I haven't configured any
> power management, but could you tell me what to specify in power management?
>
>  I see Address, Username, Password, SSH port and Type.
>
> I am using an HP server with iLO2, so I filled in Type as ILO2, but it is
> confusing what these
>
> Address, Username, Password
>
> are. Are these the iLO credentials or my hypervisor credentials?
>
>
>  ILO credentials.
>
> Joop
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] [ovirt-devel] HA not working 3.4.2

2014-06-24 Thread mad Engineer
Thanks Arik,
  I think that's the issue; I haven't configured any
power management, but could you tell me what to specify in power management?

I see Address, Username, Password, SSH port and Type.

I am using an HP server with iLO2, so I filled in Type as ILO2, but it is
confusing what these

Address, Username, Password

are. Are these the iLO credentials or my hypervisor credentials?

Thanks



On Mon, Jun 23, 2014 at 11:50 AM, Arik Hadas  wrote:

> Hi,
>
> - Original Message -
> > Hi, I was testing HA of oVirt (I am from a XenServer background and trying
> > to use a KVM solution in our environment).
> >
> > I have 2 hosts with NFS storage and I tested migration between these two
> > hosts.
> >
> > On Host2, while a VM was running on it, I unplugged the power cable and it
> > took long minutes to update that the VM is down; okkk, now a long wait..
> > ..
> > .
> > ...
> >
> > The VM state changed to unknown and it was not booted on the second node,
> > i.e. Host1.
> >
> > It has been half an hour and the VM is still not restarted.
> >
> > What can be the issue? What should I do to make the VM truly HA, so that it
> > will be restarted on the active node?
>
> We don't restart the VM in that case on purpose, since we really don't know
> the status of the VM - the host might have just been disconnected from
> the network while the VM is still running (and connected to the storage),
> so if we also run the VM on a different host, we'll get "split-brain".
>
> In order to restart HA VMs which were running on a host that went down, you
> either need to have power management defined for that host (for automatic
> restart) or to trigger it manually by selecting "confirm host has been
> rebooted" when the host changes to the non-responsive state (and you need to
> have an additional host which is up, of course).
>
> >
> > How HA is enabled:
> > what I did was, while creating the VM: Advanced Options > High
> > Availability > Highly Available.
> >
> > I am sure this could be a configuration error; can someone please help me
> > with enabling HA?
> >
> > Please help.
> >
> > Thanks
> >
> > ___
> > Devel mailing list
> > de...@ovirt.org
> > http://lists.ovirt.org/mailman/listinfo/devel
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] HA not working 3.4.2

2014-06-22 Thread mad Engineer
Hi, I was testing HA of oVirt (I am from a XenServer background and trying to
use a KVM solution in our environment).

I have 2 hosts with NFS storage and I tested migration between these two
hosts.

On Host2, while a VM was running on it, I unplugged the power cable and it took
long minutes to update that the VM is down; okkk, now a long wait..
..
.
...

   The VM state changed to unknown and it was not booted on the second node,
i.e. Host1.

It has been half an hour and the VM is still not restarted.

What can be the issue? What should I do to make the VM truly HA, so that it
will be restarted on the active node?

How HA is enabled:
What I did was, while creating the VM: Advanced Options > High Availability >
Highly Available.

I am sure this could be a configuration error; can someone please help me
with enabling HA?

Please help.

Thanks
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users