[Users] Ovirt 3.1: Failed to add Storage Domain

2012-09-04 Thread Dmitriy A Pyryakov


Hello,

I have installed oVirt engine 3.1 configured for non-local nfs ISO storage
(Fedora 17 kernel 3.5.3-1.fc17.x86_64), one node with VDSM (same OS as
engine), server with targetcli (ISCSI) and NFS server as ISO Library
(CentOS 6.3).

There is NFS server configuration:
# cat /etc/exports
/nfs/iso    192.168.10.0/24(rw,sync,no_subtree_check,all_squash,anonuid=36,anongid=36)
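For reference, after editing /etc/exports the export table can be re-applied
and checked in place; a minimal sketch using standard nfs-utils commands:

# exportfs -rv              # re-export everything in /etc/exports and list it
# showmount -e localhost    # show the export list a client would see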

# cat /etc/sysconfig/nfs | grep -v #
NFS4_SUPPORT=yes

# ls -l /nfs/
drwxr-xr-x 3 vdsm kvm 4096 Sep  4 09:48 iso

# getsebool -a
getsebool:  SELinux is disabled

# ps axu | grep nfs
root      8289  0.0  0.0      0     0 ?      S    09:47   0:00 [nfsd4]
root      8290  0.0  0.0      0     0 ?      S    09:47   0:00 [nfsd4_callbacks]
root      8291  0.0  0.0      0     0 ?      S    09:47   0:00 [nfsd]
root      8292  0.0  0.0      0     0 ?      S    09:47   0:00 [nfsd]
root      8293  0.0  0.0      0     0 ?      S    09:47   0:00 [nfsd]
root      8294  0.0  0.0      0     0 ?      S    09:47   0:00 [nfsd]
root      8295  0.0  0.0      0     0 ?      S    09:47   0:00 [nfsd]
root      8296  0.0  0.0      0     0 ?      S    09:47   0:00 [nfsd]
root      8297  0.0  0.0      0     0 ?      S    09:47   0:00 [nfsd]
root      8298  0.0  0.0      0     0 ?      S    09:47   0:00 [nfsd]
root      8555  0.0  0.0 103240   840 pts/0  S+   10:46   0:00 grep nfs

# cat /etc/passwd | grep vdsm; cat /etc/group | grep kvm
vdsm:x:36:36::/home/vdsm:/bin/bash
kvm:x:36:
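Since the export squashes all client access to anonuid/anongid 36, the
server-side vdsm user and kvm group must carry exactly those IDs. If a
server is being set up from scratch, something like this creates them (a
sketch; the IDs are chosen to match the export options above):

# groupadd -g 36 kvm
# useradd -u 36 -g 36 vdsm
# chown -R 36:36 /nfs/iso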

Result of the nfs-check program from the hypervisor node, run as root (can't
change user to vdsm):
# python nfs-check.py storage.ovirt.com:/nfs/iso
Current hostname: hyper1.ovirt.com - IP addr 192.168.10.11
Trying to /bin/mount -t nfs storage.ovirt.com:/nfs/iso...
Executing NFS tests..
Removing vdsmTest file..
Status of tests [OK]
Disconnecting from NFS Server..
Done!

Result of the nfs-check program from the oVirt engine host, run as vdsm:
$ python nfs-check.py storage.ovirt.com:/nfs/iso
You must be root to run this script.

as root:

# python nfs-check.py storage.ovirt.com:/nfs/iso
Current hostname: admin.ovirt.com - IP addr 192.168.10.10
Trying to /bin/mount -t nfs storage.ovirt.com:/nfs/iso...
Executing NFS tests..
Removing vdsmTest file..
Status of tests [OK]
Disconnecting from NFS Server..
Done!
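The same test can be reproduced by hand, which also confirms that a squashed
client can write; a sketch with an assumed mount point /mnt/isotest:

# mkdir -p /mnt/isotest
# mount -t nfs storage.ovirt.com:/nfs/iso /mnt/isotest
# sudo -u vdsm touch /mnt/isotest/vdsmTest
# sudo -u vdsm rm /mnt/isotest/vdsmTest
# umount /mnt/isotest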

When I try to create a New Domain (Domain Function / Storage Type: ISO /
NFS) with Export Path storage.ovirt.com:/nfs/iso, I get an error: A Request
to the Server failed with the following Status Code: 500.
Afterwards, I can see some files on the NFS server:
# find /nfs/iso/
/nfs/iso/
/nfs/iso/cca67860-4d58-4385-ba71-27fdfe21d0e9
/nfs/iso/cca67860-4d58-4385-ba71-27fdfe21d0e9/images
/nfs/iso/cca67860-4d58-4385-ba71-27fdfe21d0e9/images/----
/nfs/iso/cca67860-4d58-4385-ba71-27fdfe21d0e9/dom_md
/nfs/iso/cca67860-4d58-4385-ba71-27fdfe21d0e9/dom_md/outbox
/nfs/iso/cca67860-4d58-4385-ba71-27fdfe21d0e9/dom_md/metadata
/nfs/iso/cca67860-4d58-4385-ba71-27fdfe21d0e9/dom_md/inbox
/nfs/iso/cca67860-4d58-4385-ba71-27fdfe21d0e9/dom_md/leases
/nfs/iso/cca67860-4d58-4385-ba71-27fdfe21d0e9/dom_md/ids

vdsm log from hypervisor node: http://195.58.6.45/shit/vdsm.log
engine log: http://195.58.6.45/shit/engine.log

I have no iptables configured on any of my hosts.

How can I add an ISO share to my Data Center?

--
Dmitriy Pyryakov
VimpelCom Ltd.


[Users] HP Integrated Lights Out 3 standard

2012-09-06 Thread Dmitriy A Pyryakov


Hello,

I have no ilo3 type of fencing device in the Power Management tab of the
Edit Host menu (only ilo).
If I try to use ilo for an Integrated Lights-Out 3 device, the test fails.

On the http://wiki.ovirt.org/wiki/Quick_Start_Guide#Attach_Fedora_Host page I
found information that ilo3 should be present in the Power Management tab.

What can I do to make ilo3 available for power management?


--
Dmitriy Pyryakov
VimpelCom Ltd.


[Users] HP Integrated Lights Out 3

2012-09-09 Thread Dmitriy A Pyryakov


Hello,

I need to use HP iLO 3 as the fencing device for OOB power management.

In the fence_ipmilan(8) man page I found that I must use the lanplus and
power_wait=4 options. When I enter these options in the Options line of the
Power Management tab and press the Test button, the test fails.

Here is the relevant part of vdsm.log at that time:

Thread-43892::DEBUG::2012-09-07 13:14:03,094::API::1024::vds::(fenceNode)
fenceNode
(addr=192.168.10.103,port=,agent=ipmilan,user=fence_ilo,passwd=,action=status,secure=,options=)
Thread-43892::DEBUG::2012-09-07 13:14:04,116::API::1050::vds::(fenceNode)
rc 1 in agent=fence_ipmilan
ipaddr=192.168.10.103
login=fence_ilo
option=status
passwd=
 out Getting status of IPMI:192.168.10.103...Chassis power = Unknown
Failed
 err

My options are not passed. It looks like a bug.

How can I fix it? And where is the script that runs this test located on my
system?
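For reference, fence agents take their arguments as key=value pairs on
stdin, one per line, so the call vdsm makes can be replayed by hand roughly
like this (a sketch; PASSWORD is a placeholder, and the exact stdin spelling
of the lanplus/power_wait options is an assumption):

# echo "ipaddr=192.168.10.103
login=fence_ilo
passwd=PASSWORD
option=status
lanplus
power_wait=4" | fence_ipmilan

rc 0 with "Chassis power = On" means the call reached the BMC; rc 1 with
"Chassis power = Unknown", as in the log above, means it did not.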

--
Dmitriy Pyryakov
VimpelCom Ltd.


Re: [Users] HP Integrated Lights Out 3

2012-09-10 Thread Dmitriy A Pyryakov
No, I don't see an ilo3 type of fencing device in my Power Management
tab.

I have the fence_ilo3 command on my hypervisor host.

When I run it with the following options:
privlvl=user
ipaddr=192.168.10.103
login=fence_ilo
passwd=
operation=status

it prints: Getting status of IPMI:192.168.10.103...Chassis power = On
Done

oVirt Engine Version: 3.1.0-2.fc17

--
Dmitriy Pyryakov
VimpelCom Ltd.



From:    Itamar Heim ih...@redhat.com
To:      Dmitriy A Pyryakov dpyrya...@ekb.beeline.ru
Cc:      users@ovirt.org, Eli Mesika emes...@redhat.com
Date:    10.09.2012 14:28
Subject: Re: [Users] HP Integrated Lights Out 3



On 09/10/2012 06:17 AM, Dmitriy A Pyryakov wrote:
 Hello,

 I need to use the HP i-lo 3 as fencing device for OOB power management.

 In man 8 page of fence_ipmilan I find that I must use lanplus and
 power_wait=4 options. When I insert this options in Option line of
 Power Management tab and press Test button - test is failed.

 There is the part of vdsm.log at this time:

 Thread-43892::DEBUG::2012-09-07
 13:14:03,094::API::1024::vds::(fenceNode)
 fenceNode
(addr=192.168.10.103,port=,agent=ipmilan,user=fence_ilo,passwd=,action=status,secure=,options=)

 Thread-43892::DEBUG::2012-09-07
 13:14:04,116::API::1050::vds::(fenceNode) rc 1 in agent=fence_ipmilan
 ipaddr=192.168.10.103
 login=fence_ilo
 option=status
 passwd=
 out Getting status of IPMI:192.168.10.103...Chassis power = Unknown
 Failed
 err

 My options are not presented. It looks like a bug.

 How can I fix it? How can I find a location of the script who run this
 test in my system?

strange, i thought we added ilo3 fence type back in 3.0 which wraps
ipmilan with lanplus,power_wait=4.
don't you see ilo3 as an option?


 - -
 Dmitriy Pyryakov

 VimpelCom Ltd.








Re: [Users] HP Integrated Lights Out 3

2012-09-10 Thread Dmitriy A Pyryakov
Yes, but oVirt doesn't pass these options to the fence_ipmilan command.

Here are all the logged options:
Thread-43892::DEBUG::2012-09-07 13:14:03,094::API::1024::vds::(fenceNode)
fenceNode
(addr=192.168.10.103,port=,agent=ipmilan,user=fence_ilo,passwd=,action=status,secure=,options=)
Thread-43892::DEBUG::2012-09-07 13:14:04,116::API::1050::vds::(fenceNode)
rc 1 in agent=fence_ipmilan
ipaddr=192.168.10.103
login=fence_ilo
option=status
passwd=
 out Getting status of IPMI:192.168.10.103...Chassis power = Unknown
Failed
 err


--
Dmitriy Pyryakov
VimpelCom Ltd.



From:    Itamar Heim ih...@redhat.com
To:      Dmitriy A Pyryakov dpyrya...@ekb.beeline.ru
Cc:      Eli Mesika emes...@redhat.com, users@ovirt.org
Date:    10.09.2012 14:48
Subject: Re: [Users] HP Integrated Lights Out 3



On 09/10/2012 11:45 AM, Dmitriy A Pyryakov wrote:
 No, I don't see an ilo3 type of fencing device in my Power Management
tab.

 I have fence_ilo3 command on my hypervisor host.

did you try choosing ipmilan and passing options of lanplus,power_wait=4?



 When I run it with following options:
 privlvl=user
 ipaddr=192.168.10.103
 login=fence_ilo
 passwd=
 operation=status

 it print: Getting status of IPMI:192.168.10.103...Chassis power = On
 Done

 oVirt Engine Version: 3.1.0-2.fc17


 - -
 Dmitriy Pyryakov
 VimpelCom Ltd.



 From: Itamar Heim ih...@redhat.com
 To: Dmitriy A Pyryakov dpyrya...@ekb.beeline.ru
 Cc: users@ovirt.org, Eli Mesika emes...@redhat.com
 Date: 10.09.2012 14:28
 Subject: Re: [Users] HP Integrated Lights Out 3
 



 On 09/10/2012 06:17 AM, Dmitriy A Pyryakov wrote:
   Hello,
  
   I need to use the HP i-lo 3 as fencing device for OOB power
management.
  
   In man 8 page of fence_ipmilan I find that I must use lanplus and
   power_wait=4 options. When I insert this options in Option line of
   Power Management tab and press Test button - test is failed.
  
   There is the part of vdsm.log at this time:
  
   Thread-43892::DEBUG::2012-09-07
   13:14:03,094::API::1024::vds::(fenceNode)
  
 fenceNode
(addr=192.168.10.103,port=,agent=ipmilan,user=fence_ilo,passwd=,action=status,secure=,options=)

   Thread-43892::DEBUG::2012-09-07
   13:14:04,116::API::1050::vds::(fenceNode) rc 1 in agent=fence_ipmilan
   ipaddr=192.168.10.103
   login=fence_ilo
   option=status
   passwd=
   out Getting status of IPMI:192.168.10.103...Chassis power = Unknown
   Failed
   err
  
   My options are not presented. It looks like a bug.
  
   How can I fix it? How can I find a location of the script who run this
   test in my system?

 strange, i thought we added ilo3 fence type back in 3.0 which wraps
 ipmilan with lanplus,power_wait=4.
  don't you see ilo3 as an option?

  
   - -
   Dmitriy Pyryakov
  
   VimpelCom Ltd.
  
  
  
  







[Users] HA: Re: HP Integrated Lights Out 3

2012-09-10 Thread Dmitriy A Pyryakov

engine=# select * from vdc_options where option_name in
('FenceAgentDefaultParams','FenceAgentMapping','VdsFenceOptionMapping','VdsFenceType');
 option_id |       option_name       | option_value | version
-----------+-------------------------+--------------+---------
        77 | FenceAgentMapping       | ilo3=ipmilan | general
        76 | FenceAgentDefaultParams | ilo3:lanplus,power_wait=4 | general
       323 | VdsFenceType            | alom,apc,bladecenter,drac5,eps,ilo,ipmilan,rsa,rsb,wti,cisco_ucs | 3.1
       322 | VdsFenceType            | alom,apc,bladecenter,drac5,eps,ilo,ilo3,ipmilan,rsa,rsb,wti,cisco_ucs | 3.0
       321 | VdsFenceType            | alom,apc,bladecenter,drac5,eps,ilo,ipmilan,rsa,rsb,wti,cisco_ucs | 2.2
       318 | VdsFenceOptionMapping   | alom:secure=secure,port=ipport;apc:secure=secure,port=ipport,slot=port;bladecenter:secure=secure,port=ipport,slot=port;drac5:secure=secure,slot=port;eps:slot=port;ilo:secure=ssl,port=ipport;ipmilan:;rsa:secure=secure,port=ipport;rsb:;wti:secure=secure,port=ipport,slot=port;cisco_ucs:secure=ssl,slot=port;ilo3: | general
(6 rows)

--
Dmitriy Pyryakov
VimpelCom Ltd.



From:    Eli Mesika emes...@redhat.com
To:      Dmitriy A Pyryakov dpyrya...@ekb.beeline.ru
Cc:      users@ovirt.org, Itamar Heim ih...@redhat.com
Date:    10.09.2012 15:02
Subject: Re: [Users] HP Integrated Lights Out 3





- Original Message -
 From: Dmitriy A Pyryakov dpyrya...@ekb.beeline.ru
 To: Itamar Heim ih...@redhat.com
 Cc: Eli Mesika emes...@redhat.com, users@ovirt.org
 Sent: Monday, September 10, 2012 11:57:25 AM
 Subject: Re: [Users] HP Integrated Lights Out 3





 Yes, but oVirt don't pass this options to fence_ipmilan command.
 There are all logged options: Thread-43892::DEBUG::2012-09-07
 13:14:03,094::API::1024::vds::(fenceNode)
 fenceNode
(addr=192.168.10.103,port=,agent=ipmilan,user=fence_ilo,passwd=,action=status,secure=,options=)

 Thread-43892::DEBUG::2012-09-07
 13:14:04,116::API::1050::vds::(fenceNode) rc 1 in
 agent=fence_ipmilan
 ipaddr=192.168.10.103
 login=fence_ilo
 option=status
 passwd=
 out Getting status of IPMI:192.168.10.103...Chassis power = Unknown
 Failed
 err


Hi,
As you can see, the options arrived at VDSM as an empty string.
Can you please paste the output of the following SQL:

select * from vdc_options where option_name = 'FenceAgentDefaultParams';

Thanks


 --
 Dmitriy Pyryakov
 VimpelCom Ltd.

 From: Itamar Heim ih...@redhat.com
 To: Dmitriy A Pyryakov dpyrya...@ekb.beeline.ru
 Cc: Eli Mesika emes...@redhat.com, users@ovirt.org
 Date: 10.09.2012 14:48
 Subject: Re: [Users] HP Integrated Lights Out 3




 On 09/10/2012 11:45 AM, Dmitriy A Pyryakov wrote:
  No, I don't see an ilo3 type of fencing device in my Power
  Management tab.
 
  I have fence_ilo3 command on my hypervisor host.

 did you try choosing ipmilan and passing options of
 lanplus,power_wait=4?


 
  When I run it with following options:
  privlvl=user
  ipaddr=192.168.10.103
  login=fence_ilo
  passwd=
  operation=status
 
  it print: Getting status of IPMI:192.168.10.103...Chassis power =
  On
  Done
 
  oVirt Engine Version: 3.1.0-2.fc17
 
 
  - -
  Dmitriy Pyryakov
  VimpelCom Ltd.
 
 
 
  From: Itamar Heim ih...@redhat.com
  To: Dmitriy A Pyryakov dpyrya...@ekb.beeline.ru
  Cc: users@ovirt.org, Eli Mesika emes...@redhat.com
  Date: 10.09.2012 14:28
  Subject: Re: [Users] HP Integrated Lights Out 3
 

 
 
 
  On 09/10/2012 06:17 AM, Dmitriy A Pyryakov wrote:
   Hello,
  
   I need to use the HP i-lo 3 as fencing device for OOB power
   management.
  
   In man 8 page of fence_ipmilan I find that I must use lanplus and
   power_wait=4 options. When I insert this options in Option line
   of
   Power Management tab and press Test button - test is failed.
  
   There is the part of vdsm.log at this time:
  
   Thread-43892::DEBUG::2012-09-07
   13:14:03,094::API::1024::vds::(fenceNode)
  
  fenceNode
(addr=192.168.10.103,port=,agent=ipmilan,user=fence_ilo,passwd=,action=status,secure=,options=)

   Thread-43892::DEBUG::2012-09-07
   13:14:04,116::API::1050::vds::(fenceNode) rc 1 in
   agent=fence_ipmilan

[Users] HA: Re: HA: Re: HP Integrated Lights Out 3

2012-09-10 Thread Dmitriy A Pyryakov
Now ilo3 is present in the Power Management tab, but it still doesn't work.

part of vdsm.log:

Thread-258783::DEBUG::2012-09-10 17:39:06,359::API::1024::vds::(fenceNode)
fenceNode
(addr=192.168.10.103,port=,agent=ipmilan,user=Administrator,passwd=,action=status,secure=,options=)
Thread-258783::DEBUG::2012-09-10 17:39:07,386::API::1050::vds::(fenceNode)
rc 1 in agent=fence_ipmilan
ipaddr=192.168.10.103
login=Administrator
option=status
passwd=
 out Getting status of IPMI:192.168.10.103...Chassis power = Unknown
Failed
 err

part of engine.log:

2012-09-10 17:41:51,089 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.FenceVdsVDSCommand]
(ajp--0.0.0.0-8009-10) START, FenceVdsVDSCommand(vdsId =
71528b6e-f5e6-11e1-a15f-0011856cf23e, targetVdsId =
8dddf9e6-f80a-11e1-b036-0011856cf23e, action = Status, ip = 192.168.10.103,
port = , type = ipmilan, user = Administrator, password = **, options =
'lanplus,power_wait=4'), log id: f442157
2012-09-10 17:41:53,226 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.FenceVdsVDSCommand]
(ajp--0.0.0.0-8009-10) FINISH, FenceVdsVDSCommand, return: Test Failed,
Host Status is: unknown. The fence-agent script reported the following
error: Getting status of IPMI:192.168.10.103...Chassis power = Unknown
Failed
, log id: f442157

--
Dmitriy Pyryakov
VimpelCom Ltd.



From:    Eli Mesika emes...@redhat.com
To:      Dmitriy A Pyryakov dpyrya...@ekb.beeline.ru
Cc:      users@ovirt.org, Itamar Heim ih...@redhat.com
Date:    10.09.2012 17:29
Subject: Re: HA: Re: [Users] HP Integrated Lights Out 3





- Original Message -
 From: Itamar Heim ih...@redhat.com
 To: Eli Mesika emes...@redhat.com
 Cc: users@ovirt.org, Dmitriy A Pyryakov dpyrya...@ekb.beeline.ru
 Sent: Monday, September 10, 2012 2:14:31 PM
 Subject: Re: HA: Re: [Users] HP Integrated Lights Out 3

 On 09/10/2012 02:07 PM, Eli Mesika wrote:
 
 
  - Original Message -
  From: Itamar Heim ih...@redhat.com
  To: Dmitriy A Pyryakov dpyrya...@ekb.beeline.ru
  Cc: Eli Mesika emes...@redhat.com, users@ovirt.org
  Sent: Monday, September 10, 2012 12:51:03 PM
  Subject: Re: HA: Re: [Users] HP Integrated Lights Out 3
 
  On 09/10/2012 12:21 PM, Dmitriy A Pyryakov wrote:
  engine=# select * from vdc_options where option_name in
 
('FenceAgentDefaultParams','FenceAgentMapping','VdsFenceOptionMapping','VdsFenceType');

  option_id | option_name | option_value | version
 
---+-++-

  77 | FenceAgentMapping | ilo3=ipmilan | general
  76 | FenceAgentDefaultParams | ilo3:lanplus,power_wait=4 |
  general
  323 | VdsFenceType |
  alom,apc,bladecenter,drac5,eps,ilo,ipmilan,rsa,rsb,wti,cisco_ucs
  |
  3.1
  322 | VdsFenceType |
 
  eli - ilo3 is missing in 3.1?
  Yes, this is certainly a bug, I will open it and resolve ASAP

 this is a db change, so dmitriy can apply it easily as well.
Sure.
Dmitriy, please apply:
1) Run on your Postgres engine:
select fn_db_update_config_value('VdsFenceType','alom,apc,bladecenter,drac5,eps,ilo,ilo3,ipmilan,rsa,rsb,wti,cisco_ucs','3.1');


2) Restart engine
3) Check again (you should have ilo3 now in the UI list)
4) Let me know if it works

Thanks
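For the archives, on a default setup Eli's steps boil down to something like
this (the database name, user, and service name are assumptions; adjust to
your installation):

# psql -U postgres engine -c "select fn_db_update_config_value('VdsFenceType','alom,apc,bladecenter,drac5,eps,ilo,ilo3,ipmilan,rsa,rsb,wti,cisco_ucs','3.1');"
# service ovirt-engine restart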


  and maybe another bug on not passing fence options which i remember we
  had at some point
  Yes, I believe that the above will solve the issue since we had
  already tested ilo3 unless we have a regression I am not aware
  about.

 lets hope so. question is if it was fixed after ovirt 3.1 was done.

 
 
  alom,apc,bladecenter,drac5,eps,ilo,ilo3,ipmilan,rsa,rsb,wti,cisco_ucs
  | 3.0
  321 | VdsFenceType |
  alom,apc,bladecenter,drac5,eps,ilo,ipmilan,rsa,rsb,wti,cisco_ucs
  |
  2.2
  318 | VdsFenceOptionMapping |
 
alom:secure=secure,port=ipport;apc:secure=secure,port=ipport,slot=port;bladecenter:secure=secure,port=ipport,slot=port;drac5:secure=secure,slot=port;eps:slot=port;ilo:secure=ssl,port=ipport;ipmilan:;rsa:secure=secure,port=ipport;rsb:;wti:secure=secure,port=ipport,slot=port;cisco_ucs:secure=ssl,slot=port;ilo3:

  | general
  (6 rows)
 
  --
 
   Dmitriy Pyryakov
   VimpelCom Ltd.
 
 
 
  From: Eli Mesika emes...@redhat.com
  To: Dmitriy A Pyryakov dpyrya...@ekb.beeline.ru
  Cc: users@ovirt.org, Itamar Heim ih...@redhat.com
  Date: 10.09.2012 15:02
  Subject: Re: [Users] HP Integrated Lights Out 3
 

 
 
 
 
 
  - Original Message -
 From: Dmitriy A Pyryakov

[Users] HA: Re: HA: Re: HA: Re: HP Integrated Lights Out 3

2012-09-13 Thread Dmitriy A Pyryakov
Itamar Heim ih...@redhat.com wrote on 14.09.2012 04:45:31:

 From: Itamar Heim ih...@redhat.com
 To: Dmitriy A Pyryakov dpyrya...@ekb.beeline.ru
 Cc: users@ovirt.org
 Date: 14.09.2012 04:45
 Subject: Re: HA: Re: [Users] HA: Re: HP Integrated Lights Out 3

 On 09/13/2012 08:42 AM, Dmitriy A Pyryakov wrote:
  Itamar Heim ih...@redhat.com wrote on 13.09.2012 11:09:24:
 
   From: Itamar Heim ih...@redhat.com
   To: Dmitriy A Pyryakov dpyrya...@ekb.beeline.ru
   Cc: Darrell Budic darrell.bu...@bigwells.net, users@ovirt.org
   Date: 13.09.2012 11:09
   Subject: Re: [Users] HA: Re: HP Integrated Lights Out 3
   
On 09/13/2012 06:00 AM, Dmitriy A Pyryakov wrote:
 Darrell Budic darrell.bu...@bigwells.net wrote on 13.09.2012 07:43:44:

  From: Darrell Budic darrell.bu...@bigwells.net
  To: Dmitriy A Pyryakov dpyrya...@ekb.beeline.ru
  Cc: Eli Mesika emes...@redhat.com, users@ovirt.org
  Date: 13.09.2012 07:43
  Subject: Re: [Users] HP Integrated Lights Out 3
 
  I have this problem too. I actually tracked it down to the
engine
  not passing the arguments to the fence scripts but then got
  distracted and never followed up with a report. In my case, the
data
  base was correct, and if I ran the fence script by hand, it
would
  work, but the vdsm wasn't running it with the options or even
all
  the arguments (username/pw, etc). I've tried it with ilo3 and
  ipmilan both, same issue.
 
  If you'd like, I can recreate some of my debugging, I'd gotten
so
  far as to hack some print statements into the fence scripts
to
  demonstrate what was happening. Lost that with some rebuilds,
but
  easy enough to recreate...
 
  -Darrell

 Hello, Darrell.

 It would be great!
   
may i suggest you first try to apply this patch to vdsm (just edit
the
relevant line in vdsm)
   
commit 59934118e3a30c57539d2b71016532bdd9c4ab17
Author: Roy Golan rgo...@redhat.com
Date: Thu Aug 9 16:34:10 2012 +0300
   
fenceNode API is missing the options argument
   
Change-Id: Ib2ce9b0f71040f9198413fa06c5d8768994842ec
Signed-off-by: Roy Golan rgo...@redhat.com
Reviewed-on: http://gerrit.ovirt.org/7058
Reviewed-by: Dan Kenigsberg dan...@redhat.com
Reviewed-by: Omer Frenkel ofren...@redhat.com
Tested-by: Omer Frenkel ofren...@redhat.com
   
diff --git a/vdsm/BindingXMLRPC.py b/vdsm/BindingXMLRPC.py
index cc5300f..8b548e4 100644
--- a/vdsm/BindingXMLRPC.py
+++ b/vdsm/BindingXMLRPC.py
@@ -357,7 +357,7 @@ class BindingXMLRPC(object):
                   secure=False, options=''):
         api = API.Global()
         return api.fenceNode(addr, port, agent, username, password,
-                             action, secure)
+                             action, secure, options)
   
def setLogLevel(self, level):
api = API.Global()
   
 
  Here is my part of the old /usr/share/vdsm/BindingXMLRPC.py file from the
  proxy host:
 
  def fenceNode(self, addr, port, agent, username, password, action,
                secure=False, options=''):
      api = API.Global(self.cif)
      return api.fenceNode(addr, port, agent, username, password,
                           action, secure)
 
  there it is replaced with:
 
  def fenceNode(self, addr, port, agent, username, password, action,
                secure=False, options=''):
      api = API.Global(self.cif)
      return api.fenceNode(addr, port, agent, username, password,
                           action, secure, options)
 
  I restarted ovirt-engine and still see no options presented in vdsm.log.
  The test still fails.

 this is a vdsm change, not an ovirt-engine (restart vdsm?)
 is this ovirt node or plain fedora/el6?

I changed vdsm...

Restarting vdsm fixed it!
My hosts are two Fedora 17 hosts with a downgraded 3.4 kernel
(3.4.9-2.fc16.x86_64).

Now ilo3 and ipmilan work fine!
Thank you so much!


[Users] HA: Re: HA: Re: HA: Re: HP Integrated Lights Out 3

2012-09-13 Thread Dmitriy A Pyryakov
Darrell Budic darrell.bu...@bigwells.net wrote on 14.09.2012 05:32:33:

 From: Darrell Budic darrell.bu...@bigwells.net
 To: Itamar Heim ih...@redhat.com
 Cc: Dmitriy A Pyryakov dpyrya...@ekb.beeline.ru, users@ovirt.org
 Date: 14.09.2012 05:32
 Subject: Re: [Users] HA: Re: HA: Re: HP Integrated Lights Out 3

 That fix worked for me (ipmilan wise, anyway. Still no go on ilo,
 but we knew that, right?). Thanks Itamar!
 Dmitriy, make sure you do this to all your host nodes, it may run
 the test from any of them. You'll also want to be sure you delete /
 usr/share/vdsm/BindingXMLRPC.pyc and .pyo, otherwise the compiled
 python is likely to still get used.

That was not necessary for me.

 Finally, I did need to restart
 vdsmd on all my nodes, service vdsmd restart on my Centos 6.3
 system.

that's right!
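Putting the thread together, the manual fix on each plain Fedora/EL6 host is
roughly (a sketch; the one-line edit is the gerrit patch quoted above):

# vi /usr/share/vdsm/BindingXMLRPC.py    # make fenceNode pass 'options' through
# rm -f /usr/share/vdsm/BindingXMLRPC.pyc /usr/share/vdsm/BindingXMLRPC.pyo
# service vdsmd restart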

 Glad to know you can do that without causing problems for running vms.


yeah

thank you.

 I did notice that the ovirt management GUI still shows 3 Alerts in
 the alert area, and they are all Power Management test failed
 errors dated from the first time their particular node was added to
 the cluster. This is even after restarting a vdsmd again and seeing
 Host xxx power management was verified successfully. in the event log.

   -Darrell

 On Sep 13, 2012, at 5:45 PM, Itamar Heim wrote:

 On 09/13/2012 08:42 AM, Dmitriy A Pyryakov wrote:
 Itamar Heim ih...@redhat.com wrote on 13.09.2012 11:09:24:

  From: Itamar Heim ih...@redhat.com
  To: Dmitriy A Pyryakov dpyrya...@ekb.beeline.ru
  Cc: Darrell Budic darrell.bu...@bigwells.net, users@ovirt.org
  Date: 13.09.2012 11:09
  Subject: Re: [Users] HA: Re: HP Integrated Lights Out 3
 
  On 09/13/2012 06:00 AM, Dmitriy A Pyryakov wrote:
   Darrell Budic darrell.bu...@bigwells.net wrote on 13.09.2012 07:43:44:
  
    From: Darrell Budic darrell.bu...@bigwells.net
    To: Dmitriy A Pyryakov dpyrya...@ekb.beeline.ru
    Cc: Eli Mesika emes...@redhat.com, users@ovirt.org
    Date: 13.09.2012 07:43
    Subject: Re: [Users] HP Integrated Lights Out 3
   
I have this problem too. I actually tracked it down to the engine
not passing the arguments to the fence scripts but then got
distracted and never followed up with a report. In my case, the
data
base was correct, and if I ran the fence script by hand, it would
work, but the vdsm wasn't running it with the options or even all
the arguments (username/pw, etc). I've tried it with ilo3 and
ipmilan both, same issue.
   
If you'd like, I can recreate some of my debugging, I'd gotten so
    far as to hack some print statements into the fence scripts to
demonstrate what was happening. Lost that with some rebuilds, but
easy enough to recreate...
   
-Darrell
  
   Hello, Darrell.
  
   It would be great!
 
  may i suggest you first try to apply this patch to vdsm (just edit the
  relevant line in vdsm)
 
  commit 59934118e3a30c57539d2b71016532bdd9c4ab17
  Author: Roy Golan rgo...@redhat.com
  Date: Thu Aug 9 16:34:10 2012 +0300
 
  fenceNode API is missing the options argument
 
  Change-Id: Ib2ce9b0f71040f9198413fa06c5d8768994842ec
  Signed-off-by: Roy Golan rgo...@redhat.com
  Reviewed-on: http://gerrit.ovirt.org/7058
  Reviewed-by: Dan Kenigsberg dan...@redhat.com
  Reviewed-by: Omer Frenkel ofren...@redhat.com
  Tested-by: Omer Frenkel ofren...@redhat.com
 
  diff --git a/vdsm/BindingXMLRPC.py b/vdsm/BindingXMLRPC.py
  index cc5300f..8b548e4 100644
  --- a/vdsm/BindingXMLRPC.py
  +++ b/vdsm/BindingXMLRPC.py
  @@ -357,7 +357,7 @@ class BindingXMLRPC(object):
                     secure=False, options=''):
           api = API.Global()
           return api.fenceNode(addr, port, agent, username, password,
  -                             action, secure)
  +                             action, secure, options)
 
  def setLogLevel(self, level):
  api = API.Global()
 

 Here is my part of the old /usr/share/vdsm/BindingXMLRPC.py file from the
 proxy host:

 def fenceNode(self, addr, port, agent, username, password, action,
               secure=False, options=''):
     api = API.Global(self.cif)
     return api.fenceNode(addr, port, agent, username, password,
                          action, secure)

 there it is replaced with:

 def fenceNode(self, addr, port, agent, username, password, action,
               secure=False, options=''):
     api = API.Global(self.cif)
     return api.fenceNode(addr, port, agent, username, password,
                          action, secure, options)

 I restarted ovirt-engine and still see no options presented in vdsm.log.
 The test still fails.

 this is a vdsm change, not an ovirt-engine (restart vdsm?)
 is this ovirt node or plain fedora/el6?

 Darrell Budic
 Bigwells Technology LLC
 office: 312.529.7816
 cell: 608.239.4628


[Users] HA: Re: HA: Re: HA: Re: HP Integrated Lights Out 3

2012-09-20 Thread Dmitriy A Pyryakov

I changed the Fedora 17 hosts to oVirt nodes (first: 2.5.0-2.0.fc17, second:
2.5.1-1.0.fc17). SPM is on 2.5.0-2.0.fc17.

ilo3 doesn't work. In vdsm.log, options are still not presented.
BindingXMLRPC.py is not found on the proxy host in /usr/share/vdsm; there is
only the BindingXMLRPC.pyc file.
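For reference, a quick way to confirm which vdsm build a node carries and
what it ships for this module (a sketch; assumes the package is named vdsm):

# rpm -q vdsm
# rpm -ql vdsm | grep -i bindingxmlrpc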



Itamar Heim ih...@redhat.com wrote on 14.09.2012 13:46:35:

 From: Itamar Heim ih...@redhat.com
 To: Darrell Budic darrell.bu...@bigwells.net
 Cc: Dmitriy A Pyryakov dpyrya...@ekb.beeline.ru, users@ovirt.org
 Date: 14.09.2012 13:46
 Subject: Re: [Users] HA: Re: HA: Re: HP Integrated Lights Out 3

 On 09/14/2012 02:32 AM, Darrell Budic wrote:
  That fix worked for me (ipmilan wise, anyway. Still no go on ilo, but
we
  knew that, right?). Thanks Itamar!
 
  Dmitriy, make sure you do this to all your host nodes, it may run the
  test from any of them. You'll also want to be sure you delete
  /usr/share/vdsm/BindingXMLRPC.pyc and .pyo, otherwise the compiled
  python is likely to still get used. Finally, I did need to restart
vdsmd
  on all my nodes, service vdsmd restart on my Centos 6.3 system. Glad
  to know you can do that without causing problems for running vms.
 
  I did notice that the ovirt management GUI still shows 3 Alerts in the
  alert area, and they are all Power Management test failed errors
dated
  from the first time their particular node was added to the cluster.
This
  is even after restarting a vdsmd again and seeing Host xxx power
  management was verified successfully. in the event log.

 because the engine doesn't go and run 'test power management' all the
 time...
 click edit host, power management tab, click 'test'.


Re: [Users] HP Integrated Lights Out 3

2012-09-20 Thread Dmitriy A Pyryakov
Eli Mesika emes...@redhat.com wrote on 20.09.2012 14:55:41:

 From: Eli Mesika emes...@redhat.com
 To: Dmitriy A Pyryakov dpyrya...@ekb.beeline.ru
 Cc: users@ovirt.org, Itamar Heim ih...@redhat.com
 Date: 20.09.2012 14:55
 Subject: Re: [Users] HA: Re: HA: Re: HA: Re: HP Integrated Lights Out 3



 - Original Message -
  From: Dmitriy A Pyryakov dpyrya...@ekb.beeline.ru
  To: Itamar Heim ih...@redhat.com
  Cc: users@ovirt.org
  Sent: Thursday, September 20, 2012 9:59:34 AM
  Subject: [Users] HA: Re:  HA: Re:  HA: Re:   HP Integrated Lights Out 3
 
 
 
 
 
 
   I changed Fedora 17 hosts to oVirt nodes (first: 2.5.0-2.0.fc17,
   second: 2.5.1-1.0.fc17). SPM is on 2.5.0-2.0.fc17. ilo3 doesn't work. In
   vdsm.log, options are still not presented.

 Can you paste here the call to fenceNode from the vdsm.log, thanks
Of course,

vdsm.log
Thread-1882::DEBUG::2012-09-20 09:02:52,920::API::1024::vds::(fenceNode)
fenceNode
(addr=192.168.10.103,port=,agent=ipmilan,user=Administrator,passwd=,action=status,secure=,options=)
Thread-1882::DEBUG::2012-09-20 09:02:53,951::API::1050::vds::(fenceNode) rc
1 in agent=fence_ipmilan
ipaddr=192.168.10.103
login=Administrator
option=status
passwd=
 out Getting status of IPMI:192.168.10.103...Chassis power = Unknown
Failed
 err

engine.log:
2012-09-20 15:02:54,034 INFO  [org.ovirt.engine.core.bll.FencingExecutor]
(ajp--0.0.0.0-8009-5) Executing Status Power Management command, Proxy
Host:hyper1.ovirt.com, Agent:ipmilan, Target Host:, Management
IP:192.168.10.103, User:Administrator, Options:lanplus,power_wait=4
2012-09-20 15:02:54,056 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.FenceVdsVDSCommand]
(ajp--0.0.0.0-8009-5) START, FenceVdsVDSCommand(vdsId =
0a268762-02d7-11e2-b750-0011856cf23e, targetVdsId =
c57f5aa0-0301-11e2-8c67-0011856cf23e, action = Status, ip = 192.168.10.103,
port = , type = ipmilan, user = Administrator, password = **, options =
'lanplus,power_wait=4'), log id: 5821013b

 BindingXMLRPC.py not found on proxy
  host in /usr/share/vdsm. Only BindingXMLRPC.pyc file. Itamar Heim
  ih...@redhat.com wrote on 14.09.2012 13:46:35:
 
   From: Itamar Heim ih...@redhat.com
   To: Darrell Budic darrell.bu...@bigwells.net
   Cc: Dmitriy A Pyryakov dpyrya...@ekb.beeline.ru, users@ovirt.org
   Date: 14.09.2012 13:46
   Subject: Re: [Users] HA: Re: HA: Re: HP Integrated Lights Out 3
  
   On 09/14/2012 02:32 AM, Darrell Budic wrote:
That fix worked for me (ipmilan wise, anyway. Still no go on ilo,
but we
knew that, right?). Thanks Itamar!
   
Dmitriy, make sure you do this to all your host nodes, it may run
the
test from any of them. You'll also want to be sure you delete
/usr/share/vdsm/BindingXMLRPC.pyc and .pyo, otherwise the
compiled
python is likely to still get used. Finally, I did need to
restart vdsmd
on all my nodes, service vdsmd restart on my Centos 6.3 system.
Glad
to know you can do that without causing problems for running vms.
   
I did notice that the ovirt management GUI still shows 3 Alerts
in the
alert area, and they are all Power Management test failed
errors dated
from the first time their particular node was added to the
cluster. This
is even after restarting a vdsmd again and seeing Host xxx power
management was verified successfully. in the event log.
  
   because the engine doesn't go and run 'test power management' all
   the
   time...
   click edit host, power management tab, click 'test'.
  
 


[Users] Fatal error during migration

2012-09-20 Thread Dmitriy A Pyryakov


Hello,

I have two oVirt nodes ovirt-node-iso-2.5.0-2.0.fc17.

When I try to migrate a VM from one host to another, I get an error:
Migration failed due to Error: Fatal error during migration.

vdsm.log:
Thread-3797::DEBUG::2012-09-20
09:42:56,439::BindingXMLRPC::859::vds::(wrapper) client
[192.168.10.10]::call vmMigrate with ({'src': '192.168.10.13', 'dst':
'192.168.10.12:54321', 'vmId': '2bf3e6eb-49e4-42c7-8188-fc2aeeae2e86',
'method': 'online'},) {} flowID [180ad979]
Thread-3797::DEBUG::2012-09-20 09:42:56,439::API::441::vds::(migrate)
{'src': '192.168.10.13', 'dst': '192.168.10.12:54321', 'vmId':
'2bf3e6eb-49e4-42c7-8188-fc2aeeae2e86', 'method': 'online'}
Thread-3798::DEBUG::2012-09-20
09:42:56,441::vm::122::vm.Vm::(_setupVdsConnection)
vmId=`2bf3e6eb-49e4-42c7-8188-fc2aeeae2e86`::Destination server is:
192.168.10.12:54321
Thread-3797::DEBUG::2012-09-20
09:42:56,441::BindingXMLRPC::865::vds::(wrapper) return vmMigrate with
{'status': {'message': 'Migration process starting', 'code': 0}}
Thread-3798::DEBUG::2012-09-20
09:42:56,441::vm::124::vm.Vm::(_setupVdsConnection)
vmId=`2bf3e6eb-49e4-42c7-8188-fc2aeeae2e86`::Initiating connection with
destination
Thread-3798::DEBUG::2012-09-20
09:42:56,452::libvirtvm::240::vm.Vm::(_getDiskStats)
vmId=`2bf3e6eb-49e4-42c7-8188-fc2aeeae2e86`::Disk hdc stats not available
Thread-3798::DEBUG::2012-09-20
09:42:56,457::vm::170::vm.Vm::(_prepareGuest)
vmId=`2bf3e6eb-49e4-42c7-8188-fc2aeeae2e86`::migration Process begins
Thread-3798::DEBUG::2012-09-20 09:42:56,475::vm::217::vm.Vm::(run)
vmId=`2bf3e6eb-49e4-42c7-8188-fc2aeeae2e86`::migration semaphore acquired
Thread-3798::DEBUG::2012-09-20
09:42:56,888::libvirtvm::427::vm.Vm::(_startUnderlyingMigration)
vmId=`2bf3e6eb-49e4-42c7-8188-fc2aeeae2e86`::starting migration to qemu
+tls://192.168.10.12/system
Thread-3799::DEBUG::2012-09-20 09:42:56,889::libvirtvm::325::vm.Vm::(run)
vmId=`2bf3e6eb-49e4-42c7-8188-fc2aeeae2e86`::migration downtime thread
started
Thread-3800::DEBUG::2012-09-20 09:42:56,890::libvirtvm::353::vm.Vm::(run)
vmId=`2bf3e6eb-49e4-42c7-8188-fc2aeeae2e86`::starting migration monitor
thread
Thread-3798::DEBUG::2012-09-20
09:42:56,903::libvirtvm::340::vm.Vm::(cancel)
vmId=`2bf3e6eb-49e4-42c7-8188-fc2aeeae2e86`::canceling migration downtime
thread
Thread-3798::DEBUG::2012-09-20 09:42:56,904::libvirtvm::390::vm.Vm::(stop)
vmId=`2bf3e6eb-49e4-42c7-8188-fc2aeeae2e86`::stopping migration monitor
thread
Thread-3799::DEBUG::2012-09-20 09:42:56,904::libvirtvm::337::vm.Vm::(run)
vmId=`2bf3e6eb-49e4-42c7-8188-fc2aeeae2e86`::migration downtime thread
exiting
Thread-3798::ERROR::2012-09-20 09:42:56,905::vm::176::vm.Vm::(_recover)
vmId=`2bf3e6eb-49e4-42c7-8188-fc2aeeae2e86`::operation failed: Failed to
connect to remote libvirt URI qemu+tls://192.168.10.12/system
Thread-3798::ERROR::2012-09-20 09:42:56,977::vm::240::vm.Vm::(run)
vmId=`2bf3e6eb-49e4-42c7-8188-fc2aeeae2e86`::Failed to migrate
Traceback (most recent call last):
  File /usr/share/vdsm/vm.py, line 223, in run
  File /usr/share/vdsm/libvirtvm.py, line 451, in
_startUnderlyingMigration
  File /usr/share/vdsm/libvirtvm.py, line 491, in f
  File /usr/lib/python2.7/site-packages/vdsm/libvirtconnection.py, line
82, in wrapper
  File /usr/lib64/python2.7/site-packages/libvirt.py, line 1034, in
migrateToURI2
libvirtError: operation failed: Failed to connect to remote libvirt URI
qemu+tls://192.168.10.12/system
Thread-3802::DEBUG::2012-09-20
09:42:57,793::BindingXMLRPC::859::vds::(wrapper) client
[192.168.10.10]::call vmGetStats with
('2bf3e6eb-49e4-42c7-8188-fc2aeeae2e86',) {}
Thread-3802::DEBUG::2012-09-20
09:42:57,793::libvirtvm::240::vm.Vm::(_getDiskStats)
vmId=`2bf3e6eb-49e4-42c7-8188-fc2aeeae2e86`::Disk hdc stats not available
Thread-3802::DEBUG::2012-09-20
09:42:57,794::BindingXMLRPC::865::vds::(wrapper) return vmGetStats with
{'status': {'message': 'Done', 'code': 0}, 'statsList': [{'status': 'Up',
'username': 'Unknown', 'memUsage': '0', 'acpiEnable': 'true', 'pid':
'22047', 'displayIp': '192.168.10.13', 'displayPort': u'5912', 'session':
'Unknown', 'displaySecurePort': u'5913', 'timeOffset': '0', 'hash':
'3018874162324753083', 'pauseCode': 'NOERR', 'clientIp': '', 'kvmEnable':
'true', 'network': {u'vnet6': {'macAddr': '00:1a:4a:a8:0a:08', 'rxDropped':
'0', 'rxErrors': '0', 'txDropped': '0', 'txRate': '0.0', 'rxRate': '0.0',
'txErrors': '0', 'state': 'unknown', 'speed': '1000', 'name': u'vnet6'}},
'vmId': '2bf3e6eb-49e4-42c7-8188-fc2aeeae2e86', 'displayType': 'qxl',
'cpuUser': '13.27', 'disks': {u'hdc': {'flushLatency': '0', 'readLatency':
'0', 'writeLatency': '0'}, u'hda': {'readLatency': '6183805',
'apparentsize': '11811160064', 'writeLatency': '0', 'imageID':
'd96d19f6-5a28-4fef-892f-4a04549d4e38', 'flushLatency': '0', 'readRate':
'271.87', 'truesize': '11811160064', 'writeRate': '0.00'}},
'monitorResponse': '0', 'statsAge': '0.77', 'cpuIdle': '86.73',
'elapsedTime': '3941', 'vmType': 'kvm', 'cpuSys': '0.00', 'appsList': [],
'guestIPs': '', 'nice': ''}]}
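The traceback points at the TLS link between the two hosts; it can be tested
outside vdsm from the source host like this (a sketch; 16514 is libvirt's
TLS port):

# nc -z 192.168.10.12 16514 && echo open || echo closed
# virsh -c qemu+tls://192.168.10.12/system list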

[Users] HA: Re: Fatal error during migration

2012-09-20 Thread Dmitriy A Pyryakov
Michal Skrivanek michal.skriva...@redhat.com wrote on 20.09.2012 16:02:11:

 From: Michal Skrivanek michal.skriva...@redhat.com
 To: Dmitriy A Pyryakov dpyrya...@ekb.beeline.ru
 Cc: users@ovirt.org
 Date: 20.09.2012 16:02
 Subject: Re: [Users] Fatal error during migration

 Hi,
 well, so what is the other side saying? Maybe some connectivity
 problems between those 2 hosts? firewall?

 Thanks,
 michal

Yes, the firewall is not configured properly by default. If I stop it, the
migration completes.
Thanks.

 On Sep 20, 2012, at 11:55 , Dmitriy A Pyryakov wrote:

  Hello,
 
  I have two oVirt nodes ovirt-node-iso-2.5.0-2.0.fc17.
 
  When I try to migrate VM from one host to another, I have an
 error: Migration failed due to Error: Fatal error during migration.
 
  vdsm.log:
  Thread-3797::DEBUG::2012-09-20 09:42:56,439::BindingXMLRPC::
 859::vds::(wrapper) client [192.168.10.10]::call vmMigrate with
 ({'src': '192.168.10.13', 'dst': '192.168.10.12:54321', 'vmId':
 '2bf3e6eb-49e4-42c7-8188-fc2aeeae2e86', 'method': 'online'},) {}
 flowID [180ad979]
  Thread-3797::DEBUG::2012-09-20 09:42:56,439::API::441::vds::
 (migrate) {'src': '192.168.10.13', 'dst': '192.168.10.12:54321',
 'vmId': '2bf3e6eb-49e4-42c7-8188-fc2aeeae2e86', 'method': 'online'}
  Thread-3798::DEBUG::2012-09-20 09:42:56,441::vm::122::vm.Vm::
 (_setupVdsConnection) vmId=`2bf3e6eb-49e4-42c7-8188-
 fc2aeeae2e86`::Destination server is: 192.168.10.12:54321
  Thread-3797::DEBUG::2012-09-20 09:42:56,441::BindingXMLRPC::
 865::vds::(wrapper) return vmMigrate with {'status': {'message':
 'Migration process starting', 'code': 0}}
  Thread-3798::DEBUG::2012-09-20 09:42:56,441::vm::124::vm.Vm::
 (_setupVdsConnection) vmId=`2bf3e6eb-49e4-42c7-8188-
 fc2aeeae2e86`::Initiating connection with destination
  Thread-3798::DEBUG::2012-09-20 09:42:56,452::libvirtvm::
 240::vm.Vm::(_getDiskStats) vmId=`2bf3e6eb-49e4-42c7-8188-
 fc2aeeae2e86`::Disk hdc stats not available
  Thread-3798::DEBUG::2012-09-20 09:42:56,457::vm::170::vm.Vm::
 (_prepareGuest) vmId=`2bf3e6eb-49e4-42c7-8188-
 fc2aeeae2e86`::migration Process begins
  Thread-3798::DEBUG::2012-09-20 09:42:56,475::vm::217::vm.Vm::(run)
 vmId=`2bf3e6eb-49e4-42c7-8188-fc2aeeae2e86`::migration semaphore acquired
  Thread-3798::DEBUG::2012-09-20 09:42:56,888::libvirtvm::
 427::vm.Vm::(_startUnderlyingMigration)
 vmId=`2bf3e6eb-49e4-42c7-8188-fc2aeeae2e86`::starting migration to
 qemu+tls://192.168.10.12/system
  Thread-3799::DEBUG::2012-09-20 09:42:56,889::libvirtvm::
 325::vm.Vm::(run) vmId=`2bf3e6eb-49e4-42c7-8188-
 fc2aeeae2e86`::migration downtime thread started
  Thread-3800::DEBUG::2012-09-20 09:42:56,890::libvirtvm::
 353::vm.Vm::(run) vmId=`2bf3e6eb-49e4-42c7-8188-
 fc2aeeae2e86`::starting migration monitor thread
  Thread-3798::DEBUG::2012-09-20 09:42:56,903::libvirtvm::
 340::vm.Vm::(cancel) vmId=`2bf3e6eb-49e4-42c7-8188-
 fc2aeeae2e86`::canceling migration downtime thread
  Thread-3798::DEBUG::2012-09-20 09:42:56,904::libvirtvm::
 390::vm.Vm::(stop) vmId=`2bf3e6eb-49e4-42c7-8188-
 fc2aeeae2e86`::stopping migration monitor thread
  Thread-3799::DEBUG::2012-09-20 09:42:56,904::libvirtvm::
 337::vm.Vm::(run) vmId=`2bf3e6eb-49e4-42c7-8188-
 fc2aeeae2e86`::migration downtime thread exiting
  Thread-3798::ERROR::2012-09-20 09:42:56,905::vm::176::vm.Vm::
 (_recover) vmId=`2bf3e6eb-49e4-42c7-8188-fc2aeeae2e86`::operation
 failed: Failed to connect to remote libvirt URI qemu+tls://192.168.
 10.12/system
  Thread-3798::ERROR::2012-09-20 09:42:56,977::vm::240::vm.Vm::(run)
 vmId=`2bf3e6eb-49e4-42c7-8188-fc2aeeae2e86`::Failed to migrate
  Traceback (most recent call last):
  File /usr/share/vdsm/vm.py, line 223, in run
  File /usr/share/vdsm/libvirtvm.py, line 451, in
_startUnderlyingMigration
  File /usr/share/vdsm/libvirtvm.py, line 491, in f
  File /usr/lib/python2.7/site-packages/vdsm/libvirtconnection.py,
 line 82, in wrapper
  File /usr/lib64/python2.7/site-packages/libvirt.py, line 1034,
 in migrateToURI2
  libvirtError: operation failed: Failed to connect to remote
 libvirt URI qemu+tls://192.168.10.12/system
 
  Thread-3802::DEBUG::2012-09-20 09:42:57,793::BindingXMLRPC::
 859::vds::(wrapper) client [192.168.10.10]::call vmGetStats with
 ('2bf3e6eb-49e4-42c7-8188-fc2aeeae2e86',) {}
  Thread-3802::DEBUG::2012-09-20 09:42:57,793::libvirtvm::
 240::vm.Vm::(_getDiskStats) vmId=`2bf3e6eb-49e4-42c7-8188-
 fc2aeeae2e86`::Disk hdc stats not available
  Thread-3802::DEBUG::2012-09-20 09:42:57,794::BindingXMLRPC::
 865::vds::(wrapper) return vmGetStats with {'status': {'message':
 'Done', 'code': 0}, 'statsList': [{'status': 'Up', 'username':
 'Unknown', 'memUsage': '0', 'acpiEnable': 'true', 'pid': '22047',
 'displayIp': '192.168.10.13', 'displayPort': u'5912', 'session':
 'Unknown', 'displaySecurePort': u'5913', 'timeOffset': '0', 'hash':
 '3018874162324753083', 'pauseCode': 'NOERR', 'clientIp': '',
 'kvmEnable': 'true', 'network': {u'vnet6': {'macAddr': '00:1a:4a:a8:
 0a:08', 'rxDropped': '0', 'rxErrors': '0', 'txDropped': '0

Re: [Users] HP Integrated Lights Out 3

2012-09-20 Thread Dmitriy A Pyryakov
Eli Mesika emes...@redhat.com wrote on 20.09.2012 15:58:58:

 From: Eli Mesika emes...@redhat.com
 To: Dmitriy A Pyryakov dpyrya...@ekb.beeline.ru
 Cc: Itamar Heim ih...@redhat.com, users@ovirt.org, Roy Golan rgo...@redhat.com
 Date: 20.09.2012 15:59
 Subject: Re: [Users] HP Integrated Lights Out 3
 Тема: Re: [Users] HP Integrated Lights Out 3



 - Original Message -
  From: Dmitriy A Pyryakov dpyrya...@ekb.beeline.ru
  To: Eli Mesika emes...@redhat.com
  Cc: Itamar Heim ih...@redhat.com, users@ovirt.org
  Sent: Thursday, September 20, 2012 12:05:58 PM
  Subject: Re: [Users] HP Integrated Lights Out 3
 
 
 
 
 
  Eli Mesika emes...@redhat.com wrote on 20.09.2012 14:55:41:
   From: Eli Mesika emes...@redhat.com
   To: Dmitriy A Pyryakov dpyrya...@ekb.beeline.ru
   Cc: users@ovirt.org, Itamar Heim ih...@redhat.com
   Date: 20.09.2012 14:55
   Subject: Re: [Users] HA: Re: HA: Re: HA: Re: HP Integrated Lights Out 3
  
  
  
   - Original Message -
From: Dmitriy A Pyryakov dpyrya...@ekb.beeline.ru
To: Itamar Heim ih...@redhat.com
Cc: users@ovirt.org
Sent: Thursday, September 20, 2012 9:59:34 AM
Subject: [Users] HA: Re: HA: Re: HA: Re: HP Integrated Lights Out
3
   
   
   
   
   
   
I change Fedora 17 hosts to ovirt nodes (first - 2.5.0-2.0.fc17,
second -2.5.1-1.0.fc17). SPM on 2.5.0-2.0.fc17. ilo3 don't work.
In
vdsm.log now options presented.
  
   Can you paste here the call to fenceNode from the vdsm.log, thanks
  Of course,
 
  vdsm.log
  Thread-1882::DEBUG::2012-09-20
  09:02:52,920::API::1024::vds::(fenceNode)
  fenceNode(addr=192.168.10.103,port=,agent=ipmilan,user=Administrator,passwd=,action=status,secure=,options=)


 See, here in the PM Status command, options are empty in VDSM.

  Thread-1882::DEBUG::2012-09-20
  09:02:53,951::API::1050::vds::(fenceNode) rc 1 in
  agent=fence_ipmilan
  ipaddr=192.168.10.103
  login=Administrator
  option=status
  passwd=
  out Getting status of IPMI:192.168.10.103...Chassis power = Unknown
  Failed
  err
 
  engine.log:
  2012-09-20 15:02:54,034 INFO
  [org.ovirt.engine.core.bll.FencingExecutor] (ajp--0.0.0.0-8009-5)
  Executing Status Power Management command, Proxy
  Host:hyper1.ovirt.com, Agent:ipmilan, Target Host:, Management
  IP:192.168.10.103, User:Administrator, Options:lanplus,power_wait=4
  2012-09-20 15:02:54,056 INFO
  [org.ovirt.engine.core.vdsbroker.vdsbroker.FenceVdsVDSCommand]
  (ajp--0.0.0.0-8009-5) START, FenceVdsVDSCommand(vdsId =
  0a268762-02d7-11e2-b750-0011856cf23e, targetVdsId =
  c57f5aa0-0301-11e2-8c67-0011856cf23e, action = Status, ip =
  192.168.10.103, port = , type = ipmilan, user = Administrator,
  password = **, options = 'lanplus,power_wait=4'), log id:
  5821013b

 While we can still see that the engine sends those options correctly.
 CCing Roy.
 Roy, it seems connected to the bug you had resolved, but Dmitriy claims to
 have the right vdsm with the fix. Any ideas?

I can't apply the vdsm fix on oVirt nodes (because I don't have the
/usr/share/vdsm/BindingXMLRPC.py file). I can do it on FC17 hosts only.



 
   BindingXMLRPC.py not found on proxy
host in /usr/share/vdsm. Only BindingXMLRPC.pyc file. Itamar Heim
    ih...@redhat.com wrote on 14.09.2012 13:46:35:
   
      From: Itamar Heim ih...@redhat.com
      To: Darrell Budic darrell.bu...@bigwells.net
      Cc: Dmitriy A Pyryakov dpyrya...@ekb.beeline.ru, users@ovirt.org
      Date: 14.09.2012 13:46
      Subject: Re: [Users] HA: Re: HA: Re: HP Integrated Lights Out 3

 On 09/14/2012 02:32 AM, Darrell Budic wrote:
  That fix worked for me (ipmilan wise, anyway. Still no go on
  ilo,
  but we
  knew that, right?). Thanks Itamar!
 
  Dmitriy, make sure you do this to all your host nodes, it may
  run
  the
  test from any of them. You'll also want to be sure you delete
  /usr/share/vdsm/BindingXMLRPC.pyc and .pyo, otherwise the
  compiled
  python is likely to still get used. Finally, I did need to
  restart vdsmd
  on all my nodes, service vdsmd restart on my Centos 6.3
  system.
  Glad
  to know you can do that without causing problems for running
  vms.
 
  I did notice that the ovirt management GUI still shows 3
  Alerts
  in the
  alert area, and they are all Power Management test failed
  errors dated
  from the first time their particular node was added to the
  cluster. This
  is even after restarting a vdsmd again and seeing Host xxx
  power
  management was verified successfully. in the event log.

 because the engine doesn't go and run 'test power management'
 all
 the
 time...
 click edit host, power management tab, click 'test'.

   

[Users] HA: Re: HP Integrated Lights Out 3

2012-09-20 Thread Dmitriy A Pyryakov
Itamar Heim ih...@redhat.com wrote on 20.09.2012 16:01:54:

 From: Itamar Heim ih...@redhat.com
 To: Eli Mesika emes...@redhat.com
 Cc: Dmitriy A Pyryakov dpyrya...@ekb.beeline.ru, users@ovirt.org,
 Roy Golan rgo...@redhat.com, Mike Burns mbu...@redhat.com
 Date: 20.09.2012 16:02
 Subject: Re: [Users] HP Integrated Lights Out 3

 On 09/20/2012 12:58 PM, Eli Mesika wrote:
 
 
  - Original Message -
  From: Dmitriy A Pyryakov dpyrya...@ekb.beeline.ru
  To: Eli Mesika emes...@redhat.com
  Cc: Itamar Heim ih...@redhat.com, users@ovirt.org
  Sent: Thursday, September 20, 2012 12:05:58 PM
  Subject: Re: [Users] HP Integrated Lights Out 3
 
 
 
 
 
  Eli Mesika emes...@redhat.com wrote on 20.09.2012 14:55:41:
  From: Eli Mesika emes...@redhat.com
  To: Dmitriy A Pyryakov dpyrya...@ekb.beeline.ru
  Cc: users@ovirt.org, Itamar Heim ih...@redhat.com
  Date: 20.09.2012 14:55
  Subject: Re: [Users] HA: Re: HA: Re: HA: Re: HP Integrated Lights Out 3
 
 
 
  - Original Message -
  From: Dmitriy A Pyryakov dpyrya...@ekb.beeline.ru
  To: Itamar Heim ih...@redhat.com
  Cc: users@ovirt.org
  Sent: Thursday, September 20, 2012 9:59:34 AM
  Subject: [Users] HA: Re: HA: Re: HA: Re: HP Integrated Lights Out
  3
 
 
 
 
 
 
  I change Fedora 17 hosts to ovirt nodes (first - 2.5.0-2.0.fc17,

 please note editing a file on an oVirt node requires you to persist it,
 or it will be lost on the next boot.
 Mike can explain this better than me.

What can I do to save my configuration changes so that they survive a reboot?
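For the archives, oVirt Node ships a persist command for exactly this
purpose; a sketch, assuming the edited file is the iptables configuration:

# persist /etc/sysconfig/iptables      # keep the file across reboots
# unpersist /etc/sysconfig/iptables    # undo later, if ever needed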


[Users] HA: Re: Fatal error during migration

2012-09-20 Thread Dmitriy A Pyryakov
Michal Skrivanek michal.skriva...@redhat.com wrote on 20.09.2012 16:13:16:

 From: Michal Skrivanek michal.skriva...@redhat.com
 To: Dmitriy A Pyryakov dpyrya...@ekb.beeline.ru
 Cc: users@ovirt.org
 Date: 20.09.2012 16:13
 Subject: Re: [Users] Fatal error during migration


 On Sep 20, 2012, at 12:07 , Dmitriy A Pyryakov wrote:

  Michal Skrivanek michal.skriva...@redhat.com wrote on 20.09.2012 16:02:11:
 
   From: Michal Skrivanek michal.skriva...@redhat.com
   To: Dmitriy A Pyryakov dpyrya...@ekb.beeline.ru
   Cc: users@ovirt.org
   Date: 20.09.2012 16:02
   Subject: Re: [Users] Fatal error during migration
  
   Hi,
   well, so what is the other side saying? Maybe some connectivity
   problems between those 2 hosts? firewall?
  
   Thanks,
   michal
 
  Yes, firewall is not configured properly by default. If I stop it,
 migration done.
  Thanks.
 The default is supposed to be:

 # oVirt default firewall configuration. Automatically generated by
 vdsm bootstrap script.
 *filter
 :INPUT ACCEPT [0:0]
 :FORWARD ACCEPT [0:0]
 :OUTPUT ACCEPT [0:0]
 -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
 -A INPUT -p icmp -j ACCEPT
 -A INPUT -i lo -j ACCEPT
 # vdsm
 -A INPUT -p tcp --dport 54321 -j ACCEPT
 # libvirt tls
 -A INPUT -p tcp --dport 16514 -j ACCEPT
 # SSH
 -A INPUT -p tcp --dport 22 -j ACCEPT
 # guest consoles
 -A INPUT -p tcp -m multiport --dports 5634:6166 -j ACCEPT
 # migration
 -A INPUT -p tcp -m multiport --dports 49152:49216 -j ACCEPT
 # snmp
 -A INPUT -p udp --dport 161 -j ACCEPT
 # Reject any other input traffic
 -A INPUT -j REJECT --reject-with icmp-host-prohibited
 -A FORWARD -m physdev ! --physdev-is-bridged -j REJECT --reject-with
 icmp-host-prohibited
 COMMIT

my default is:

# cat /etc/sysconfig/iptables
# oVirt automatically generated firewall configuration
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
-A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
-A INPUT -p icmp -j ACCEPT
-A INPUT -i lo -j ACCEPT
#vdsm
-A INPUT -p tcp --dport 54321 -j ACCEPT
# SSH
-A INPUT -p tcp --dport 22 -j ACCEPT
# guest consoles
-A INPUT -p tcp -m multiport --dports 5634:6166 -j ACCEPT
# migration
-A INPUT -p tcp -m multiport --dports 49152:49216 -j ACCEPT
# snmp
-A INPUT -p udp --dport 161 -j ACCEPT
#
-A INPUT -j REJECT --reject-with icmp-host-prohibited
-A FORWARD -m physdev ! --physdev-is-bridged -j REJECT --reject-with
icmp-host-prohibited
COMMIT


 did you change it manually or is the default missing anything?

The default is missing the libvirt tls rule.
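For anyone else hitting this, the missing rule can be added on each node
without a reboot (a sketch; -I inserts it ahead of the final REJECT rule,
and the same line must also go into /etc/sysconfig/iptables, persisted as
discussed above, to survive a restart):

# iptables -I INPUT -p tcp --dport 16514 -j ACCEPT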

 thanks,
 michal
   On Sep 20, 2012, at 11:55 , Dmitriy A Pyryakov wrote:
  
Hello,
   
I have two oVirt nodes ovirt-node-iso-2.5.0-2.0.fc17.
   
When I try to migrate VM from one host to another, I have an
   error: Migration failed due to Error: Fatal error during migration.
   
vdsm.log:
Thread-3797::DEBUG::2012-09-20 09:42:56,439::BindingXMLRPC::
   859::vds::(wrapper) client [192.168.10.10]::call vmMigrate with
   ({'src': '192.168.10.13', 'dst': '192.168.10.12:54321', 'vmId':
   '2bf3e6eb-49e4-42c7-8188-fc2aeeae2e86', 'method': 'online'},) {}
   flowID [180ad979]
Thread-3797::DEBUG::2012-09-20 09:42:56,439::API::441::vds::
   (migrate) {'src': '192.168.10.13', 'dst': '192.168.10.12:54321',
   'vmId': '2bf3e6eb-49e4-42c7-8188-fc2aeeae2e86', 'method': 'online'}
Thread-3798::DEBUG::2012-09-20 09:42:56,441::vm::122::vm.Vm::
   (_setupVdsConnection) vmId=`2bf3e6eb-49e4-42c7-8188-
   fc2aeeae2e86`::Destination server is: 192.168.10.12:54321
Thread-3797::DEBUG::2012-09-20 09:42:56,441::BindingXMLRPC::
   865::vds::(wrapper) return vmMigrate with {'status': {'message':
   'Migration process starting', 'code': 0}}
Thread-3798::DEBUG::2012-09-20 09:42:56,441::vm::124::vm.Vm::
   (_setupVdsConnection) vmId=`2bf3e6eb-49e4-42c7-8188-
   fc2aeeae2e86`::Initiating connection with destination
Thread-3798::DEBUG::2012-09-20 09:42:56,452::libvirtvm::
   240::vm.Vm::(_getDiskStats) vmId=`2bf3e6eb-49e4-42c7-8188-
   fc2aeeae2e86`::Disk hdc stats not available
Thread-3798::DEBUG::2012-09-20 09:42:56,457::vm::170::vm.Vm::
   (_prepareGuest) vmId=`2bf3e6eb-49e4-42c7-8188-
   fc2aeeae2e86`::migration Process begins
Thread-3798::DEBUG::2012-09-20 09:42:56,475::vm::217::vm.Vm::(run)
   vmId=`2bf3e6eb-49e4-42c7-8188-fc2aeeae2e86`::migration semaphore
acquired
Thread-3798::DEBUG::2012-09-20 09:42:56,888::libvirtvm::
   427::vm.Vm::(_startUnderlyingMigration)
   vmId=`2bf3e6eb-49e4-42c7-8188-fc2aeeae2e86`::starting migration to
   qemu+tls://192.168.10.12/system
Thread-3799::DEBUG::2012-09-20 09:42:56,889::libvirtvm::
   325::vm.Vm::(run) vmId=`2bf3e6eb-49e4-42c7-8188-
   fc2aeeae2e86`::migration downtime thread started
Thread-3800::DEBUG::2012-09-20 09:42:56,890::libvirtvm::
   353::vm.Vm::(run) vmId=`2bf3e6eb-49e4-42c7-8188-
   fc2aeeae2e86`::starting migration monitor thread
Thread-3798::DEBUG::2012-09-20 09:42:56,903::libvirtvm::
   340::vm.Vm::(cancel) vmId=`2bf3e6eb-49e4-42c7-8188