Re: [Users] RFE: A manual way of saying that only hostA in a DC shall be used as proxy for power commands

2012-07-30 Thread Karli Sjöberg

On 30 Jul 2012, at 11:01, Itamar Heim wrote:

On 07/30/2012 08:56 AM, Karli Sjöberg wrote:

On 28 Jul 2012, at 14:11, Moti Asayag wrote:

On 07/26/2012 02:53 PM, Karli Sjöberg wrote:
Hi,

In my DC, I have three hosts added:

hostA
hostB
hostC

I want a way to force the system to use only hostA as a proxy for power commands.

The algorithm for selecting a host to act as a proxy for PM commands is
quite naive: any host in the system with status UP.

You can see how it is being selected in FencingExecutor.FindVdsToFence()
from
ovirt-engine/backend/manager/modules/bll/src/main/java/org/ovirt/engine/core/bll/FencingExecutor.java

There is no other algorithm for the selection at the moment.
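
If you want to see the candidate pool from the outside, listing the hosts with
status UP through the REST API should mirror what the engine considers. A
rough, untested sketch (adjust the engine address, port and credentials to
your setup; these are placeholders, not a tested invocation):

curl -k -u 'admin@internal:PASSWORD' 'https://engine.example.com:8443/api/hosts?search=status%3Dup'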

How would you handle a case in which hostA isn't responsive? Wouldn't
you prefer trying to perform the fencing using another available host?


Let me explain a little so you can better understand my reasoning
behind this configuration.

We work with segmented, separated networks. One network for public
access, one for storage traffic, one for management and so on. That
means that if the nodes themselves have to do their own
power management, the nodes would require three interfaces each, and the
metal we are using for hosts just doesn't have that. But if we can use the
engine to do that, the hosts would only require two interfaces, which
most 1U servers are equipped with as standard (plus one
iLO/IPMI/whatever), so we can use them as hosts without issue. Then the
backend has one extra interface that it can use to communicate with the
respective service processors over the power management network.

Is there a better way to achieve what we are aiming for? Ideally, I
would like to set up the two NICs in a bond and create VLAN interfaces
on top of that bond. That way, I can have as many virtual interfaces as
I want without having more than two physical NICs, but I haven't been
able to find a good HOWTO explaining the process.
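
What I have pictured is something like this, in initscripts terms (a
completely untested sketch; the device names and the VLAN tag are made up,
and I assume oVirt would still want its bridges on top of these):

/etc/sysconfig/network-scripts/ifcfg-bond0:
DEVICE=bond0
BONDING_OPTS="mode=802.3ad miimon=100"
BOOTPROTO=none
ONBOOT=yes

/etc/sysconfig/network-scripts/ifcfg-em1 (and the same for em2):
DEVICE=em1
MASTER=bond0
SLAVE=yes
BOOTPROTO=none
ONBOOT=yes

/etc/sysconfig/network-scripts/ifcfg-bond0.100 (one of these per VLAN):
DEVICE=bond0.100
VLAN=yes
BOOTPROTO=none
ONBOOT=yes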


I think there is a difference between:
1. allowing the engine to fence
2. allowing the fencing host to be chosen per cluster (or per host)

It sounds like you actually want #1, but can live with #2, by installing
the engine as a host as well.

Exactly, I can live with #2, as I have the engine added as hostA in my DC


With Kind Regards
---
Karli Sjöberg
Swedish University of Agricultural Sciences
Box 7079 (Visiting Address Kronåsvägen 8)
S-750 07 Uppsala, Sweden
Phone: +46-(0)18-67 15 66
karli.sjob...@slu.se



Re: [Users] RFE: A manual way of saying that only hostA in a DC shall be used as proxy for power commands

2012-07-30 Thread Itamar Heim

On 07/30/2012 12:03 PM, Karli Sjöberg wrote:


Exactly, I can live with #2, as I have the engine added as hostA in my DC


Well, the question is: if choosing another host to use for fencing
would/should be limited to hosts from the same DC, then the engine can only be
used to fence one DC.
Also, for any host other than the engine, the question is what to do if it is
down...



Re: [Users] RFE: A manual way of saying that only hostA in a DC shall be used as proxy for power commands

2012-07-30 Thread Karli Sjöberg

On 30 Jul 2012, at 12:26, Itamar Heim wrote:


Well, the question is: if choosing another host to use for fencing
would/should be limited to hosts from the same DC, then the engine can only be
used to fence one DC.

I'm quoting you here:
"1. power management is DC wide, not cluster."

So this wouldn't be any different from its current state.


Also, for any host other than the engine, the question is what to do if it is
down...



With Kind Regards
---
Karli Sjöberg
Swedish University of Agricultural Sciences
Box 7079 (Visiting Address Kronåsvägen 8)
S-750 07 Uppsala, Sweden
Phone: +46-(0)18-67 15 66
karli.sjob...@slu.se



Re: [Users] RFE: A manual way of saying that only hostA in a DC shall be used as proxy for power commands

2012-07-30 Thread Itamar Heim

On 07/30/2012 04:25 PM, Karli Sjöberg wrote:




I'm quoting you here:
"1. power management is DC wide, not cluster."

So this wouldn't be any different from its current state.


True, but if you have multiple DCs, the engine as a host can be used to
fence only one DC, while if the engine is 'special', it can be used to fence
in all DCs.







[Users] F17 vdsm bootstrap_complete trouble

2012-07-30 Thread Ryan Harper
I'm having trouble getting an F17 system[1] added to engine[2]. The symptoms:
in the engine UI, it says the install fails. I'm using the latest
rpms[3].

On the end-point, the bootstrap log shows success.

However, when I attempt to test the vdsm install with:

vdsClient -s 0 getVdsCaps

I get a nice ssl error:

[root@hungerforce tmp]# vdsClient -s 0 getVdsCaps
Traceback (most recent call last):
  File "/usr/share/vdsm/vdsClient.py", line 2275, in <module>
    code, message = commands[command][0](commandArgs)
  File "/usr/share/vdsm/vdsClient.py", line 403, in do_getCap
    return self.ExecAndExit(self.s.getVdsCapabilities())
  File "/usr/lib64/python2.7/xmlrpclib.py", line 1224, in __call__
    return self.__send(self.__name, args)
  File "/usr/lib64/python2.7/xmlrpclib.py", line 1578, in __request
    verbose=self.__verbose
  File "/usr/lib64/python2.7/xmlrpclib.py", line 1264, in request
    return self.single_request(host, handler, request_body, verbose)
  File "/usr/lib64/python2.7/xmlrpclib.py", line 1292, in single_request
    self.send_content(h, request_body)
  File "/usr/lib64/python2.7/xmlrpclib.py", line 1439, in send_content
    connection.endheaders(request_body)
  File "/usr/lib64/python2.7/httplib.py", line 954, in endheaders
    self._send_output(message_body)
  File "/usr/lib64/python2.7/httplib.py", line 814, in _send_output
    self.send(msg)
  File "/usr/lib64/python2.7/httplib.py", line 776, in send
    self.connect()
  File "/usr/lib/python2.7/site-packages/vdsm/SecureXMLRPCServer.py", line 98, in connect
    cert_reqs=self.cert_reqs)
  File "/usr/lib64/python2.7/ssl.py", line 381, in wrap_socket
    ciphers=ciphers)
  File "/usr/lib64/python2.7/ssl.py", line 141, in __init__
    ciphers)
SSLError: [Errno 185090050] _ssl.c:340: error:0B084002:x509 certificate
routines:X509_load_cert_crl_file:system lib


This problem occurs because not all of the SSL certs for vdsm are present.
On a working host:

[root@ichigo-dom226 tmp]# find /etc/pki/vdsm -type f
/etc/pki/vdsm/certs/cacert.pem
/etc/pki/vdsm/certs/vdsmcert.pem
/etc/pki/vdsm/keys/libvirt_password
/etc/pki/vdsm/keys/dh.pem
/etc/pki/vdsm/keys/vdsmkey.pem

On the host with the error:

[root@hungerforce tmp]# find /etc/pki/vdsm -type f
/etc/pki/vdsm/keys/dh.pem
/etc/pki/vdsm/keys/libvirt_password
/etc/pki/vdsm/keys/vdsmkey.pem


As it turns out: 
/etc/pki/vdsm/certs/cacert.pem
/etc/pki/vdsm/certs/vdsmcert.pem

These files are generated from:

/usr/libexec/vdsm/vdsm-gencerts.sh


which is invoked by:  deployUtils.instCert()

which is called by: vds_bootstrap_complete.py


So... the question is:  why isn't vds_bootstrap_complete.py getting
invoked?


Also, if I re-run vdsm-gencerts.sh and validate my certificates, I
can get vdsm to work properly on the host (vdsClient -s works)... then
if I go to engine and attempt to Activate, it just says the host is
non-responsive... re-installing re-breaks vdsm, since it doesn't generate
the SSL certs.
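
For reference, the manual workaround looks roughly like this (a sketch of
what I did; I invoked vdsm-gencerts.sh with no arguments, and the openssl
line is just my own sanity check, not part of the bootstrap):

/usr/libexec/vdsm/vdsm-gencerts.sh
openssl verify -CAfile /etc/pki/vdsm/certs/cacert.pem /etc/pki/vdsm/certs/vdsmcert.pem
systemctl restart vdsmd.service
vdsClient -s 0 getVdsCaps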




1. [root@hungerforce tmp]# rpm -qa | egrep "(vdsm|libvirt)"
vdsm-4.10.0-5.fc17.x86_64
vdsm-python-4.10.0-5.fc17.x86_64
libvirt-daemon-config-nwfilter-0.9.11.4-3.fc17.x86_64
libvirt-daemon-0.9.11.4-3.fc17.x86_64
libvirt-lock-sanlock-0.9.11.4-3.fc17.x86_64
vdsm-xmlrpc-4.10.0-5.fc17.noarch
vdsm-cli-4.10.0-5.fc17.noarch
libvirt-0.9.11.4-3.fc17.x86_64
libvirt-daemon-config-network-0.9.11.4-3.fc17.x86_64
libvirt-client-0.9.11.4-3.fc17.x86_64
libvirt-python-0.9.11.4-3.fc17.x86_64


2. [root@bebop ~]# rpm -qa | egrep "(ovirt-engine|vdsm)"
ovirt-engine-dbscripts-3.1.0-1.fc17.noarch
ovirt-engine-userportal-3.1.0-1.fc17.noarch
ovirt-engine-genericapi-3.1.0-1.fc17.noarch
ovirt-engine-cli-3.1.0.6-1.fc17.noarch
ovirt-engine-backend-3.1.0-1.fc17.noarch
ovirt-engine-notification-service-3.1.0-1.fc17.noarch
ovirt-engine-3.1.0-1.fc17.noarch
vdsm-bootstrap-4.10.0-5.fc17.noarch
ovirt-engine-webadmin-portal-3.1.0-1.fc17.noarch
ovirt-engine-restapi-3.1.0-1.fc17.noarch
ovirt-engine-config-3.1.0-1.fc17.noarch
ovirt-engine-sdk-3.1.0.4-1.fc17.noarch
ovirt-engine-tools-common-3.1.0-1.fc17.noarch
ovirt-engine-setup-3.1.0-1.fc17.noarch


3. http://ovirt.org/releases/3.1/rpm/Fedora/17/

-- 
Ryan Harper
Software Engineer; Linux Technology Center
IBM Corp., Austin, Tx
ry...@us.ibm.com



[Users] Can't add a host to a 2.2 compatibility level cluster

2012-07-30 Thread Nicholas Kesick




I am unable to add a host to a 2.2 compatibility level cluster. I'm trying to
add a host to that cluster because it is a Pentium D system (which supports
virtualization and 64-bit), and NetBurst is only available in 2.2 without a
database edit.

Webadmin events shows: "Host oVirtNode22 is compatible with versions (3.0,3.1)
and cannot join Cluster Legacy22 which is set to version 2.2." I don't see an
area to set the host compatibility level, so any suggestions?

When I try to add it to a 3.0/3.1 cluster, it reports missing cpu flags:
model_Conroe.

Thanks,
- Nick
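
P.S. How I'm checking what the CPU actually advertises on the node (grep is
stock; the vdsClient line assumes vdsm is already running there, and exactly
which flags the engine maps to model_Conroe is its call, not mine):

grep -m1 flags /proc/cpuinfo | tr ' ' '\n' | egrep 'vmx|svm|nx|lm'
vdsClient -s 0 getVdsCaps | grep -i cpuflags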


Re: [Users] RFE: A manual way of saying that only hostA in a DC shall be used as proxy for power commands

2012-07-30 Thread Karli Sjöberg

On 30 Jul 2012, at 17:18, Itamar Heim wrote:


True, but if you have multiple DCs, the engine as a host can be used to
fence only one DC, while if the engine is 'special', it can be used to fence
in all DCs.

OK, so you actually want the engine to be special. Well, that's how VMware
vCenter also manages power control, so I like that as well. Of course it's
always better if the engine could fence all DCs instead of just one, and it
would also make it a lot more intuitive to configure, because you wouldn't
need two hosts before the first one could be verified by the other. And if
the engine goes down, you would have bigger issues than power control to
worry about ;)





With Kind Regards
---
Karli Sjöberg
Swedish University of Agricultural Sciences
Box 7079 (Visiting Address Kronåsvägen 8)
S-750 07 Uppsala, Sweden
Phone: +46-(0)18-67 15 66
karli.sjob...@slu.se



[Users] Increase storage domain

2012-07-30 Thread Johan Kragsterman
Hi!

Interesting question, I would also be interested in that. LVM should be aware
of the expansion, I suppose... Did you run a pvs command for LVM to list the
size? It would be nice if LVM automatically changed the LV size... anyone know
how this works...?

Filesystem size is another thing. Filesystems don't exist on the storage
domain; it is only block storage. Your filesystems exist only inside your
VMs. I suppose you need to run a filesystem tool in the guest to expand
those, depending on your filesystem.
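
On the storage side, roughly what I would try (untested from here, assuming
iSCSI with multipath on the host; <lun-wwid> stands in for the LUN's
multipath map name):

iscsiadm -m session --rescan
multipathd -k'resize map <lun-wwid>'
pvresize /dev/mapper/<lun-wwid>
pvs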

-users-boun...@ovirt.org wrote: -
To: users@ovirt.org
From: Ricardo Esteves
Sent by: users-boun...@ovirt.org
Date: 2012.07.30 18:33
Subject: [Users] Increase storage domain

Hi,

I've increased the LUN I use as an iSCSI storage domain on my storage, but
oVirt still sees the LUN with the old size.

How do I refresh the LUN size, and how do I increase the filesystem of the
storage domain?

Best regards,
Ricardo Esteves.
   