Re: [ovirt-users] Dell DRAC 8

2015-03-06 Thread Patrick Russell
I ended up using the drac5 fence agent, with secure checked and an empty cmd_prompt=.

This works if I connect to the individual idrac for each blade. I never did get 
it working from the CMC level, but this way is acceptable.

-Patrick


From: Nathanaël Blanchet
Date: Friday, March 6, 2015 at 5:11 AM
To: users@ovirt.org
Subject: Re: [ovirt-users] Dell DRAC 8

Hi,

Did you try these options? They are intended for the second generation of 
idrac7, but they may work for idrac8.

- Original Message -
From: Oved Ourfali oourf...@redhat.com
To: Nathanaël Blanchet blanc...@abes.fr
Cc: users@ovirt.org, Eli Mesika emes...@redhat.com
Sent: Monday, February 2, 2015 7:04:05 PM
Subject: Re: [ovirt-users] power management test fails while ipmi command
succeeds

We've recently pushed a patch to have lanplus as a default option for drac7.
Until then you can set this option explicitly through the additional options
in the power management configuration tab in the edit/add host dialog.
Please check whether it helps.

CC-ing Eli, just in case I've missed something.
Please make sure that you have the following in the UI options field (on the 
Edit dialog PM TAB)

privlvl=OPERATOR,lanplus,delay=10
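
As a quick sanity check, the same options can be exercised from a shell with
fence_ipmilan, the agent these ipmilan options are typically mapped to - a
sketch with placeholder credentials; option spellings can vary between
fence-agents releases:

# should print the chassis power status, much like the ipmitool test
fence_ipmilan --ip=10.34.20.45 --username=root --password=PASSWORD \
    --lanplus --privlvl=OPERATOR --delay=10 --action=status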


Regards,
Oved

On Feb 2, 2015 6:10 PM, Nathanaël Blanchet blanc...@abes.fr wrote:
Hi all,

I've just installed two new Dell r630 hosts and everything is alright
with vdsm registration, but the power management test always fails with the
ipmi or drac7 agent. When I run the ipmi command manually, like this: ipmitool
-I lanplus -H 10.34.20.45 -U root chassis power status, it always
succeeds with "Chassis Power is on".
All of my other Dell servers (r710, r620) pass the power management test.
Is it a known bug?
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users

On 06/03/2015 03:54, Patrick Russell wrote:
Looks like it's just the CMC; I can use power management on the individual sled 
DRACs using the drac5 fence agent, no problem.

-Patrick


From: Volusion Inc
Date: Thursday, March 5, 2015 at 8:22 PM
To: users@ovirt.org
Subject: [ovirt-users] Dell DRAC 8

Anyone having success with fencing and DRAC 8 via CMC? We just received a 
couple of Dell FX2 chassis and we're having trouble getting the fencing agents 
to work on these. It is a CMC setup similar to the Dell blade chassis, but with 
DRAC version 8.

-Patrick



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Dell DRAC 8

2015-03-05 Thread Patrick Russell
Looks like it's just the CMC; I can use power management on the individual sled 
DRACs using the drac5 fence agent, no problem.

-Patrick


From: Volusion Inc
Date: Thursday, March 5, 2015 at 8:22 PM
To: users@ovirt.org
Subject: [ovirt-users] Dell DRAC 8

Anyone having success with fencing and DRAC 8 via CMC? We just received a 
couple of Dell FX2 chassis and we're having trouble getting the fencing agents 
to work on these. It is a CMC setup similar to the Dell blade chassis, but with 
DRAC version 8.

-Patrick
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Dell DRAC 8

2015-03-05 Thread Patrick Russell
Anyone having success with fencing and DRAC 8 via CMC? We just received a 
couple of Dell FX2 chassis and we're having trouble getting the fencing agents 
to work on these. It is a CMC setup similar to the Dell blade chassis, but with 
DRAC version 8.

-Patrick
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] 3.5.2 live merge

2015-04-30 Thread Patrick Russell
  
[org.ovirt.engine.core.bll.RemoveSnapshotCommandCallback] 
(DefaultQuartzScheduler_Worker-73) [5c8285eb] Waiting on Live Merge child 
commands to complete
2015-04-30 19:50:39,710 ERROR 
[org.ovirt.engine.core.bll.RemoveSnapshotSingleDiskLiveCommand] 
(DefaultQuartzScheduler_Worker-73) [598dbb1e] Failed child command status for 
step MERGE
2015-04-30 19:50:49,720 ERROR 
[org.ovirt.engine.core.bll.RemoveSnapshotSingleDiskLiveCommand] 
(DefaultQuartzScheduler_Worker-36) [5c8285eb] Merging of snapshot 
470b525e-fb30-4956-a702-f4c6389bcb60 images 
3f90ced5-58ea-43cc-955a-559e8d89fbb2..87f26ef2-6a84-4f7c-9771-be76650c703c 
failed. Images have been marked illegal and can no longer be previewed or 
reverted to. Please retry Live Merge on the snapshot to complete the operation.
2015-04-30 19:50:49,725 INFO  
[org.ovirt.engine.core.bll.RemoveSnapshotCommandCallback] 
(DefaultQuartzScheduler_Worker-36) [4cbab1c6] All Live Merge child commands 
have completed, status FAILED
2015-04-30 19:50:49,734 ERROR 
[org.ovirt.engine.core.bll.RemoveSnapshotSingleDiskLiveCommand] 
(DefaultQuartzScheduler_Worker-36) [598dbb1e] Merging of snapshot 
470b525e-fb30-4956-a702-f4c6389bcb60 images 
7ad1cf7d-f2e6-41d4-a3c4-57458e80c43d..2ca5b861-52a8-4c0f-92a2-a8f0a3ed4129 
failed. Images have been marked illegal and can no longer be previewed or 
reverted to. Please retry Live Merge on the snapshot to complete the operation.
2015-04-30 19:50:59,787 ERROR [org.ovirt.engine.core.bll.RemoveSnapshotCommand] 
(DefaultQuartzScheduler_Worker-16) [4cbab1c6] Ending command with failure: 
org.ovirt.engine.core.bll.RemoveSnapshotCommand
2015-04-30 19:50:59,837 ERROR 
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
(DefaultQuartzScheduler_Worker-16) [4cbab1c6] Correlation ID: 4cbab1c6, Call 
Stack: null, Custom Event ID: -1, Message: Failed to delete snapshot 
'test_snap1' for VM '3.5.2.wintest1'.


On Apr 30, 2015, at 9:54 AM, Patrick Russell 
patrick_russ...@volusion.com wrote:

Hi everyone,

We're not seeing live merge working as of the 3.5.2 update. We've tested using 
fibre channel and NFS-attached storage; both throw the same error code. Are 
other people seeing success with live merge after the update?

Here’s the environment:

Engine running on CentOS 6 x64, updated to 3.5.2 via yum update (standalone 
physical box, dual-socket hex-core + hyperthreading, 16GB memory)

# rpm -qa |grep ovirt
ovirt-engine-cli-3.5.0.5-1.el6.noarch
ovirt-engine-3.5.1.1-1.el6.noarch
ovirt-engine-setup-plugin-ovirt-engine-3.5.2-1.el6.noarch
ovirt-engine-setup-plugin-allinone-3.5.2-1.el6.noarch
ovirt-engine-setup-3.5.2-1.el6.noarch
ovirt-guest-tools-3.5.0-0.5.master.noarch
ovirt-host-deploy-1.3.1-1.el6.noarch
ovirt-engine-sdk-python-3.5.2.1-1.el6.noarch
ovirt-engine-setup-plugin-ovirt-engine-common-3.5.2-1.el6.noarch
ovirt-engine-backend-3.5.1.1-1.el6.noarch
ovirt-engine-userportal-3.5.1.1-1.el6.noarch
ovirt-engine-dbscripts-3.5.1.1-1.el6.noarch
ovirt-engine-tools-3.5.1.1-1.el6.noarch
ovirt-host-deploy-offline-1.3.1-1.el6.x86_64
ovirt-engine-setup-plugin-websocket-proxy-3.5.2-1.el6.noarch
ovirt-engine-websocket-proxy-3.5.2-1.el6.noarch
ovirt-iso-uploader-3.5.2-1.el6.noarch
ovirt-engine-extensions-api-impl-3.5.2-1.el6.noarch
ovirt-engine-jboss-as-7.1.1-1.el6.x86_64
ovirt-engine-webadmin-portal-3.5.1.1-1.el6.noarch
ovirt-engine-restapi-3.5.1.1-1.el6.noarch
ovirt-guest-tools-iso-3.5-7.noarch
ovirt-image-uploader-3.5.1-1.el6.noarch
ovirt-engine-lib-3.5.2-1.el6.noarch
ovirt-engine-setup-base-3.5.2-1.el6.noarch
ovirt-release35-003-1.noarch
ovirt-host-deploy-java-1.3.1-1.el6.noarch

Hypervisors are running ovirt-node, upgraded from ISO : 
http://resources.ovirt.org/pub/ovirt-3.5/iso/ovirt-node/el7-3.5.2/ovirt-node-iso-3.5-0.999.201504280931.el7.centos.iso


Here’s a snippet from the logs:

2015-04-29 18:47:16,947 INFO  
[org.ovirt.engine.core.vdsbroker.vdsbroker.MergeVDSCommand] (pool-7-thread-2) 
[48eb0b1d] FINISH, MergeVDSCommand, log id: 5121ecc9
2015-04-29 18:47:16,947 ERROR [org.ovirt.engine.core.bll.MergeCommand] 
(pool-7-thread-2) [48eb0b1d] Command org.ovirt.engine.core.bll.MergeCommand 
throw Vdc Bll exception. With error message VdcBLLException: 
org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException: 
VDSGenericException: VDSErrorException: Failed to MergeVDS, error = Merge 
failed, code = 52 (Failed with error mergeErr and code 52)
2015-04-29 18:47:16,954 ERROR [org.ovirt.engine.core.bll.MergeCommand] 
(pool-7-thread-2) [48eb0b1d] Transaction rolled-back for command: 
org.ovirt.engine.core.bll.MergeCommand.
2015-04-29 18:47:16,981 ERROR 
[org.ovirt.engine.core.vdsbroker.vdsbroker.MergeVDSCommand] (pool-7-thread-3) 
[5495bde7] Failed in MergeVDS method
2015-04-29 18:47:16,982 INFO  
[org.ovirt.engine.core.vdsbroker.vdsbroker.MergeVDSCommand] (pool-7-thread-3) 
[5495bde7] Command org.ovirt.engine.core.vdsbroker.vdsbroker.MergeVDSCommand 
return value
 StatusOnlyReturnForXmlRpc [mStatus=StatusForXmlRpc [mCode=52, mMessage=Merge 
failed

[ovirt-users] 3.5.2 live merge

2015-04-30 Thread Patrick Russell
Hi everyone,

We're not seeing live merge working as of the 3.5.2 update. We've tested using 
fibre channel and NFS-attached storage; both throw the same error code. Are 
other people seeing success with live merge after the update?

Here’s the environment:

Engine running on CentOS 6 x64, updated to 3.5.2 via yum update (standalone 
physical box, dual-socket hex-core + hyperthreading, 16GB memory)

# rpm -qa |grep ovirt
ovirt-engine-cli-3.5.0.5-1.el6.noarch
ovirt-engine-3.5.1.1-1.el6.noarch
ovirt-engine-setup-plugin-ovirt-engine-3.5.2-1.el6.noarch
ovirt-engine-setup-plugin-allinone-3.5.2-1.el6.noarch
ovirt-engine-setup-3.5.2-1.el6.noarch
ovirt-guest-tools-3.5.0-0.5.master.noarch
ovirt-host-deploy-1.3.1-1.el6.noarch
ovirt-engine-sdk-python-3.5.2.1-1.el6.noarch
ovirt-engine-setup-plugin-ovirt-engine-common-3.5.2-1.el6.noarch
ovirt-engine-backend-3.5.1.1-1.el6.noarch
ovirt-engine-userportal-3.5.1.1-1.el6.noarch
ovirt-engine-dbscripts-3.5.1.1-1.el6.noarch
ovirt-engine-tools-3.5.1.1-1.el6.noarch
ovirt-host-deploy-offline-1.3.1-1.el6.x86_64
ovirt-engine-setup-plugin-websocket-proxy-3.5.2-1.el6.noarch
ovirt-engine-websocket-proxy-3.5.2-1.el6.noarch
ovirt-iso-uploader-3.5.2-1.el6.noarch
ovirt-engine-extensions-api-impl-3.5.2-1.el6.noarch
ovirt-engine-jboss-as-7.1.1-1.el6.x86_64
ovirt-engine-webadmin-portal-3.5.1.1-1.el6.noarch
ovirt-engine-restapi-3.5.1.1-1.el6.noarch
ovirt-guest-tools-iso-3.5-7.noarch
ovirt-image-uploader-3.5.1-1.el6.noarch
ovirt-engine-lib-3.5.2-1.el6.noarch
ovirt-engine-setup-base-3.5.2-1.el6.noarch
ovirt-release35-003-1.noarch
ovirt-host-deploy-java-1.3.1-1.el6.noarch

Hypervisors are running ovirt-node, upgraded from ISO : 
http://resources.ovirt.org/pub/ovirt-3.5/iso/ovirt-node/el7-3.5.2/ovirt-node-iso-3.5-0.999.201504280931.el7.centos.iso


Here’s a snippet from the logs:

2015-04-29 18:47:16,947 INFO  
[org.ovirt.engine.core.vdsbroker.vdsbroker.MergeVDSCommand] (pool-7-thread-2) 
[48eb0b1d] FINISH, MergeVDSCommand, log id: 5121ecc9
2015-04-29 18:47:16,947 ERROR [org.ovirt.engine.core.bll.MergeCommand] 
(pool-7-thread-2) [48eb0b1d] Command org.ovirt.engine.core.bll.MergeCommand 
throw Vdc Bll exception. With error message VdcBLLException: 
org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException: 
VDSGenericException: VDSErrorException: Failed to MergeVDS, error = Merge 
failed, code = 52 (Failed with error mergeErr and code 52)
2015-04-29 18:47:16,954 ERROR [org.ovirt.engine.core.bll.MergeCommand] 
(pool-7-thread-2) [48eb0b1d] Transaction rolled-back for command: 
org.ovirt.engine.core.bll.MergeCommand.
2015-04-29 18:47:16,981 ERROR 
[org.ovirt.engine.core.vdsbroker.vdsbroker.MergeVDSCommand] (pool-7-thread-3) 
[5495bde7] Failed in MergeVDS method
2015-04-29 18:47:16,982 INFO  
[org.ovirt.engine.core.vdsbroker.vdsbroker.MergeVDSCommand] (pool-7-thread-3) 
[5495bde7] Command org.ovirt.engine.core.vdsbroker.vdsbroker.MergeVDSCommand 
return value
 StatusOnlyReturnForXmlRpc [mStatus=StatusForXmlRpc [mCode=52, mMessage=Merge 
failed]]


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] oVirt 3.5.2 Hypervisor crash, guest VMs not migrating

2015-04-17 Thread Patrick Russell
If you turn the host back on, does the VM then power up on the host that was 
never down?

If so, we have filed a bug around this in our 3.5.1 environment.

https://bugzilla.redhat.com/show_bug.cgi?id=1192596

-Patrick

 On Apr 16, 2015, at 7:01 PM, Ron V ronv...@abacom.com wrote:
 
 Hello,
 
 I am testing guest VM migration in the event of a host crash, and I am 
 surprised to see that guests that are selected to be highly available do not 
 migrate when a host is forcibly turned off.
 
 I have a 2 host cluster using iSCSI for storage, and when one of the hosts, 
 either the SPM or normal, is forcibly turned off, although the engine sees 
 the host as non-responsive, the VMs that were running on it remain on that 
 crashed host, and a question mark (?) appears next to them.  Other than 
 checking "Highly Available", is there another step that needs to be taken for 
 a VM to be restarted on a working host, should the host it is running on fail?
 
 Thanks,
 
 R.
 
 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] gluster export

2015-04-08 Thread Patrick Russell
We did this from our node 01. In the lines that contain hostname:/gluster… I 
just sanitized our hostnames; they were numbered 1-3 (i.e. hostname01, 
hostname02, hostname03).

-Patrick

On Apr 8, 2015, at 9:18 PM, Bill Dossett bill.doss...@pb.com wrote:

Thank you Patrick,  are you doing this on the engine or on the nodes?

Thanks again for the input.

Bill

From: Patrick Russell
Date: Wednesday, 8 April 2015 20:13
To: Bill Dossett
Cc: users@ovirt.org
Subject: Re: [ovirt-users] gluster export

Hi Bill,
You do need to create the export. Here are the steps we noted from our build. 
I'll try and find where we got some of the build ideas from.

# create the brick directories on each node, plus a mount point for the lock volume
mkdir -p /gluster/{data,engine,meta,xport}/brick ; mkdir /mnt/lock

# create four replica-3 volumes, one brick per node
# (hostname01-03 stand in for the three gluster node names)
for i in data engine meta xport ; do gluster volume create ${i} replica 3 
hostname01:/gluster/${i}/brick hostname02:/gluster/${i}/brick 
hostname03:/gluster/${i}/brick ; done

gluster volume status

# apply the oVirt virt option group and vdsm/kvm ownership (uid/gid 36), then start
for i in data engine meta xport ; do gluster volume set ${i} group virt ; done
for i in data engine meta xport ; do gluster volume set ${i} storage.owner-uid 
36 ; gluster volume set ${i} storage.owner-gid 36 ; done
for i in data engine meta xport ; do gluster volume start ${i} ; done
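
Once the volumes are started, the export path a client (or oVirt) mounts is 
just node:/volname - for example, a quick manual check using the placeholder 
hostnames above:

# mount the data volume from any one of the three nodes
mkdir -p /mnt/data
mount -t glusterfs hostname01:/data /mnt/data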

-Patrick

On Apr 8, 2015, at 8:32 PM, Bill Dossett bill.doss...@pb.com wrote:

Hi,

Kind of a stupid question, but I couldn't figure it out today.

I set up a gluster cluster with ovirt-engine and created the volume with a 
couple of bricks in a replicating cluster.

I will eventually be using some of this anyway with oVirt VMs, but I'm also 
interested in using glusterfs and cifs and nfs and hopefully iSCSI block to 
other devices… but I couldn't find out what the export of the glusterfs 
server:/path was. Is this something I have to set up in addition to the 
gluster cluster? I thought I had read something that you need to set it up, 
but then couldn't find it with all the different documents and windows I had 
open, and it was late. So hoping someone can point me in the right direction 
for this.

Thank you.

Bill



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users





___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] gluster export

2015-04-08 Thread Patrick Russell
Hi Bill,
You do need to create the export. Here are the steps we noted from our build. 
I'll try and find where we got some of the build ideas from.

# create the brick directories on each node, plus a mount point for the lock volume
mkdir -p /gluster/{data,engine,meta,xport}/brick ; mkdir /mnt/lock

# create four replica-3 volumes, one brick per node
# (hostname01-03 stand in for the three gluster node names)
for i in data engine meta xport ; do gluster volume create ${i} replica 3 
hostname01:/gluster/${i}/brick hostname02:/gluster/${i}/brick 
hostname03:/gluster/${i}/brick ; done

gluster volume status

# apply the oVirt virt option group and vdsm/kvm ownership (uid/gid 36), then start
for i in data engine meta xport ; do gluster volume set ${i} group virt ; done
for i in data engine meta xport ; do gluster volume set ${i} storage.owner-uid 
36 ; gluster volume set ${i} storage.owner-gid 36 ; done
for i in data engine meta xport ; do gluster volume start ${i} ; done

-Patrick

On Apr 8, 2015, at 8:32 PM, Bill Dossett bill.doss...@pb.com wrote:

Hi,

Kind of a stupid question, but I couldn't figure it out today.

I set up a gluster cluster with ovirt-engine and created the volume with a 
couple of bricks in a replicating cluster.

I will eventually be using some of this anyway with oVirt VMs, but I'm also 
interested in using glusterfs and cifs and nfs and hopefully iSCSI block to 
other devices… but I couldn't find out what the export of the glusterfs 
server:/path was. Is this something I have to set up in addition to the 
gluster cluster? I thought I had read something that you need to set it up, 
but then couldn't find it with all the different documents and windows I had 
open, and it was late. So hoping someone can point me in the right direction 
for this.

Thank you.

Bill



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Automating oVirt Windows Guest Tools installations

2015-06-17 Thread Patrick Russell
Thank you Lev for the clarification. We had been installing manually via the 
ISO, but I had misread some other articles about using Python to automate the 
process.

I will pass on the notes around /S and your article to our internal windows 
team. Maybe they have some ideas around the cert store, or at the very least 
pass off the manual instructions to our NOC.

I appreciate the response,
Patrick

On Jun 17, 2015, at 2:55 PM, Lev Veyde lve...@gmail.com wrote:

Hi Patrick,

First of all, let's clear up some misunderstanding here - you don't need to 
manually install Python.
The installation of oVirt WGT is fully self-contained, and while the oVirt 
Guest Agent it includes is indeed programmed in Python, the version included is 
converted using py2exe (check py2exe.org for more details if it interests you) 
into a standalone executable (well, almost - just like the Windows version of 
Python.exe, it depends on the Microsoft Visual Studio C runtime, but we install 
that during the installation of the oVirt WGT).

Now about the automated installation. Generally we support silent installation 
of oVirt WGT: you just need to supply the /S command-line parameter to the 
installer.
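
For example, a silent run looks something like this (a sketch - the installer 
filename below is the one shipped on recent WGT ISOs and may differ between 
releases):

ovirt-guest-tools-setup.exe /S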
However, there is a catch: unfortunately, Windows will pop up warning messages 
because the supplied drivers are non-WHQL'd. That is because the drivers are 
signed with a Red Hat, Inc. certificate rather than a Microsoft one.

This is a security feature of the Windows OS itself, and there is not much we 
can do about it right now.
The side effect is that you need to manually approve the installation of each 
driver, or choose to trust all drivers from Red Hat, Inc., after which no more 
popups will show up. Unfortunately, you still need to do this manually at least 
once, and you can't pre-approve Red Hat, Inc. to make this process automated. 
For more information on installing oVirt WGT you can 
check this article: 
http://community.redhat.com/blog/2015/05/how-to-install-and-use-ovirts-windows-guest-tools/
 by yours truly.

There is a workaround though, and it's to create a program that will 
automatically approve such unsigned-driver dialogs. It's relatively easy to 
program with, e.g., the AutoIt scripting engine (check: 
https://www.autoitscript.com/site/autoit/ ), which is free (as in free beer, 
but unfortunately not as in freedom, because its source code is not supplied). 
Note that you must be quite careful with that: by doing so you are basically 
disabling a security mechanism that Microsoft put in place for a reason, and 
you may unintentionally install other non-WHQL'd drivers - if an installation 
attempt for those other drivers is made while your auto-approver program is 
running.

Thanks in advance,
Lev Veyde.

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] windows guest tools install

2015-06-17 Thread Patrick Russell
Hi all,

We've got a large migration in progress for a Windows (2k3, 2k8, and 2k12) 
environment from VMware to oVirt. Does anyone have any suggestions for an 
unattended ovirt-tools install? Our Windows team has pretty much shot down 
installing Python on their VMs. Are there any flags we can pass to the 
installer to just accept the defaults? Any other suggestions?

Thanks,
Patrick

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Dell R720 iDRAC7 config IPMI

2015-06-01 Thread Patrick Russell
Nicolas,

We have newer Dell hardware working with the following settings:

type: drac5
slot:
options: cmd_prompt=
secure: checked


Works fine for us. Even on the new Dell FC630s this configuration is working.
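
For reference, the same settings can be tested from a shell with the 
fence_drac5 agent before wiring them into the engine - a sketch with 
placeholder address and credentials ("secure: checked" corresponds to 
connecting over ssh):

fence_drac5 --ip=IDRAC_ADDRESS --username=root --password=PASSWORD \
    --ssh --action=status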

-Patrick

 On Jun 1, 2015, at 2:12 PM, Nicolas Ecarnot nico...@ecarnot.net wrote:
 
 On 01/06/2015 20:44, Juan Carlos YJ. Lin wrote:
 Need help configuring Dell R720 IPMI; tested with drac5 and drac7, but all
 result in unknown status
 
 Options : lanplus=1
 
 -- 
 Nicolas Ecarnot
 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Benefits of Cluster separation

2015-08-18 Thread Patrick Russell
Can I ask at what scale you're running into issues? We've got about 500 VMs 
running now in a single cluster.

-Patrick

On Aug 18, 2015, at 4:03 PM, Matthew Lagoe matthew.la...@subrigo.net wrote:

You can have different cluster policies, at least; I don't know what other 
benefits there are, however, as I haven't noticed any.

From: users-boun...@ovirt.org 
[mailto:users-boun...@ovirt.org] On Behalf Of Groten, Ryan
Sent: Tuesday, August 18, 2015 01:59 PM
To: users@ovirt.org
Subject: [ovirt-users] Benefits of Cluster separation

We're running into some performance problems stemming from having too many 
Hosts/VMs/Disks running from the same Datacenter/Cluster. Because of that I'm 
looking into splitting the DC into multiple separate ones with different 
Hosts/Storage.
But I'm a little confused about what separating hosts into clusters achieves. 
Can someone please explain what the common use cases are? Since all the 
clusters in a DC seem to need to see the same storage, I don't think it would 
help my situation anyway.

Thanks,
Ryan
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] virt-v2v

2015-07-31 Thread Patrick Russell
We didn't use the iso at all. If you have vCenter, try something like this 
(note the use of vpx://, etc.):

virt-v2v -ic 
vpx://username@$vcenter_hostname/$DataCenterName/$ClusterName/$esxiHostName?no_verify=1
 $VMName -o rhev -os $EXPORT_DOMAIN --bridge $NetworkNameinOvirt
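
Filled in with purely hypothetical names, that looks something like:

virt-v2v -ic 
vpx://administrator@vcenter.example.com/DC1/Cluster1/esxi01.example.com?no_verify=1
 myvm -o rhev -os nfs.example.com:/export/domain --bridge ovirtmgmt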

Here’s our versions, using centos 7:

virt-v2v-1.28.1-1.28.el7.x86_64
qemu-kvm-common-rhev-2.1.2-13.el7.x86_64
qemu-img-rhev-2.1.2-13.el7.x86_64
qemu-kvm-rhev-2.1.2-13.el7.x86_64

-Patrick


On Jul 31, 2015, at 9:17 AM, Will K yetanotherw...@yahoo.com wrote:

Thank you guys.  It is good to hear that it works well.  I'll check out the 
URLs provided.

- My source is an ESXi 5.5 setup with vCenter.
- I think I'll try the ISO option.

The problem seems to be with SASL.  Even if I give it a fake vCenter hostname 
or a non-existent virtual machine name, it still gives me the same error with 
SASL(-7).

I have ...
virt-v2v-0.9.1-5.el6_5.x86_64
qemu-kvm-0.12.1.2-2.415.el6_5.3.x86_64
qemu-img-0.12.1.2-2.415.el6_5.3.x86_64
qemu-kvm-tools-0.12.1.2-2.415.el6_5.3.x86_64

Will




On Friday, July 31, 2015 7:35 AM, Dan Kenigsberg dan...@redhat.com wrote:


On Thu, Jul 30, 2015 at 09:57:46PM +, Will K wrote:

 Hi

 I have a project to convert VMs on VMware ESX 5.5 and some VMs on a standalone
 KVM server to oVirt.  I started to look into virt-v2v.  I wonder if I'm
 hitting the right list.  Please let me know if this list doesn't cover
 virt-v2v.
 Issue: when I run the following command on one of two hosts running oVirt
 3.3.3-2-el6:
 virt-v2v  -ic esx://esxserver1/?no_verify=1 -os GFS1 virtmachine1
 I got: virt-v2v: Failed to connect to qemu:///system: libvirt error code: 45,
 message: authentication failed: Failed to step SASL negotiation: -7
 (SASL(-7): invalid parameter supplied: Unexpectedly missing a prompt result)
 I already added a .netrc with 600 permissions and the correct oVirt login and
 password. I also ran saslpasswd2 as root already.
 Thanks
 Will


This, too, does not answer your question, but please be aware of the virt-v2v 
integration feature (http://www.ovirt.org/Features/virt-v2v_Integration) 
coming up in ovirt-3.6.



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Storage network question

2015-07-30 Thread Patrick Russell
We just changed this up a little this week. We split our traffic into 2 bonds, 
10Gb, mode 1, as follows:

Guest vlans, management vlan (including some NFS storage) - bond0
Migration (layer 2 only) vlan - bond1

This allowed us to tweak vdsm.conf to speed up migrations without impacting 
management and guest traffic. As a result we're currently pushing about 5Gb/s 
on bond1 when we do live migrations between hosts.
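
For reference, the migration knobs live in the [vars] section of 
/etc/vdsm/vdsm.conf - a sketch with illustrative values (which options and 
values to use is setup-specific; vdsmd needs a restart to pick changes up):

[vars]
# per-migration bandwidth cap, in MiB/s
migration_max_bandwidth = 500
# concurrent outgoing live migrations allowed per host
max_outgoing_migrations = 5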

-Patrick

 On Jul 28, 2015, at 1:34 AM, Alan Murrell li...@murrell.ca wrote:
 
 Hi Patrick,
 
 On 27/07/2015 7:25 AM, Patrick Russell wrote:
 We currently have all our nics in the same bond. So we have guest
 traffic, management,  and storage running over the same physical
 nics, but different vlans.
 
 Which bond mode do you use, out of curiosity?  Not sure I would go to this 
 extreme, though; I would still want the physical isolation of Management vs. 
 network/VM traffic vs. storage, but just curious which bonding mode?
 
 Modes 1 and 5 would seem to be the best ones, as far as maximising 
 throughput.  I read an article just the other day where a guy detailed how he 
 bonded four 1Gbit NICs in mode 1 (with each on a different VLAN) and was able 
 to achieve 320MB/s throughput to NFS storage.
 
 As far as the storage question, I like to put other storage on the network 
 (smaller NAS devices, maybe SANs for other storage) and would want the VMs to 
 be able to get at those.  Being able to use a NIC to carry VM traffic for 
 storage as well as for host access to storage would cut down on the number of 
 NICs I would need to have in each node.
 
 -Alan
 

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] virt-v2v

2015-07-31 Thread Patrick Russell
Will,

Is this ESXi, or ESX with vCenter? If you have regular ESX you need to connect 
through vCenter first. We've migrated over 500 VMs (a Windows and Linux mix) 
using virt-v2v. Some of the Windows VMs did require us to use a testing repo 
for libvirt on our V2V box.

-Patrick

On Jul 30, 2015, at 5:31 PM, Groten, Ryan ryan.gro...@stantec.com wrote:

This doesn't answer your question directly, but I never had any luck using 
virt-v2v from VMware.  I found it worked well to treat the VMware VM just like 
a physical server: boot it from the virt-v2v iso and convert the VMware VM 
that way.

From: users-boun...@ovirt.org 
[mailto:users-boun...@ovirt.org] On Behalf Of Will K
Sent: Thursday, July 30, 2015 3:58 PM
To: users@ovirt.org
Subject: [ovirt-users] virt-v2v

Hi

I have a project to convert VMs on VMware ESX 5.5 and some VMs on a standalone 
KVM server to oVirt.  I started to look into virt-v2v.  I wonder if I'm hitting 
the right list.  Please let me know if this list doesn't cover virt-v2v.

Issue:
when I run the following command on one of two hosts running oVirt 3.3.3-2-el6:
virt-v2v  -ic esx://esxserver1/?no_verify=1 -os GFS1 virtmachine1

I got:
virt-v2v: Failed to connect to qemu:///system: libvirt error code: 45, message: 
authentication failed: Failed to step SASL negotiation: -7 (SASL(-7): invalid 
parameter supplied: Unexpectedly missing a prompt result)

I already added a .netrc with 600 permissions and the correct oVirt login and 
password. I also ran saslpasswd2 as root already.

Thanks

Will


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Storage network question

2015-07-27 Thread Patrick Russell
Alan,

We currently have all our NICs in the same bond. So we have guest traffic, 
management, and storage running over the same physical NICs, but different 
vlans.

Hope this helps,
Patrick

 On Jul 26, 2015, at 4:38 AM, Alan Murrell li...@murrell.ca wrote:
 
 If I am using a NIC on my host on the storage network to access storage
 (NFS, iSCSI, Gluster, etc.), is there any issue with allowing VMs to be
 assigned to it so they can access the same storage network?
 
 (the VMs would have a NIC added specifically for this, of course)
 
 Basically, unlike in VMware and Hyper-V, in oVirt can the same NIC be
 used for host and VM access to the storage network?
 
 Thanks! :-)
 
 -Alan
 
 
 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Servers Hang at 100% CPU On Migration

2015-08-24 Thread Patrick Russell
We had this exact issue on that same build. Upgrading to oVirt Node - 3.5 - 
0.999.201507082312.el7.centos made the issue disappear for us. It was one of 
the 3.5.3 builds.

Hope this helps.

-Patrick

 On Aug 19, 2015, at 1:15 PM, Chris Jones - BookIt.com Systems Administrator 
 chris.jo...@bookit.com wrote:
 
 oVirt Node - 3.5 - 0.999.201504280931.el7.centos
 
 When migrating servers using an iSCSI storage domain, about 75% of the time 
 they will become unresponsive and stuck at 100% CPU after migration. This 
 does not happen with direct LUNs, however.
 
 What causes this? How do I stop it from happening?
 
 Thanks
 
 -- 
 This email was Virus checked by UTM 9. For issues please contact the Windows 
 Systems Admin.
 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] schedule a VM backup in ovirt 3.5

2015-10-23 Thread Patrick Russell
Using ovirt-node EL7 we’ve been able to live merge since 3.5.3 without any 
issues. 

-Patrick

> On Oct 23, 2015, at 12:24 AM, Christopher Cox  wrote:
> 
> On 10/22/2015 10:46 PM, Indunil Jayasooriya wrote:
> ...
>> 
>> Hmm,
>> 
>> How to list the snapshot?
>> How to back up the VM with a snapshot?
>> Finally, how to remove this snapshot?
>> 
>> Then I think it will be OVER. Yesterday I tried a lot, but with no success.
>> 
>> Hope to hear from you.
> 
> Not exactly "help", but AFAIK, even with 3.5, there is no live merging of 
> snapshots, so they can't be deleted unless the VM is down.  I know that for 
> large snapshots that have been around for a while, removing them can take 
> some time too.
> 
> Others feel free to chime in...
> 
> 
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] integrate iSCSI and FC on the same oVirt datacenter

2015-10-19 Thread Patrick Russell
We use FCoE in our setup. All the configs are in /etc/fcoe/ and fcoeadm is part 
of the ovirt-node iso. So this should work the same as setting up FCoE on 
CentOS or RHEL.

https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Storage_Administration_Guide/fcoe-config.html
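
For what it's worth, the per-interface setup follows the stock fcoe-utils flow 
from that guide - a sketch, assuming the FCoE-facing NIC is eth2 and the 
CNA/switch pair handles DCB:

# create a config for the FCoE interface from the shipped template
cp /etc/fcoe/cfg-ethx /etc/fcoe/cfg-eth2
# if the CNA does DCB in firmware, set DCB_REQUIRED="no" in cfg-eth2
service lldpad start
service fcoe start
# verify the interface and discovered targets
fcoeadm -i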

-Patrick

On Oct 19, 2015, at 2:34 AM, Kapetanakis Giannis wrote:

On 18/10/15 16:44, Nir Soffer wrote:
I still don't understand the problem you are trying to solve.

Can you explain the network topology?

- Do you have FC storage server, switch?
- How many nodes to do you have with FC HBA?

Do you want to add nodes without FC HBA, and you want to consume
the FC storage?

Or you want to add nodes with FC HBA, but you don't have an FC switch?

Nir


I have FC storage, FC switch and and with FC HBAs.

I want to add new nodes to the FC storage and to the ovirt datacenter without 
using FC directly (no HBA, no switch).

G
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] oVirt Node

2015-10-20 Thread Patrick Russell
We had this same issue after 3.5.2 and I forget the reason why the official 
build hasn’t been released, but we were pointed in the direction of using the 
nightly build from Jenkins:

http://jenkins.ovirt.org/job/ovirt-node_ovirt-3.5_create-iso-el7_merged/

I can vouch that ovirt-node-iso-3.5-0.999.201510062311.el7.centos is stable.

-Patrick

On Oct 20, 2015, at 4:53 AM, Massimo Mad wrote:

Hi,
I want to migrate my hosts from CentOS 6.x to 7.x and at the same time move 
from a normal CentOS installation with oVirt installed on top to oVirt Node.
But I noticed that the latest release of oVirt Node is not always available; 
for example, the latest version of oVirt Node is 3.5.2 while the latest 
version of oVirt is 3.5.4.
Where can I find the latest release of oVirt Node, or is there a kickstart to 
create the hosts?
Regards
Massimo
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] integrate iSCSI and FC on the same oVirt datacenter

2015-10-20 Thread Patrick Russell
You need a NIC that supports it. We are currently using the Broadcom 
Corporation BCM57840 NetXtreme II 10 Gigabit Ethernet adapter, but multiple 
vendors sell converged network adapters that should work.

You’ll also need to verify that your switch supports FCoE. This is where you’ll 
connect up to your SAN fabric.

If you're in the market for new compute blades as well, you can go the route 
we did. The Dell FX2 platform has a couple of 4-port FCoE/network switches in 
the back of each chassis, so you can cable up directly to the fabric without 
changing your top-of-rack switches to FCoE. Here's some more information:

http://www.dell.com/us/business/p/poweredge-fx/pd

Specifically the FN2210S IO Module for that chassis:

http://www.dell.com/us/business/p/fn-io-aggregator/pd


-Patrick

On Oct 20, 2015, at 7:23 AM, Kapetanakis Giannis 
<bil...@edu.physics.uoc.gr> wrote:

On 19/10/15 18:14, Patrick Russell wrote:
We use FCoE in our setup. All the configs are in /etc/fcoe/ and fcoeadm is part 
of the ovirt-node iso. So this should work the same as setting up FCoE on 
CentOS or RHEL.

https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Storage_Administration_Guide/fcoe-config.html

-Patrick


Do you use special hardware (switch or nics) for FCoE?

G

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] 3.5.3 node iso

2015-07-09 Thread Patrick Russell
Any chance that 3.5.3 EL7 node will get a release soon?

-Patrick
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] 3.5.3 node iso

2015-07-09 Thread Patrick Russell
Great, thanks Fabian!

-Patrick

 On Jul 9, 2015, at 1:28 PM, Fabian Deutsch fdeut...@redhat.com wrote:
 
 - Original Message -
 Any chance that 3.5.3 EL7 node will get a release soon?
 
 Hey Patrick,
 
  currently we are not publishing official isos to resources.ovirt.org; 
  instead, please
  use the ISOs built in our CI:
 
 http://jenkins.ovirt.org/job/ovirt-node_ovirt-3.5_create-iso-el7_merged/
 
  This job produces nightly builds (to pick up the latest CentOS 7 
  packages),
 with the latest stable oVirt packages (from 3.5.3 currently).
 
 - fabian
 
 P.s.: I just double-checked that the build really contains the latest updates
 (ovirt-node-3.2.3-11.el7.centos.noarch)

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Is a dedicated oVirt mgmt VLAN still needed for oVirt host nodes?

2015-09-15 Thread Patrick Russell
It will all work using different VLAN tags on the same physical NICs. At least 
in 3.5.x that's the case; we don't have a 3.4.x install, so I can't speak to 
that. You'll want to watch your NFS and migration traffic, though. Make sure 
you don't overrun the bandwidth for management traffic or you're going to have 
a bad day.

-Patrick
 
> On Sep 14, 2015, at 10:08 AM, c...@endlessnow.com wrote:
> 
> We have an oVirt environment that I inherited.  One is running 3.4.0 and
> one is running 3.5.0.
> 
> It seems in both cases the prior administrator stated that a dedicated VLAN
> was necessary for oVirt mgmt.  That is, we could not run multiple tagged
> VLANs on a nic for a given oVirt host node.
> 
> Does any of this make sense?  Is this true?  Is it still true for more
> contemporary versions of oVirt?
> 
> My problem is that our nodes are blades and I only have two physical nics
> per blade.  In our network for redundancy we need to have the two nics
> have the same VLANs so that things failover ok.  Which means we have to
> share the oVirt mgmt network on the same wire.  That's the ideal.
> 
> Currently we have a whole nic on the blade just for oVirt management.  Is
> this a requirement?
> 
> 
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Hosted Engine

2016-01-06 Thread Patrick Russell
Put the host it’s on into maintenance mode in the GUI. It will migrate to 
another HE host automatically.


-Patrick

From: Budur Nagaraju
Date: Wednesday, January 6, 2016 at 4:20 PM
To: Maor Lipchuk
Cc: users
Subject: Re: [ovirt-users] Hosted Engine


Yes, the HE VM is available in the GUI but unable to migrate; is there a way 
to resolve the issue?

On Jan 6, 2016 8:23 PM, "Maor Lipchuk" wrote:
Basically, if your HE VM appears in the GUI you can live migrate it to another 
host in the same cluster.
Roy, please correct me if I'm wrong.

Regards,
Maor



- Original Message -
> From: "Budur Nagaraju" >
> To: "users" >
> Sent: Wednesday, January 6, 2016 11:23:48 AM
> Subject: [ovirt-users] Hosted Engine
>
> can we migrate a Engine which is pushed by Hosted Engine to the other oVirt
> node ? which is in same cluster ?
>
> Thanks,
> Nagaraju
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] import ovirt vms to esx

2016-01-07 Thread Patrick Russell
You could try VMware Converter, but that's probably a better question for 
VMware.

-Patrick

From: alireza sadeh seighalan
Date: Thursday, January 7, 2016 at 2:37 PM
To: "users@ovirt.org" 
>
Subject: [ovirt-users] import ovirt vms to esx

hi friends

is there any solution for importing ovirt VMs to VMware ESX? Thanks in advance
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Hosted Engine Woes

2016-06-16 Thread Patrick Russell
We had some funkiness with hosted-engine, and the steps Simone suggested are
essentially what we went through to get it all back to normal. Just remember
to be patient; it seems the agent can take some time to poll all the hosts.
-Patrick

On Wed, Jun 15, 2016 at 4:12 PM, Nic Seltzer <nselt...@riotgames.com> wrote:

> Has anyone else experienced a similar issue? Is the advised action to
> reboot the hosted-engine? I defer to the expertise in this mailing list
> so that I might help others.
>
> Thanks,
>
> On Tue, Jun 14, 2016 at 3:11 PM, Simone Tiraboschi <stira...@redhat.com>
> wrote:
>
>> On Tue, Jun 14, 2016 at 8:45 PM, Nic Seltzer <nselt...@riotgames.com>
>> wrote:
>> > Hello!
>> >
>> > I'm looking for someone who can help me out with a hosted-engine setup
>> that
>> > I have. I experienced a power event a couple of weeks ago. Initially,
>> things
>> > seemed to have come back fine, but the other day, I noticed that one of
>> the
>> > nodes for the cluster was down. I tried to drop it into maintenance mode
>> > (which never completed) and reboot it, then "Confirm the Host has been
>> > rebooted". Neither of these steps allowed the host to re-enter the
>> cluster.
>> > Has anyone encountered this? At this point, I'd like to reboot the
>> > hosted-engine, but I can't find documentation instructing me on "how".
>> I'm
>>
>> hosted-engine --set-maintenance --mode=global
>> hosted-engine --vm-shutdown
>> hosted-engine --vm-status # poll till the VM is down
>> hosted-engine --vm-start
>> hosted-engine --set-maintenance --mode=none
>>
>> > also open to other suggestions or references to documentation that will
>> help
>> > triage my issue.
>> >
>> > Thanks!
>> >
>> >
>> >
>> > nic
>> >
>> > ___
>> > Users mailing list
>> > Users@ovirt.org
>> > http://lists.ovirt.org/mailman/listinfo/users
>> >
>>
>
>
>
> --
> Nic Seltzer
> Esports Ops Tech | Riot Games
> Cell: +1.402.431.2642 | NA Summoner: Riot Dankeboop
> http://www.riotgames.com
> http://www.leagueoflegends.com
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>


-- 
Patrick Russell | Manager, Private Cloud Infrastructure
512.605.2378 |  patrick.russ...@volusion.com
www.volusion.com | www.material.com

Volusion, Inc. | More successful businesses are built here.

This email and any attached files are intended solely for the use of the
individual(s) or entity(ies) to whom they are addressed, and may contain
confidential information. If you have received this email in error, please
notify me immediately by responding to this email and do not forward or
otherwise distribute or copy this email.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Host maintenance - VM migration logic

2016-02-04 Thread Patrick Russell
Is there any way to migrate VMs more evenly across the cluster when a host is 
being placed into maintenance? Currently it attempts to auto-migrate all the 
VMs to another single host and then balance out. When the destination host is 
more than 50% memory-utilized, this has caused oversubscription problems. Some 
of our more heavily used hosts end up using all the memory and stop 
communicating with the engine. If it's not possible, how are other teams 
handling this? Manual migrations before maintenance mode?
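
If it comes down to scripting the manual approach, the REST API can at least 
target an explicit destination host per VM - a rough sketch, with hypothetical 
engine URL, credentials, and UUIDs:

# migrate one VM to a chosen host before flipping maintenance
curl -k -u admin@internal:PASSWORD -H "Content-Type: application/xml" \
    -d '<action><host id="DESTINATION_HOST_UUID"/></action>' \
    https://engine.example.com/api/vms/VM_UUID/migrate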

Thanks,
Patrick

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users