Re: [Users] too much debugging in ovirt-node

2013-07-01 Thread Martin Kletzander
On 06/28/2013 12:49 PM, Matt . wrote:
 I also have libvirt logs taking about 10GB of space; annoying, needs to be
 fixed.
 

Please don't top-post on technical lists.

However, the same reply applies to you.  Check your log_level and
log_filters settings.  I just installed a fresh server and, looking at the
libvirtd.conf, VDSM properly configured it to also have:

log_filters="1:libvirt 3:event 3:json 1:util 1:qemu"

where "3:event 3:json" gets rid of a lot of unnecessary messages (but
beware, it depends on the order in which those directives are
specified).  You can also have a look at what we discussed in Bug
920614 [1], where you might find interesting settings for that as well.
There is also a link to the Gerrit change where the issue is tracked [2].

Logs should also be rotated.  If there is no logrotate daemon, or no
rules for it, that may be another point of failure.
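
For reference, this is roughly what I would expect to end up with -- a sketch
only, the exact values VDSM writes may differ between versions:

/etc/libvirt/libvirtd.conf:

    log_level = 3
    log_filters="1:libvirt 3:event 3:json 1:util 1:qemu"
    log_outputs="3:file:/var/log/libvirtd.log"

And a bare-bones logrotate rule in case none is shipped (adjust retention to
taste):

/etc/logrotate.d/libvirtd:

    /var/log/libvirtd.log {
        weekly
        rotate 4
        compress
        missingok
        copytruncate
    }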

Martin

[1] https://bugzilla.redhat.com/920614
[2] http://gerrit.ovirt.org/#/c/13642/

 
 2013/6/28 Martin Kletzander mklet...@redhat.com
 
 On 06/26/2013 07:47 PM, Winfried de Heiden wrote:
 Hi all,

 Using ovirt-node-iso-2.6.1-20120228.fc18.iso (2012 seems to be a typo,
 must be 2013?), logging is done with far too much debugging.

 Changing all the DEBUG entries to WARNING in /etc/vdsm/logger.conf and
 persisting /etc/vdsm/logger.conf solved it for /var/log/vdsm/vdsm.log.

 However, /var/log/libvirtd.log also shows tons of debug messages. The
 file /etc/libvirt/libvirtd.conf shows:

 listen_addr="0.0.0.0"
 unix_sock_group="kvm"
 unix_sock_rw_perms="0770"
 auth_unix_rw="sasl"
 host_uuid="06304eff-1c91-4e1e-86e2-d773621dcab3"
 log_outputs="1:file:/var/log/libvirtd.log"
 ca_file="/etc/pki/vdsm/certs/cacert.pem"
 cert_file="/etc/pki/vdsm/certs/vdsmcert.pem"
 key_file="/etc/pki/vdsm/keys/vdsmkey.pem"

 Changing log_outputs="1:file:/var/log/libvirtd.log" to
 log_outputs="3:file:/var/log/libvirtd.log"

 with persist (or unpersist first, then persist) doesn't help. After a
 reboot, log_outputs="1:file:/var/log/libvirtd.log" will appear again.

 How to decrease the log level for libvirtd?


 Check also the log_level and log_filters settings.  You can cut down the
 logs a lot, but beware that if something happens...  You probably
 know.  The advantage of using something on top of libvirt (oVirt for
 example) is that you will probably be able to reproduce libvirt problems
 if there are any, and then you can switch the debugging back on.

 For full options on logging, see http://libvirt.org/logging.html

 Martin
 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users

 

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Using python sdk to find the VMs running on a host.

2013-07-01 Thread Deepthi Dharwar
On 06/25/2013 04:04 PM, Michael Pasternak wrote:
 On 06/25/2013 09:28 AM, Itamar Heim wrote:
 On 06/25/2013 07:54 AM, Deepthi Dharwar wrote:
 Hi,

 I am using the ovirt-python-sdk to figure out whether a host is idle or
 not. The way to determine if the host is idle at a given instant is to
 find out the number of VMs running on it: if the number of VMs is 0, the
 host is idle.

 adding michael for the rest of the question, but please note to check for
 the SPM role of a host before assuming it is idle.

 also, may I ask what you are trying to accomplish (it sounds like a power
 saving policy)?


 I was exploring the python sdk to figure out the Host-VMs mapping, i.e. what
 VMs are running on the different hosts.

 It looks like the only way to find this is to query each VM in the /api/vms
 list to get the host on which it is running.

 Is this the right way? Is there no direct query or REST API to list
 the VMs running on a given host at that instant?
 
 you can run query="host = X" for that.
 

Thanks! I was able to successfully come up with the hierarchy.


 I was looking to get the data center hierarchy structure:

 Number_of_datacenters
     |
     V
 Clusters in each data center
     |
     V
 Hosts in each cluster
     |
     V
 VMs on each host

 This is the kind of mapping seen in the GUI. Is there any way to obtain
 the same from the ovirt-python-sdk?
 
 you can combine queries to fetch VMs by dc+cluster+host:

 query = "datacenter = x and cluster = y and host = z"

 you can also fetch the hosts of a given cluster:

 cluster = x

 and the same for the clusters of a data center:

 Datacenter.name = x
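
A minimal sketch of those queries with the Python SDK (the engine URL,
credentials and the host/cluster/DC names below are just placeholders):

from ovirtsdk.api import API

# Placeholders -- substitute your own engine URL and credentials.
api = API(url='https://engine.example.com/api',
          username='admin@internal', password='secret', insecure=True)

# VMs running on a given host; an empty list means the host is idle.
vms_on_host = api.vms.list(query='host = myhost01 and status = up')

# Hosts in a given cluster, and clusters in a given data center.
hosts = api.hosts.list(query='cluster = mycluster')
clusters = api.clusters.list(query='datacenter.name = mydc')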
 

 With this information, this would help me write scripts to turn-off my
 hosts if idle automatically and power them on as required.

 Regards,
 Deepthi



 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users


 
 

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[Users] Putting the host in maintenance mode via python SDK

2013-07-01 Thread Deepthi Dharwar
Hi,

I am trying to switch off hosts in the DC which are idle.

I wanted to know why the start/stop host options are enabled in the
Power Mgmt tab only when the host is in Maintenance mode and not otherwise.

Also, is there any way via the Python SDK to put the host in maintenance mode?

Regards,
Deepthi

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Putting the host in maintenance mode via python SDK

2013-07-01 Thread Vladimir Vyazmin
Hi,

There is a way via the Python SDK to put the host in maintenance mode:

hostObject = API.hosts.get("fakeHostNames")

turboMaintenanceHost(hostObject)

# Turbo Maintenance Host #
def turboMaintenanceHost(vmObject):
    """maintenance Host"""
    if vmObject.status.state == 'up':
        vmObject.deactivate()
        LOGGER.info(TimestampScale() + "Object '%s' maintenance." % vmObject.name)
    else:
        LOGGER.warning(TimestampScale() + "Failed maintenance '%s' Object." % vmObject.name)
    return True
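
A possible follow-up sketch: poll until the host actually reaches the
'maintenance' state before powering it off (the timeout and host name below
are illustrative, not from a real setup):

import time

def waitForMaintenance(api, hostName, timeout=300):
    # Poll the host state until it reports 'maintenance' or the timeout expires.
    deadline = time.time() + timeout
    while time.time() < deadline:
        host = api.hosts.get(hostName)
        if host.status.state == 'maintenance':
            return True
        time.sleep(5)
    return False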





- Original Message -
 From: Deepthi Dharwar deep...@linux.vnet.ibm.com
 To: users@ovirt.org, Michael Pasternak mpast...@redhat.com
 Sent: Monday, July 1, 2013 2:13:09 PM
 Subject: [Users] Putting the host in maintenance mode via python SDK
 
 Hi,
 
 I am trying to switch off Hosts in DC which are idle.
 
 I wanted to know as to why start/stop of hosts options are enabled in
 Power Mgmt tab only when the host is in Maintenance mode and not otherwise.
 
 Also, is there any way via pythonSDK to put the host in maintenance mode ?
 
 Regards,
 Deepthi
 
 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users
 

-- 

Have a nice day ☺☻☺☻☺!
Thank you and best regards

Vladimir Vyazmin
Mobile:  +972 (0) 50 277 2707
Phone:   +972 (0) 9 769 2346
Email:   vvyaz...@redhat.com
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] oVirt 3.2.2 successfully connected to Samba4

2013-07-01 Thread Juan Jose
Hello everybody,

Thanks, Gianluca, for sharing your experience. I have now installed and
configured Samba 4.0.6 on a Debian 7 stable distro, and I'm at the step of
importing all my users from my production OpenLDAP + Samba 3 server to this
new server, which is now working. After that I want to join it to my oVirt
engine. I will also share my experience once I have the whole system working.

Thanks again,

Juanjo.


On Fri, Jun 28, 2013 at 4:44 PM, Charlie medieval...@gmail.com wrote:

 Excellent, Gianluca, thanks for sharing the information!
 --Charlie


 On Fri, Jun 28, 2013 at 10:19 AM, Gianluca Cecchi 
 gianluca.cec...@gmail.com wrote:

 Hello,
 in the past there were some threads related to this subject.
 Today I successfully connected my oVirt 3.2.2 (installed on f18 with
 ovirt-repo) to a CentOS 6 samba4 server.

 Basically I followed this nice page for CentOS 6, with the difference
 that I downloaded and compiled version 4.0.6 of Samba instead of
 4.0.0:

 http://opentodo.net/2013/01/samba4-as-ad-domain-controller-on-centos-6/

 One important thing is that I had to put the samba4 server IP in
 resolv.conf as the first nameserver for my engine.
 But in my case this was not a problem, because samba4 is configured
 with the original corporate DNS as a forwarder, so all is OK
 for me.

 Some commands' output

 [root@c6dc samba-4.0.6]# /usr/local/samba/bin/samba-tool domain
 provision --realm=ovtest.local --domain=OVTEST --adminpass 'X'
 --server-role=dc --dns-backend=BIND9_DLZ
 Looking up IPv4 addresses
 Looking up IPv6 addresses
 No IPv6 address will be assigned
 Setting up secrets.ldb
 Setting up the registry
 Setting up the privileges database
 Setting up idmap db
 Setting up SAM db
 Setting up sam.ldb partitions and settings
 Setting up sam.ldb rootDSE
 Pre-loading the Samba 4 and AD schema
 Adding DomainDN: DC=ovtest,DC=local
 Adding configuration container
 Setting up sam.ldb schema
 Setting up sam.ldb configuration data
 Setting up display specifiers
 Modifying display specifiers
 Adding users container
 Modifying users container
 Adding computers container
 Modifying computers container
 Setting up sam.ldb data
 Setting up well known security principals
 Setting up sam.ldb users and groups
 Setting up self join
 Adding DNS accounts
 Creating CN=MicrosoftDNS,CN=System,DC=ovtest,DC=local
 Creating DomainDnsZones and ForestDnsZones partitions
 Populating DomainDnsZones and ForestDnsZones partitions
 See /usr/local/samba/private/named.conf for an example configuration
 include file for BIND
 and /usr/local/samba/private/named.txt for further documentation
 required for secure DNS updates
 Setting up sam.ldb rootDSE marking as synchronized
 Fixing provision GUIDs
 A Kerberos configuration suitable for Samba 4 has been generated at
 /usr/local/samba/private/krb5.conf
 Once the above files are installed, your Samba4 server will be ready to
 use
 Server Role:   active directory domain controller
 Hostname:  c6dc
 NetBIOS Domain:OVTEST
 DNS Domain:ovtest.local
 DOMAIN SID:S-1-5-21-4186344073-955232896-1764362378


 [root@c6dc samba-4.0.6]# rndc-confgen -a -r /dev/urandom
 wrote key file /etc/rndc.key


 - tests
 (see also
 http://www.alexwyn.com/computer-tips/centos-samba4-active-directory-domain-controller
 )

 [root@c6dc ]# /usr/local/samba/bin/smbclient -L localhost -U%
 Domain=[OVTEST] OS=[Unix] Server=[Samba 4.0.6]

 Sharename   Type  Comment
 -     ---
 netlogonDisk
 sysvol  Disk
 IPC$IPC   IPC Service (Samba 4.0.6)
 Domain=[OVTEST] OS=[Unix] Server=[Samba 4.0.6]

 Server   Comment
 ----

 WorkgroupMaster
 ----

 [root@c6dc ntp-4.2.6p5]# host -t SRV _ldap._tcp.ovtest.local.
 _ldap._tcp.ovtest.local has SRV record 0 100 389 c6dc.ovtest.local.

 [root@c6dc ntp-4.2.6p5]# host -t SRV _kerberos._udp.ovtest.local.
 _kerberos._udp.ovtest.local has SRV record 0 100 88 c6dc.ovtest.local.


 [root@c6dc ntp-4.2.6p5]# kinit administrator@OVTEST.LOCAL
 Password for administrator@OVTEST.LOCAL:
 Warning: Your password will expire in 41 days on Fri Aug  9 13:30:59 2013

 [root@c6dc ntp-4.2.6p5]# klist
 Ticket cache: FILE:/tmp/krb5cc_0
 Default principal: administrator@OVTEST.LOCAL

 Valid starting ExpiresService principal
 06/28/13 14:55:11  06/29/13 00:55:11  krbtgt/OVTEST.LOCAL@OVTEST.LOCAL
 renew until 07/05/13 14:55:08

 User management can be done from Windows with the Samba AD management tools;
 see: http://wiki.samba.org/index.php/Samba_AD_management_from_windows

 I managed it from Linux;
 see: http://wiki.samba.org/index.php/Adding_users_with_samba_tool

 [root@c6dc ntp-4.2.6p5]# /usr/local/samba/bin/samba-tool user add
 OVIRTADM
 New Password:
 Retype Password:
 User 'OVIRTADM' created successfully

 [root@c6dc ntp-4.2.6p5]# /usr/local/samba/bin/wbinfo --name-to-sid
 OVIRTADM
 S-1-5-21-4186344073-955232896-1764362378-1104 
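
The remaining step on my side will be joining the engine to this new domain.
A hedged sketch of what I expect to run, assuming oVirt 3.2's
engine-manage-domains tool -- the exact flag spelling differs between
versions, so check engine-manage-domains --help first; OVIRTADM is the user
created above:

# Sanity-check DNS and Kerberos from the engine host first
host -t SRV _ldap._tcp.ovtest.local.
kinit OVIRTADM@OVTEST.LOCAL

# Add the domain to the engine (assumed syntax; verify with --help)
engine-manage-domains -action=add -domain=ovtest.local \
    -provider=ActiveDirectory -user=OVIRTADM -interactive

# Validate and restart the engine so the change is picked up
engine-manage-domains -action=validate
service ovirt-engine restart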

Re: [Users] Bonding - VMs Network performance problem

2013-07-01 Thread Ricardo Esteves
Hi,

Yes, I'm still experiencing this problem; in fact it just happened a few
minutes ago. :)

All MTUs are 1500.

-Original Message-
From: Livnat Peer lp...@redhat.com
To: Ricardo Esteves maverick...@gmail.com
Subject: Re: [Users] Bonding - VMs Network performance problem
Date: Sun, 23 Jun 2013 11:33:58 +0300


Hi Ricardo,
Are you still experiencing the problem described below?
Are you configuring MTU (to something other than default or 1500) for
one of the networks on the bond?

Thanks, Livnat

On 06/18/2013 05:36 PM, Ricardo Esteves wrote:
 Good afternoon,
 
 Yes, the Save network configuration box is checked; the configuration is
 persistent across boots.
 
 The problem is not the persistence of the configuration; the problem is
 that after a reboot the network performance of the VMs is very bad, and
 to fix it I need to remove the bonding and add it again.
 
 In attachment, the screenshots of my network configuration.
 
 Best regards,
 Ricardo Esteves.
 
 -Original Message-
 *From*: Mike Kolesnik mkole...@redhat.com
 *To*: Ricardo Esteves maverick...@gmail.com
 *Cc*: Users@ovirt.org
 *Subject*: Re: [Users] Bonding - VMs Network performance problem
 *Date*: Sun, 26 May 2013 04:57:43 -0400 (EDT)
 
 
 
 
 
 
 Hi,
 
 I've got ovirt installed on 2 HP BL460c G6 blades, and my VMs have
 very poor network performance (around 7.01 K/s).
 
 On the servers themselves there is no problem; I can download a file
 with wget at around 99 M/s.
 
 Then I go to the ovirt network configuration, remove the bonding and then
 make the bonding again, and the problem gets fixed (I have to do this
 every time I reboot my blades).
 
 Have you tried checking the Save network configuration check box, or
 clicking the button in the host's NICs sub-tab?
 
 This should persist the configuration that you set on the host across
 reboots..
 
 
 SERVER's software:
 CentOS 6.4 (64 bits) - 2.6.32-358.6.2.el6.x86_64
 Ovirt EL6 official rpms.
 
 Has anyone experienced this kind of problem?
 
 Best regards,
 Ricardo Esteves.
 
 
 
 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users
 
 
 
 
 
 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users
 

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Putting the host in maintenance mode via python SDK

2013-07-01 Thread Michael Pasternak
On 07/01/2013 02:37 PM, Vladimir Vyazmin wrote:
 Hi,
 
 There is a way via the Python SDK to put the host in maintenance mode:
 
 hostObject = API.hosts.get("fakeHostNames")
 
 turboMaintenanceHost(hostObject)
 
 # Turbo Maintenance Host #
 def turboMaintenanceHost(vmObject):
     """maintenance Host"""
     if vmObject.status.state == 'up':
         vmObject.deactivate()
         LOGGER.info(TimestampScale() + "Object '%s' maintenance." % vmObject.name)
     else:
         LOGGER.warning(TimestampScale() + "Failed maintenance '%s' Object." % vmObject.name)
     return True
 
 
 
 
 
 - Original Message -
 From: Deepthi Dharwar deep...@linux.vnet.ibm.com
 To: users@ovirt.org, Michael Pasternak mpast...@redhat.com
 Sent: Monday, July 1, 2013 2:13:09 PM
 Subject: [Users] Putting the host in maintenance mode via python SDK

 Hi,

 I am trying to switch off Hosts in DC which are idle.

 I wanted to know as to why start/stop of hosts options are enabled in
 Power Mgmt tab only when the host is in Maintenance mode and not otherwise.

because you don't want to kill all the VMs running on this host when switching
it off; moving the host to maintenance mode will make them migrate to other
host(s) in the cluster.

actually there are many more scenarios where you want to be sure that you can
safely switch off the host.


 Also, is there any way via pythonSDK to put the host in maintenance mode ?

 Regards,
 Deepthi

 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users

 


-- 

Michael Pasternak
RedHat, ENG-Virtualization RD
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] engine-update problem

2013-07-01 Thread Juan Pablo Lorier
Hi,

I finally decided to start from scratch. I've cleaned up the engine and
re-ran engine-setup. I was afraid that restoring the database might lead
to some inconsistencies, and I'd rather configure everything again.
Thank you all for your help, and keep up the good work.
Regards,

Juan Pablo Lorier
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] engine-update problem

2013-07-01 Thread Juan Pablo Lorier
Hi again,

It seems I was too quick to say goodbye :-).
Everything was OK while resetting all the networks and such in the 3.2
environment, until it came time to create the VMs.
I have an iSCSI target with a different LUN for each VM, and when I
created one of them and tried to start it, it failed with this error:


vdsm.log from one of the hosts:

Thread-8267::ERROR::2013-07-01
14:53:58,697::vm::716::vm.Vm::(_startUnderlyingVm)
vmId=`1084e767-1b5e-4903-af0f-c7d90528725b`::The vm start process failed
Traceback (most recent call last):
  File /usr/share/vdsm/vm.py, line 678, in _startUnderlyingVm
self._run()
  File /usr/share/vdsm/libvirtvm.py, line 1546, in _run
self._connection.createXML(domxml, flags),
  File /usr/lib64/python2.6/site-packages/vdsm/libvirtconnection.py,
line 111, in wrapper
ret = f(*args, **kwargs)
  File /usr/lib64/python2.6/site-packages/libvirt.py, line 2645, in
createXML
if ret is None:raise libvirtError('virDomainCreateXML() failed',
conn=self)
libvirtError: internal error process exited while connecting to monitor:
Supported machines are:
pc RHEL 6.4.0 PC (alias of rhel6.4.0)
rhel6.4.0  RHEL 6.4.0 PC (default)
rhel6.3.0  RHEL 6.3.0 PC
rhel6.2.0  RHEL 6.2.0 PC
rhel6.1.0  RHEL 6.1.0 PC
rhel6.0.0  RHEL 6.0.0 PC
rhel5.5.0  RHEL 5.5.0 PC
rhel5.4.4  RHEL 5.4.4 PC
rhel5.4.0  RHEL 5.4.0 PC


engine.log

2013-07-01 14:53:57,503 INFO  [org.ovirt.engine.core.bll.RunVmCommand]
(pool-3-thread-47) [5b2421ce] Running command: RunVmCommand internal:
false. Entities affected :  ID: 1084e767-1b5e-4903-af0f-c7d90528725b
Type: VM
2013-07-01 14:53:57,634 INFO 
[org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand]
(pool-3-thread-47) [5b2421ce] START,
ConnectStorageServerVDSCommand(HostName = Dell2, HostId =
d0179516-177f-4925-bdf5-d38c70a8eced, storagePoolId =
5849b030-626e-47cb-ad90-3ce782d831b3, storageType = ISCSI,
connectionList = [{ id: 478b35df-da9a-4cb0-847f-2f61f17a02f7,
connection: 192.168.130.10, iqn: iqn.1992-08.com.netapp:sn.135056614,
vfsType: null, mountOptions: null, nfsVersion: null, nfsRetrans: null,
nfsTimeo: null };{ id: 717e3505-3290-421b-bb00-25e046f63362, connection:
192.168.131.10, iqn: iqn.1992-08.com.netapp:sn.135056614, vfsType: null,
mountOptions: null, nfsVersion: null, nfsRetrans: null, nfsTimeo: null
};]), log id: 2a113886
2013-07-01 14:53:57,827 INFO 
[org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand]
(pool-3-thread-47) [5b2421ce] FINISH, ConnectStorageServerVDSCommand,
return: {717e3505-3290-421b-bb00-25e046f63362=0,
478b35df-da9a-4cb0-847f-2f61f17a02f7=0}, log id: 2a113886
2013-07-01 14:53:57,862 INFO 
[org.ovirt.engine.core.vdsbroker.CreateVmVDSCommand] (pool-3-thread-47)
[5b2421ce] START, CreateVmVDSCommand(HostName = Dell2, HostId =
d0179516-177f-4925-bdf5-d38c70a8eced,
vmId=1084e767-1b5e-4903-af0f-c7d90528725b, vm=VM [Vftp]), log id: 2c35fafa
2013-07-01 14:53:57,909 INFO 
[org.ovirt.engine.core.vdsbroker.vdsbroker.CreateVDSCommand]
(pool-3-thread-47) [5b2421ce] START, CreateVDSCommand(HostName = Dell2,
HostId = d0179516-177f-4925-bdf5-d38c70a8eced,
vmId=1084e767-1b5e-4903-af0f-c7d90528725b, vm=VM [Vftp]), log id: 4d43a09f
2013-07-01 14:53:57,983 INFO 
[org.ovirt.engine.core.vdsbroker.vdsbroker.CreateVDSCommand]
(pool-3-thread-47) [5b2421ce]
org.ovirt.engine.core.vdsbroker.vdsbroker.CreateVDSCommand
spiceSslCipherSuite=DEFAULT,memSize=2048,kvmEnable=true,smp=2,vmType=kvm,emulatedMachine=pc-0.14,keyboardLayout=en-us,pitReinjection=false,nice=0,display=qxl,smartcardEnable=false,smpCoresPerSocket=1,spiceSecureChannels=smain,sinputs,scursor,splayback,srecord,sdisplay,susbredir,ssmartcard,timeOffset=-10800,transparentHugePages=true,vmId=1084e767-1b5e-4903-af0f-c7d90528725b,devices=[Ljava.util.Map;@73c32341,acpiEnable=true,vmName=Vftp,cpuType=Westmere,custom={}
2013-07-01 14:53:57,985 INFO 
[org.ovirt.engine.core.vdsbroker.vdsbroker.CreateVDSCommand]
(pool-3-thread-47) [5b2421ce] FINISH, CreateVDSCommand, log id: 4d43a09f
2013-07-01 14:53:57,999 INFO 
[org.ovirt.engine.core.vdsbroker.CreateVmVDSCommand] (pool-3-thread-47)
[5b2421ce] IncreasePendingVms::CreateVmIncreasing vds Dell2 pending vcpu
count, now 2. Vm: Vftp
2013-07-01 14:53:58,029 INFO 
[org.ovirt.engine.core.vdsbroker.CreateVmVDSCommand] (pool-3-thread-47)
[5b2421ce] FINISH, CreateVmVDSCommand, return: WaitForLaunch, log id:
2c35fafa
2013-07-01 14:53:58,041 INFO  [org.ovirt.engine.core.bll.RunVmCommand]
(pool-3-thread-47) [5b2421ce] Lock freed to object EngineLock
[exclusiveLocks= key: 1084e767-1b5e-4903-af0f-c7d90528725b value: VM
, sharedLocks= ]
2013-07-01 14:54:00,257 INFO 
[org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand]
(DefaultQuartzScheduler_Worker-10) START, DestroyVDSCommand(HostName =
Dell2, HostId = d0179516-177f-4925-bdf5-d38c70a8eced,
vmId=1084e767-1b5e-4903-af0f-c7d90528725b, force=false, secondsToWait=0,
gracefully=false), log id: 5c10d8b7
2013-07-01 14:54:00,333 INFO 

Re: [Users] Migration of Windows

2013-07-01 Thread Rick Ingersoll
I'm really struggling with this problem.  I have the virtio 1.59 drivers 
running on the Windows guests.  What else would I need to set to get migration 
of Windows guests working?


Rick Ingersoll
IT Consultant
(919) 234-5100 main
(919) 234-5101 direct
(919) 757-5605 mobile
(919) 747-7409 fax
rick.ingers...@mjritsolutions.com
http://www.mjritsolutions.com

From: users-boun...@ovirt.org [mailto:users-boun...@ovirt.org] On Behalf Of 
Rick Ingersoll
Sent: Sunday, June 30, 2013 11:30 PM
To: users@ovirt.org
Subject: [Users] Migration of Windows

Hi,
I have an ovirt 3.1 build.  I have 3 hosts and a few virtual
machines set up that I am using for testing.  I am using gluster storage set up
as a distributed volume between the 3 hosts.  I can migrate linux guests across my 3
hosts, but I cannot migrate windows guests.  I get "Migration failed due to
Error: Fatal error during migration".  The event id is 65.  Is there something
additional that needs to be done to windows guests for them to support live
migration?

Thanks,

Rick Ingersoll
IT Consultant
(919) 234-5100 main
(919) 234-5101 direct
(919) 757-5605 mobile
(919) 747-7409 fax
rick.ingers...@mjritsolutions.com
http://www.mjritsolutions.com

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] attaching glusterfs storage domain -- method glusterServicesGet is not supported

2013-07-01 Thread Steve Dainard
Creating /var/lib/glusterd/groups/virt on each node and adding the parameters
found here:
https://access.redhat.com/site/documentation/en-US/Red_Hat_Storage/2.0/html/Quick_Start_Guide/chap-Quick_Start_Guide-Virtual_Preparation.html
solved this issue.
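
For reference, a sketch of what that group file contains -- the exact option
list should be taken from the guide above for your GlusterFS / Red Hat Storage
version; these are the usual virt-store tunables, using the option names
accepted by gluster volume set:

# /var/lib/glusterd/groups/virt  (sketch -- verify against the linked guide)
performance.quick-read=off
performance.read-ahead=off
performance.io-cache=off
performance.stat-prefetch=off
cluster.eager-lock=enable
network.remote-dio=enable

# Then apply the group to the volume:
gluster volume set vol1 group virt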




Steve Dainard
Infrastructure Manager
Miovision Technologies Inc.
Phone: 519-513-2407 x250


On Mon, Jul 1, 2013 at 4:54 PM, Steve Dainard sdain...@miovision.com wrote:

 I'm using Ovirt nightly on Fedora 18 to determine if ovirt + gluster is
 something that will work for my organization (or at least when RHEV is
 released with this functionality). I'm attempting to use the nodes for both
 virt and gluster storage.

 I've successfully created gluster bricks on two hosts, ovirt001 and
 ovirt002, and started volume vol1 through the engine web UI. I've created a
 gluster storage domain, but cannot attach it to the data center.

 *Engine UI error:*
 Failed to attach Storage Domains to Data Center Default. (User:
 admin@internal)

 *Engine /var/log/ovirt-engine/engine.log errors:*
 2013-07-01 16:29:02,650 ERROR
 [org.ovirt.engine.core.vdsbroker.gluster.GlusterServicesListVDSCommand]
 (pool-6-thread-49) Command GlusterServicesListVDS execution failed.
 Exception: VDSNetworkException: org.apache.xmlrpc.XmlRpcException: type
 'exceptions.Exception':method glusterServicesGet is not supported
 I've attached the much larger log file

 *Host
 /var/log/glusterfs/rhev-data-center-mnt-glusterSD-ovirt001\:vol1.log when
 attaching:*
 [2013-07-01 20:40:33.718871] I [afr-self-heal-data.c:655:afr_sh_data_fix]
 0-vol1-replicate-0: no active sinks for performing self-heal on file
 /b2076340-84de-45ff-9d4b-d0d48b935fca/dom_md/ids
 [2013-07-01 20:40:33.721059] W
 [client-rpc-fops.c:873:client3_3_writev_cbk] 0-vol1-client-0: remote
 operation failed: Invalid argument
 [2013-07-01 20:40:33.721104] W
 [client-rpc-fops.c:873:client3_3_writev_cbk] 0-vol1-client-1: remote
 operation failed: Invalid argument
 [2013-07-01 20:40:33.721130] W [fuse-bridge.c:2127:fuse_writev_cbk]
 0-glusterfs-fuse: 304: WRITE = -1 (Invalid argument)

 *Engine repos:*
 [root@ovirt-manager2 yum.repos.d]# ll
 total 16
 -rw-r--r--. 1 root root 1145 Dec 20  2012 fedora.repo
 -rw-r--r--. 1 root root 1105 Dec 20  2012 fedora-updates.repo
 -rw-r--r--. 1 root root 1163 Dec 20  2012 fedora-updates-testing.repo
 -rw-r--r--. 1 root root  144 Jun 21 18:34 ovirt-nightly.repo

 [root@ovirt-manager2 yum.repos.d]# cat *
 [fedora]
 name=Fedora $releasever - $basearch
 failovermethod=priority
 #baseurl=
 http://download.fedoraproject.org/pub/fedora/linux/releases/$releasever/Everything/$basearch/os/
 mirrorlist=
 https://mirrors.fedoraproject.org/metalink?repo=fedora-$releasever&arch=$basearch
 enabled=1
 #metadata_expire=7d
 gpgcheck=1
 gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-fedora-$basearch

 [fedora-debuginfo]
 name=Fedora $releasever - $basearch - Debug
 failovermethod=priority
 #baseurl=
 http://download.fedoraproject.org/pub/fedora/linux/releases/$releasever/Everything/$basearch/debug/
 mirrorlist=
 https://mirrors.fedoraproject.org/metalink?repo=fedora-debug-$releasever&arch=$basearch
 enabled=0
 metadata_expire=7d
 gpgcheck=1
 gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-fedora-$basearch

 [fedora-source]
 name=Fedora $releasever - Source
 failovermethod=priority
 #baseurl=
 http://download.fedoraproject.org/pub/fedora/linux/releases/$releasever/Everything/source/SRPMS/
 mirrorlist=
 https://mirrors.fedoraproject.org/metalink?repo=fedora-source-$releasever&arch=$basearch
 enabled=0
 metadata_expire=7d
 gpgcheck=1
 gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-fedora-$basearch
 [updates]
 name=Fedora $releasever - $basearch - Updates
 failovermethod=priority
 #baseurl=
 http://download.fedoraproject.org/pub/fedora/linux/updates/$releasever/$basearch/
 mirrorlist=
 https://mirrors.fedoraproject.org/metalink?repo=updates-released-f$releasever&arch=$basearch
 enabled=1
 gpgcheck=1
 gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-fedora-$basearch

 [updates-debuginfo]
 name=Fedora $releasever - $basearch - Updates - Debug
 failovermethod=priority
 #baseurl=
 http://download.fedoraproject.org/pub/fedora/linux/updates/$releasever/$basearch/debug/
 mirrorlist=
 https://mirrors.fedoraproject.org/metalink?repo=updates-released-debug-f$releasever&arch=$basearch
 enabled=0
 gpgcheck=1
 gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-fedora-$basearch

 [updates-source]
 name=Fedora $releasever - Updates Source
 failovermethod=priority
 #baseurl=
 http://download.fedoraproject.org/pub/fedora/linux/updates/$releasever/SRPMS/
 mirrorlist=
 https://mirrors.fedoraproject.org/metalink?repo=updates-released-source-f$releasever&arch=$basearch
 enabled=0
 gpgcheck=1
 gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-fedora-$basearch
 [updates-testing]
 name=Fedora $releasever - $basearch - Test Updates
 failovermethod=priority
 #baseurl=
 http://download.fedoraproject.org/pub/fedora/linux/updates/testing/$releasever/$basearch/
 mirrorlist=
 

Re: [Users] Putting the host in maintenance mode via python SDK

2013-07-01 Thread Deepthi Dharwar
On 07/01/2013 06:08 PM, Michael Pasternak wrote:
 On 07/01/2013 02:37 PM, Vladimir Vyazmin wrote:
 Hi,

 There is a way via the Python SDK to put the host in maintenance mode:

 hostObject = API.hosts.get("fakeHostNames")

 turboMaintenanceHost(hostObject)

 # Turbo Maintenance Host #
 def turboMaintenanceHost(vmObject):
     """maintenance Host"""
     if vmObject.status.state == 'up':
         vmObject.deactivate()
         LOGGER.info(TimestampScale() + "Object '%s' maintenance." % vmObject.name)
     else:
         LOGGER.warning(TimestampScale() + "Failed maintenance '%s' Object." % vmObject.name)
     return True


Thanks for this piece of code. Shall give it a try.




 - Original Message -
 From: Deepthi Dharwar deep...@linux.vnet.ibm.com
 To: users@ovirt.org, Michael Pasternak mpast...@redhat.com
 Sent: Monday, July 1, 2013 2:13:09 PM
 Subject: [Users] Putting the host in maintenance mode via python SDK

 Hi,

 I am trying to switch off Hosts in DC which are idle.

 I wanted to know as to why start/stop of hosts options are enabled in
 Power Mgmt tab only when the host is in Maintenance mode and not otherwise.
 
 because you don't want to kill all the VMs running on this host when
 switching it off; moving the host to maintenance mode will make them
 migrate to other host(s) in the cluster.
 
 actually there are many more scenarios where you want to be sure that you can
 safely switch off the host.

So as I understand it, issuing a STOP just halts the host but does not migrate
the VMs. That needs to be done by putting the host in maintenance mode first,
where all the running VMs are migrated and the host is disconnected from the
storage pool, after which one can shut the system down.

Maintenance is a prerequisite mode where you explicitly tell the engine
to migrate the VMs and disconnect the host from storage; it is not done as
part of the shutdown process itself.
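
A rough sketch of that flow with the Python SDK -- untested on my side; the
engine URL, credentials and host name are placeholders, and the final
power-off step assumes power management is configured and that the SDK
exposes it as host.fence() with fence_type='stop', mirroring the REST API
action:

from ovirtsdk.api import API
from ovirtsdk.xml import params
import time

api = API(url='https://engine.example.com/api',
          username='admin@internal', password='secret', insecure=True)

host_name = 'idle-host-01'                      # placeholder
host = api.hosts.get(host_name)

# 1. Ask the engine to move the host to maintenance (running VMs migrate away).
if host.status.state == 'up':
    host.deactivate()

# 2. Wait until the host actually reports 'maintenance'.
while api.hosts.get(host_name).status.state != 'maintenance':
    time.sleep(5)

# 3. Power the host off through its power management agent (assumed API, see above).
api.hosts.get(host_name).fence(params.Action(fence_type='stop'))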


 Also, is there any way via pythonSDK to put the host in maintenance mode ?

 Regards,
 Deepthi

 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users


 
 

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users