Re: [ovirt-users] VM failover with ovirt3.5

2014-12-31 Thread Artyom Lukianov
Ok I found this one:
Thread-1807180::ERROR::2014-12-30 
13:02:52,164::migration::165::vm.Vm::(_recover) 
vmId=`0d3adb5c-0960-483c-9d73-5e256a519f2f`::Failed to destroy remote VM
Traceback (most recent call last):
  File "/usr/share/vdsm/virt/migration.py", line 163, in _recover
    self.destServer.destroy(self._vm.id)
AttributeError: 'SourceThread' object has no attribute 'destServer'
Thread-1807180::ERROR::2014-12-30 13:02:52,165::migration::259::vm.Vm::(run)
vmId=`0d3adb5c-0960-483c-9d73-5e256a519f2f`::Failed to migrate
Traceback (most recent call last):
  File "/usr/share/vdsm/virt/migration.py", line 229, in run
    self._setupVdsConnection()
  File "/usr/share/vdsm/virt/migration.py", line 92, in _setupVdsConnection
    self._dst, self._vm.cif.bindings['xmlrpc'].serverPort)
  File "/usr/lib/python2.7/site-packages/vdsm/vdscli.py", line 91, in cannonizeHostPort
    return addr + ':' + port
TypeError: cannot concatenate 'str' and 'int' objects

We have a bug that is already verified for this one:
https://bugzilla.redhat.com/show_bug.cgi?id=1163771, so the patch should be
included in the latest builds, but you can also take a look at the patch, edit
the files yourself on all your machines, and restart vdsm.
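For reference, a minimal sketch of the type mismatch shown in the traceback
(this is a simplified illustration, not the actual patch from the bug):

def cannonize_host_port(addr, port):
    # The failing code did `addr + ':' + port` with port as an int;
    # coercing the port to str avoids the TypeError shown above.
    return addr + ':' + str(port)

print(cannonize_host_port('10.0.0.92', 54321))   # -> 10.0.0.92:54321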

- Original Message -
From: cong yue yuecong1...@gmail.com
To: aluki...@redhat.com, stira...@redhat.com, users@ovirt.org
Cc: Cong Yue cong_...@alliedtelesis.com
Sent: Tuesday, December 30, 2014 8:22:47 PM
Subject: Re: [ovirt-users] VM failover with ovirt3.5

The vdsm.log from just after I put the host where the HE VM is running into
local maintenance mode.

In the log, there is a part like:

---
GuestMonitor-HostedEngine::DEBUG::2014-12-30
13:01:03,988::vm::486::vm.Vm::(_getUserCpuTuneInfo)
vmId=`0d3adb5c-0960-483c-9d73-5e256a519f2f`::Domain Metadata is not
set
GuestMonitor-HostedEngine::DEBUG::2014-12-30
13:01:03,989::vm::486::vm.Vm::(_getUserCpuTuneInfo)
vmId=`0d3adb5c-0960-483c-9d73-5e256a519f2f`::Domain Metadata is not
set
GuestMonitor-HostedEngine::DEBUG::2014-12-30
13:01:03,990::vm::486::vm.Vm::(_getUserCpuTuneInfo)
vmId=`0d3adb5c-0960-483c-9d73-5e256a519f2f`::Domain Metadata is not
set
JsonRpc (StompReactor)::DEBUG::2014-12-30
13:01:04,675::stompReactor::98::Broker.StompAdapter::(handle_frame)
Handling message StompFrame command='SEND'
JsonRpcServer::DEBUG::2014-12-30
13:01:04,676::__init__::504::jsonrpc.JsonRpcServer::(serve_requests)
Waiting for request
Thread-1806995::DEBUG::2014-12-30
13:01:04,677::stompReactor::163::yajsonrpc.StompServer::(send) Sending
response
JsonRpc (StompReactor)::DEBUG::2014-12-30
13:01:04,678::stompReactor::98::Broker.StompAdapter::(handle_frame)
Handling message StompFrame command='SEND'
JsonRpcServer::DEBUG::2014-12-30
13:01:04,679::__init__::504::jsonrpc.JsonRpcServer::(serve_requests)
Waiting for request
Thread-1806996::DEBUG::2014-12-30
13:01:04,681::vm::486::vm.Vm::(_getUserCpuTuneInfo)
vmId=`0d3adb5c-0960-483c-9d73-5e256a519f2f`::Domain Metadata is not
set
---

Is there something wrong with this?

Thanks,
Cong


 From: Artyom Lukianov aluki...@redhat.com
 Date: 2014年12月29日 23:13:45 GMT-8
 To: Yue, Cong cong_...@alliedtelesis.com
 Cc: Simone Tiraboschi stira...@redhat.com, users@ovirt.org
 users@ovirt.org
 Subject: Re: [ovirt-users] VM failover with ovirt3.5

 The HE VM is migrated only by ovirt-ha-agent and not by the engine, but the
 FatalError is more interesting; can you please provide the vdsm.log for this one?

 - Original Message -
 From: Cong Yue cong_...@alliedtelesis.com
 To: Artyom Lukianov aluki...@redhat.com
 Cc: Simone Tiraboschi stira...@redhat.com, users@ovirt.org
 Sent: Monday, December 29, 2014 8:29:04 PM
 Subject: Re: [ovirt-users] VM failover with ovirt3.5

 I disabled local maintenance mode for all hosts, and then set only the host
 where the HE VM is running to local maintenance mode. The logs are as follows.
 During the migration of the HE VM, a fatal error is shown. By the way, the HE
 VM also cannot be live-migrated; other VMs can.

 ---
 [root@compute2-3 ~]# hosted-engine --set-maintenance --mode=local
 You have new mail in /var/spool/mail/root
 [root@compute2-3 ~]# tail -f /var/log/ovirt-hosted-engine-ha/agent.log
 MainThread::INFO::2014-12-29
 13:16:12,435::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
 Best remote host 10.0.0.92 (id: 3, score: 2400)
 MainThread::INFO::2014-12-29
 13:16:22,711::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
 Current state EngineUp (score: 2400)
 MainThread::INFO::2014-12-29
 13:16:22,711::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
 Best remote host 10.0.0.92 (id: 3, score: 2400)
 MainThread::INFO::2014-12-29
 13:16:32,978::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
 Current state EngineUp (score: 2400)
 MainThread::INFO::2014-12-29
 

Re: [ovirt-users] HostedEngine Deployment Woes

2014-12-31 Thread Yedidyah Bar David
- Original Message -
 From: Mikola Rose mr...@power-soft.com
 To: users@ovirt.org
 Sent: Tuesday, December 30, 2014 2:12:52 AM
 Subject: [ovirt-users] HostedEngine Deployment Woes
 
 
 Hi List Members;
 
 I have been struggling with deploying the oVirt hosted engine; I keep running
 into a timeout during the Misc configuration stage. Any suggestion on how I can
 troubleshoot this?
 
 Redhat 2.6.32-504.3.3.el6.x86_64
 
 Installed Packages
 ovirt-host-deploy.noarch 1.2.5-1.el6ev @rhel-6-server-rhevm-3.4-rpms
 ovirt-host-deploy-java.noarch 1.2.5-1.el6ev @rhel-6-server-rhevm-3.4-rpms
 ovirt-hosted-engine-ha.noarch 1.1.6-3.el6ev @rhel-6-server-rhevm-3.4-rpms
 ovirt-hosted-engine-setup.noarch 1.1.5-1.el6ev @rhel-6-server-rhevm-3.4-rpms
 rhevm-setup-plugin-ovirt-engine.noarch 3.4.4-2.2.el6ev
 @rhel-6-server-rhevm-3.4-rpms
 rhevm-setup-plugin-ovirt-engine-common.noarch 3.4.4-2.2.el6ev
 @rhel-6-server-rhevm-3.4-rpms

So this is RHEV (3.4) or ovirt?

 
 
 Please confirm installation settings (Yes, No)[No]: Yes
 [ INFO ] Stage: Transaction setup
 [ INFO ] Stage: Misc configuration
 [ INFO ] Stage: Package installation
 [ INFO ] Stage: Misc configuration
 [ INFO ] Configuring libvirt
 [ INFO ] Configuring VDSM
 [ INFO ] Starting vdsmd
 [ INFO ] Waiting for VDSM hardware info
 [ INFO ] Waiting for VDSM hardware info
 [ INFO ] Connecting Storage Domain
 [ INFO ] Connecting Storage Pool
 [ INFO ] Verifying sanlock lockspace initialization
 [ INFO ] sanlock lockspace already initialized
 [ INFO ] sanlock metadata already initialized
 [ INFO ] Creating VM Image
 [ INFO ] Disconnecting Storage Pool
 [ INFO ] Start monitoring domain
 [ ERROR ] Failed to execute stage 'Misc configuration': The read operation
 timed out
 [ INFO ] Stage: Clean up
 [ INFO ] Generating answer file '/etc/ovirt-hosted-engine/answers.conf'
 [ INFO ] Stage: Pre-termination
 [ INFO ] Stage: Termination
 
 
 
 2014-12-29 14:53:41 DEBUG
 otopi.plugins.ovirt_hosted_engine_setup.sanlock.lockspace
 lockspace._misc:133 Ensuring lease for lockspace hosted-engine, host id 1 is
 acquired (file:
 /rhev/data-center/mnt/192.168.0.75:_Volumes_Raid1/8094d528-7aa2-4c28-839f-73d7c8bcfebb/ha_agent/hosted-engine.lockspace)
 2014-12-29 14:53:41 INFO
 otopi.plugins.ovirt_hosted_engine_setup.sanlock.lockspace
 lockspace._misc:144 sanlock lockspace already initialized
 2014-12-29 14:53:41 INFO
 otopi.plugins.ovirt_hosted_engine_setup.sanlock.lockspace
 lockspace._misc:157 sanlock metadata already initialized
 2014-12-29 14:53:41 DEBUG otopi.context context._executeMethod:138 Stage misc
 METHOD otopi.plugins.ovirt_hosted_engine_setup.vm.image.Plugin._misc
 2014-12-29 14:53:41 INFO otopi.plugins.ovirt_hosted_engine_setup.vm.image
 image._misc:162 Creating VM Image
 2014-12-29 14:53:41 DEBUG otopi.plugins.ovirt_hosted_engine_setup.vm.image
 image._misc:163 createVolume
 2014-12-29 14:53:42 DEBUG otopi.plugins.ovirt_hosted_engine_setup.vm.image
 image._misc:184 Created volume d8e7eed4-c763-4b3d-8a71-35f2d692a73d, request
 was:
 - image: 9043e535-ea94-41f8-98df-6fdbfeb107c3
 - volume: e6a9291d-ac21-4a95-b43c-0d6e552baaa2
 2014-12-29 14:53:42 DEBUG otopi.ovirt_hosted_engine_setup.tasks tasks.wait:48
 Waiting for existing tasks to complete
 2014-12-29 14:53:43 DEBUG otopi.ovirt_hosted_engine_setup.tasks tasks.wait:48
 Waiting for existing tasks to complete
 2014-12-29 14:53:43 DEBUG otopi.context context._executeMethod:138 Stage misc
 METHOD otopi.plugins.ovirt_hosted_engine_setup.vm.boot_disk.Plugin._misc
 2014-12-29 14:53:43 DEBUG otopi.context context._executeMethod:144 condition
 False
 2014-12-29 14:53:43 DEBUG otopi.context context._executeMethod:138 Stage misc
 METHOD
 otopi.plugins.ovirt_hosted_engine_setup.storage.storage.Plugin._disconnect_pool
 2014-12-29 14:53:43 INFO
 otopi.plugins.ovirt_hosted_engine_setup.storage.storage
 storage._disconnect_pool:971 Disconnecting Storage Pool
 2014-12-29 14:53:43 DEBUG otopi.ovirt_hosted_engine_setup.tasks tasks.wait:48
 Waiting for existing tasks to complete
 2014-12-29 14:53:43 DEBUG
 otopi.plugins.ovirt_hosted_engine_setup.storage.storage storage._spmStop:602
 spmStop
 2014-12-29 14:53:43 DEBUG
 otopi.plugins.ovirt_hosted_engine_setup.storage.storage storage._spmStop:611
 2014-12-29 14:53:43 DEBUG
 otopi.plugins.ovirt_hosted_engine_setup.storage.storage
 storage._storagePoolConnection:573 disconnectStoragePool
 2014-12-29 14:53:45 INFO
 otopi.plugins.ovirt_hosted_engine_setup.storage.storage
 storage._disconnect_pool:975 Start monitoring domain
 2014-12-29 14:53:45 DEBUG
 otopi.plugins.ovirt_hosted_engine_setup.storage.storage
 storage._startMonitoringDomain:529 _startMonitoringDomain
 2014-12-29 14:53:46 DEBUG
 otopi.plugins.ovirt_hosted_engine_setup.storage.storage
 storage._startMonitoringDomain:534 {'status': {'message': 'OK', 'code': 0}}
 2014-12-29 14:53:51 DEBUG otopi.ovirt_hosted_engine_setup.tasks
 tasks.wait:127 Waiting for domain monitor
 2014-12-29 14:54:51 DEBUG otopi.context context._executeMethod:152 method
 

Re: [ovirt-users] feedback-on-oVirt-engine-3.5.0.1-1.el6

2014-12-31 Thread Yedidyah Bar David
- Original Message -
 From: bingozhou2013 bingozhou2...@hotmail.com
 To: users users@ovirt.org
 Sent: Friday, December 26, 2014 5:33:57 AM
 Subject: [ovirt-users] feedback-on-oVirt-engine-3.5.0.1-1.el6
 
 Dear Sir,
 When I try to install oVirt-engine 3.5 on CentOS 6.6, the error below is
 shown:
 -- Finished Dependency Resolution
 Error: Package: ovirt-engine-backend-3.5.0.1-1.el6.noarch (ovirt-3.5)
 Requires: novnc
 You could try using --skip-broken to work around the problem
 You could try running: rpm -Va --nofiles --nodigest
 I have added the EPEL repository and installed ovirt-release35.rpm, but it
 still shows "Requires: novnc". Please help me check this. Thank you very much!

Hopefully fixed, even if the current fix is only temporary. See [1], [2].

[1] https://bugzilla.redhat.com/show_bug.cgi?id=1177290
[2] http://lists.ovirt.org/pipermail/users/2014-December/030317.html

Best,
-- 
Didi


Re: [ovirt-users] hosted-engine --deploy fails

2014-12-31 Thread Yedidyah Bar David
- Original Message -
 From: Andreas Mather andr...@allaboutapps.at
 To: users@ovirt.org
 Sent: Wednesday, December 24, 2014 11:29:58 PM
 Subject: Re: [ovirt-users] hosted-engine --deploy fails
 
 Hi All!
 
 Just did more research on this and it seems as if the reason was related to
 my interface configuration. Disclaimer upfront: I've a public IP configured
 on this server (since it's a hosted root server), but changed the IP addr
 here to 192.168.0.99
 
 I started with the output from `vdsm-tool restore-nets':
 ipv4addr, prefix = addr['address'].split('/')
 ValueError: need more than 1 value to unpack
 
 So I dumped the addr dictionary:
 {'address': '192.168.0.99',
 'family': 'inet',
 'flags': frozenset(['permanent']),
 'index': 2,
 'label': 'eth0',
 'prefixlen': 32,
 'scope': 'universe'}
 
 I've no clue why there's no /32 at the end, but that's what my netmask
 actually is due to the special configuration I got from my hosting provider:
 
 [root@vhost1 ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth0
 DEVICE=eth0
 BOOTPROTO=none
 ONBOOT=yes
 HWADDR=00:52:9F:A8:AA:BB
 IPADDR=192.168.0.99
 NETMASK=255.255.255.255
 SCOPE=peer 192.168.0.1
 
 (again, public IPs changed to private one, if that matters. And I skipped the
 IPv6 config above...)
 
 So what I did next, was to patch the netinfo.py:
 [root@vhost1 vdsm]# diff -u netinfo_orig.py netinfo.py
 --- netinfo_orig.py 2014-12-24 22:16:23.362198715 +0100
 +++ netinfo.py 2014-12-24 22:16:02.567625247 +0100
 @@ -368,7 +368,12 @@
          if addr['family'] == 'inet':
              ipv4addrs.append(addr['address'])
              if 'secondary' not in addr['flags']:
 -                ipv4addr, prefix = addr['address'].split('/')
 +                # Assume /32 if no prefix was found
 +                if addr['address'].find('/') == -1:
 +                    ipv4addr = addr['address']
 +                    prefix = 32
 +                else:
 +                    ipv4addr, prefix = addr['address'].split('/')
                  ipv4netmask = prefix2netmask(addr['prefixlen'])
          else:
              ipv6addrs.append(addr['address'])
 
 
 and recompiled it:
 [root@vhost1 vdsm]# python -m py_compile netinfo.py
 
 
 
 Et voilà:
 vdsm-tool ran fine:
 `hosted-engine --deploy' passed the previous failing stage!
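 To see the same idea standalone, a small illustration using the Python 3
 standard-library ipaddress module (the patched vdsm code above is Python 2 and
 uses its own prefix2netmask() instead):

 from ipaddress import ip_interface

 def parse_ipv4(address):
     # Same assumption as the patch: a bare address means a /32 host address.
     if '/' not in address:
         address += '/32'
     iface = ip_interface(address)
     return str(iface.ip), iface.network.prefixlen, str(iface.netmask)

 print(parse_ipv4('192.168.0.99'))      # ('192.168.0.99', 32, '255.255.255.255')
 print(parse_ipv4('192.168.0.99/24'))   # ('192.168.0.99', 24, '255.255.255.0')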

Thanks for the great analysis, report and patch!
Would you like to push it to gerrit? See [1] and [2].

Adding Dan in case you do not want to, so that your patch isn't lost...

 
 Hope this helps to find the root cause

Not sure what you mean - did you have any other problem after
applying your patch? Seems to me that the root cause is some
code (the part you patched or something earlier) did not expect
a prefix of /32, which is indeed quite rare. Not even certain
how it works - did you also get a default gateway? How can you
access it, if it's not in your subnet?

[1] http://www.ovirt.org/Develop
[2] http://www.ovirt.org/Working_with_oVirt_Gerrit

Best regards,
-- 
Didi


[ovirt-users] engine-iso-uploader unexpected behaviour

2014-12-31 Thread Steve Atkinson
When attempting to use engine-iso-uploader to drop ISOs in my ISO storage
domain I get the following results.

Using engine-iso-uploader --iso-domain=[domain] upload [iso] does not work
because the engine does not have access to our storage network. So it
attempts to mount to an address that is not routable. I thought to resolve
this by adding an interface to the Hosted Engine, only to find that I
cannot modify the Engine's VM config from the GUI. I receive the
error: Cannot add Interface. This VM is not managed by the engine.
Actually, I get that error whenever I attempt to modify anything about the
engine. Maybe this is expected behavior? I can't find any best practices
regarding Hosted Engine administration.

Alternatively, using engine-iso-uploader --nfs-server=[path] upload [iso]
--verbose returns the following error:

ERROR: local variable 'domain_type' referenced before assignment
INFO: Use the -h option to see usage.
DEBUG: Configuration:
DEBUG: command: upload
DEBUG: Traceback (most recent call last):
DEBUG:   File "/usr/bin/engine-iso-uploader", line 1440, in <module>
DEBUG:     isoup = ISOUploader(conf)
DEBUG:   File "/usr/bin/engine-iso-uploader", line 455, in __init__
DEBUG:     self.upload_to_storage_domain()
DEBUG:   File "/usr/bin/engine-iso-uploader", line 1089, in
upload_to_storage_domain
DEBUG:     elif domain_type in ('localfs', ):
DEBUG: UnboundLocalError: local variable 'domain_type' referenced before
assignment

Engine is Self Hosted and is Version 3.5.0.1-1.el6.

Thanks!
-Steve


Re: [ovirt-users] engine-iso-uploader unexpected behaviour

2014-12-31 Thread Yedidyah Bar David
- Original Message -
 From: Steve Atkinson satkin...@telvue.com
 To: users@ovirt.org
 Sent: Wednesday, December 31, 2014 7:15:23 PM
 Subject: [ovirt-users] engine-iso-uploader unexpected behaviour
 
 When attempting use the engine-iso-uploader to drop ISOs in my iso storage
 domain I get the following results.
 
 Using engine-iso-uploader --iso-domain=[domain] upload [iso] does not work
 because the engine does not have access to our storage network. So it
 attempts to mount to an address that is not routable. I thought to resolve
 this by adding an interfaces to the Hosted Engine, only to find that I
 cannot modify the Engine's VM config from the GUI. I receive the error:
 Cannot add Interface. This VM is not managed by the engine. Actually, I get
 that error whenever I attempt to modify anything about the engine. Maybe
 this is expected behavior? I can't find any bestpractices regarding Hosted
 Engine administration.
 
 Alternatively, using engine-iso-uploader --nfs-server=[path] upload [iso]
 --verbose returns the following error:
 
 ERROR: local variable 'domain_type' referenced before assignment
 INFO: Use the -h option to see usage.
 DEBUG: Configuration:
 DEBUG: command: upload
 DEBUG: Traceback (most recent call last):
 DEBUG: File /usr/bin/engine-iso-uploader, line 1440, in module
 DEBUG: isoup = ISOUploader(conf)
 DEBUG: File /usr/bin/engine-iso-uploader, line 455, in __init__
 DEBUG: self.upload_to_storage_domain()
 DEBUG: File /usr/bin/engine-iso-uploader, line 1089, in
 upload_to_storage_domain
 DEBUG: elif domain_type in ('localfs', ):
 DEBUG: UnboundLocalError: local variable 'domain_type' referenced before
 assignment

Do you run it from the engine's machine? The host? Elsewhere?
Where is the iso domain?

This sounds to me like a bug, but I didn't check the sources yet.
Please open one. Thanks.

That said, you can simply copy your iso file directly to the correct
directory inside the iso domain, which is:

/path-to-iso-domain/SOME-UUID/images/----/

Make sure it's readable to vdsm:kvm (36:36).
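For a scripted variant of that manual copy, a minimal sketch (the mount point
and image-directory names below are placeholders, not real values):

import os
import shutil

iso_dir = '/path-to-iso-domain/SOME-UUID/images/SOME-IMAGE-UUID/'  # placeholder
src = '/tmp/my.iso'                                                 # placeholder
dst = os.path.join(iso_dir, os.path.basename(src))

shutil.copy(src, dst)     # copy the ISO into the domain's images directory
os.chown(dst, 36, 36)     # vdsm:kvm
os.chmod(dst, 0o644)      # make sure vdsm can read it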

Best,
-- 
Didi


Re: [ovirt-users] engine-iso-uploader unexpected behaviour (steve a)

2014-12-31 Thread Steve Atkinson
On Wed, Dec 31, 2014 at 12:47 PM, Yedidyah Bar David d...@redhat.com
wrote:

 - Original Message -
  From: Steve Atkinson satkin...@telvue.com
  To: users@ovirt.org
  Sent: Wednesday, December 31, 2014 7:15:23 PM
  Subject: [ovirt-users] engine-iso-uploader unexpected behaviour
 
  When attempting use the engine-iso-uploader to drop ISOs in my iso
 storage
  domain I get the following results.
 
  Using engine-iso-uploader --iso-domain=[domain] upload [iso] does not
 work
  because the engine does not have access to our storage network. So it
  attempts to mount to an address that is not routable. I thought to
 resolve
  this by adding an interfaces to the Hosted Engine, only to find that I
  cannot modify the Engine's VM config from the GUI. I receive the error:
  Cannot add Interface. This VM is not managed by the engine. Actually, I
 get
  that error whenever I attempt to modify anything about the engine. Maybe
  this is expected behavior? I can't find any bestpractices regarding
 Hosted
  Engine administration.
 
  Alternatively, using engine-iso-uploader --nfs-server=[path] upload [iso]
  --verbose returns the following error:
 
  ERROR: local variable 'domain_type' referenced before assignment
  INFO: Use the -h option to see usage.
  DEBUG: Configuration:
  DEBUG: command: upload
  DEBUG: Traceback (most recent call last):
  DEBUG: File /usr/bin/engine-iso-uploader, line 1440, in module
  DEBUG: isoup = ISOUploader(conf)
  DEBUG: File /usr/bin/engine-iso-uploader, line 455, in __init__
  DEBUG: self.upload_to_storage_domain()
  DEBUG: File /usr/bin/engine-iso-uploader, line 1089, in
  upload_to_storage_domain
  DEBUG: elif domain_type in ('localfs', ):
  DEBUG: UnboundLocalError: local variable 'domain_type' referenced before
  assignment

 Do you run it from the engine's machine? The host? Elsewhere?
 Where is the iso domain?

 This sounds to me like a bug, but I didn't check the sources yet.
 Please open one. Thanks.

 That said, you can simply copy your iso file directly to the correct
 directory inside the iso domain, which is:

 /path-to-iso-domain/SOME-UUID/images/----/

 Make sure it's readable to vdsm:kvm (36:36).

 Best,
 --
 Didi


I'm running the command from the Engine itself; it seems to be the only box
that has this command available. The ISO domain utilizes the same root
share as the DATA and EXPORT domains which seem to work fine. The structure
looks something like:

server:/nfs-share/ovirt-store/iso-store/UUIDblahblah/images/
server:/nfs-share/ovirt-store/export-store/path/path/path
server:/nfs-share/ovirt-store/data-store/UUIDblahblah/images/

Each storage domain was created through the Engine. Perms for each are
below, and they persist all the way down the tree:
drwxr-xr-x.  3 vdsm kvm 4 Nov 14 19:02 data-store
drwxr-xr-x.  3 vdsm kvm 4 Nov 14 19:04 export-store
drwxr-xr-x.  3 vdsm kvm 4 Nov 14 18:18 hosted-engine
drwxr-xr-x.  3 vdsm kvm 4 Nov 14 19:04 iso-store

If I attempt to mount any of them via NFS from our management network they
work just fine (moved files around, did read/write operations).
I copied the ISO I needed directly to it and changed the perms/ownership by
hand, which seems to have worked as a short-term solution.

I can see why the --iso-domain argument has issues, as it is trying to use
our storage network, which isn't routable from the Engine since it only
has the one network interface. Although that does seem like an oversight:
it seems like this transfer should pass through the SPM and not try to
directly mount the NFS share when the --iso-domain flag is used.

Thanks for the quick response.


Re: [ovirt-users] engine-iso-uploader unexpected behaviour (steve a)

2014-12-31 Thread Yedidyah Bar David
- Original Message -
 From: Steve Atkinson satkin...@telvue.com
 To: Yedidyah Bar David d...@redhat.com
 Cc: users@ovirt.org
 Sent: Wednesday, December 31, 2014 8:22:14 PM
 Subject: Re: [ovirt-users] engine-iso-uploader unexpected behaviour (steve a)
 If I attempt to mount any of them via NFS from our management network they
 work just fine. (moved around, read/write operations)
 Copied the ISO I needed directly to it and changed the perms/ownership by
 hand which seems to have worked as a short term solution.

Good. Not sure it's that short term - it was suggested several times
here and people seem to use it.

 
 I can see why the --iso-domain argument has issues as it is trying to use
 the our storage network, which isn't routable from the Engine as it only
 has the one network interface. Although that does seem like an oversight.
 Seems like this transfer should pass through the SPM and not try to
 directly mount the NFS share if the --iso-domain flag is used.

Indeed this is a somewhat complex subject, which is why it has been somewhat
neglected so far. There has been some recent work to fix this, see e.g.:

https://bugzilla.redhat.com/show_bug.cgi?id=1122970

http://lists.ovirt.org/pipermail/devel/2014-December/thread.html#9481
http://lists.ovirt.org/pipermail/devel/2014-December/thread.html#9565
(yes, two separate discussions this month).

Best,
-- 
Didi


Re: [ovirt-users] engine-iso-uploader unexpected behaviour (steve a)

2014-12-31 Thread Steve Atkinson
Ok, well, thanks for the help. Moving it by hand is not a huge deal. I just
assumed that since the docs recommend using the command, it was worth
mentioning. Case closed, I guess, unless we want to consider the NFS error
worthy of a bug report?
On Dec 31, 2014 3:34 PM, Yedidyah Bar David d...@redhat.com wrote:

 - Original Message -
  From: Steve Atkinson satkin...@telvue.com
  To: Yedidyah Bar David d...@redhat.com
  Cc: users@ovirt.org
  Sent: Wednesday, December 31, 2014 8:22:14 PM
  Subject: Re: [ovirt-users] engine-iso-uploader unexpected behaviour
 (steve a)
  If I attempt to mount any of them via NFS from our management network
 they
  work just fine. (moved around, read/write operations)
  Copied the ISO I needed directly to it and changed the perms/ownership by
  hand which seems to have worked as a short term solution.

 Good. Not sure it's that short term - it was suggested several times
 here and people seem to use it.

 
  I can see why the --iso-domain argument has issues as it is trying to use
  the our storage network, which isn't routable from the Engine as it only
  has the one network interface. Although that does seem like an oversight.
  Seems like this transfer should pass through the SPM and not try to
  directly mount the NFS share if the --iso-domain flag is used.

 Indeed this is a somewhat complex subject, which is why it was somewhat
 neglected so far. There is some work to fix this recently, see e.g.:

 https://bugzilla.redhat.com/show_bug.cgi?id=1122970

 http://lists.ovirt.org/pipermail/devel/2014-December/thread.html#9481
 http://lists.ovirt.org/pipermail/devel/2014-December/thread.html#9565
 (yes, two separate discussions this month).

 Best,
 --
 Didi



Re: [ovirt-users] engine-iso-uploader unexpected behaviour (steve a)

2014-12-31 Thread Yedidyah Bar David
- Original Message -
 From: Steve Atkinson satkin...@telvue.com
 To: Yedidyah Bar David d...@redhat.com
 Cc: users@ovirt.org
 Sent: Wednesday, December 31, 2014 10:43:23 PM
 Subject: Re: [ovirt-users] engine-iso-uploader unexpected behaviour (steve a)
 
 Ok well thanks for the help. Moving it by hand is not a huge deal. I just
 assumed that since it was recommended to use the command in the docs that
 it was worth mentioning. case closed I guess. unless we want to consider
 the NFS error worthy of a bug report?

You mean the attempt to NFS-mount the storage directly from the engine?
It has an option to use ssh instead; you can try that too if you feel
curious. See the man page.

If it's something else, I missed it - please give more details.

Of course you can open a bug if you want. I guess it will be low priority
and will probably be solved eventually by reimplementing the command-line
tool using the work that will be done for the upload-from-GUI bug.
Still, detailing your structure/flow will be useful as a test case.

Thanks and best regards,
-- 
Didi


Re: [ovirt-users] engine-iso-uploader unexpected behaviour (steve a)

2014-12-31 Thread Steve Atkinson
In our case our NAS doesn't support scp, but that's no fault of the
manager.

I was mentioning more specifically that the --nfs-server method seemed to
throw an error on an undefined variable 'domain_type', which is not something
that can be supplied in the config file or passed as an argument. The output
from the debug is further up in my first email.

You are right, though; probably not worth it since the best solution would
be a GUI implementation.

Thanks again.
On Dec 31, 2014 4:10 PM, Yedidyah Bar David d...@redhat.com wrote:

 - Original Message -
  From: Steve Atkinson satkin...@telvue.com
  To: Yedidyah Bar David d...@redhat.com
  Cc: users@ovirt.org
  Sent: Wednesday, December 31, 2014 10:43:23 PM
  Subject: Re: [ovirt-users] engine-iso-uploader unexpected behaviour
 (steve a)
 
  Ok well thanks for the help. Moving it by hand is not a huge deal. I just
  assumed that since it was recommended to use the command in the docs that
  it was worth mentioning. case closed I guess. unless we want to consider
  the NFS error worthy of a bug report?

 You mean the attempt to nfs mount the storage directly from the engine?
 It has an option to use ssh instead, you can try that too if you feel
 curious. See the man page.

 If it's something else, I missed it - please give more details.

 Of course you can open a bug if you want. I guess it will be low priority
 and will probably solved eventually by reimplementing the command line
 tool using the work that will be done for the upload from gui bug.
 Still, detailing your structure/flow will be useful as a test case.

 Thanks and best regards,
 --
 Didi



Re: [ovirt-users] engine-iso-uploader unexpected behaviour (steve a)

2014-12-31 Thread Yedidyah Bar David
- Original Message -
 From: Steve Atkinson satkin...@telvue.com
 To: Yedidyah Bar David d...@redhat.com
 Cc: users@ovirt.org
 Sent: Wednesday, December 31, 2014 11:18:39 PM
 Subject: Re: [ovirt-users] engine-iso-uploader unexpected behaviour (steve a)
 
 in our case our nas doesn't support scp but that no fault of the the
 manager.
 
 I was mentioning more specifically that the --nfs-server method seemed to
 throw an error on an undefined attribute 'domaintype' that is not something
 that can be supplied in the config file or passed as an argument. the out
 put from the the debug is further up in my first email.

Sorry.

You wrote:
 Alternatively, using engine-iso-uploader --nfs-server=[path] upload [iso]
 --verbose returns the following error:

 ERROR: local variable 'domain_type' referenced before assignment
 INFO: Use the -h option to see usage.
 DEBUG: Configuration:
 DEBUG: command: upload
 DEBUG: Traceback (most recent call last):
 DEBUG: File /usr/bin/engine-iso-uploader, line 1440, in module
 DEBUG: isoup = ISOUploader(conf)
 DEBUG: File /usr/bin/engine-iso-uploader, line 455, in __init__
 DEBUG: self.upload_to_storage_domain()
 DEBUG: File /usr/bin/engine-iso-uploader, line 1089, in
 upload_to_storage_domain
 DEBUG: elif domain_type in ('localfs', ):
 DEBUG: UnboundLocalError: local variable 'domain_type' referenced before
 assignment

That's indeed a bug. Now pushed a one-line fix [1]. If you can, please
try it and report. Thanks!

[1] http://gerrit.ovirt.org/36499

Feel free to also open a bz for inclusion in some version earlier than 3.6
(our current master branch).
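For reference, a minimal reproduction of that error pattern (this is not the
engine-iso-uploader source, and not necessarily the fix that went to gerrit):

def upload_to_storage_domain(iso_domain=None, nfs_server=None):
    if iso_domain:
        domain_type = 'nfs'   # in the real tool this is looked up via the API
    # With only --nfs-server given, nothing assigns domain_type, so the next
    # line raises UnboundLocalError ("referenced before assignment").
    if domain_type in ('localfs', ):
        raise RuntimeError('localfs domains are handled differently')
    return 'would upload to %s' % (iso_domain or nfs_server)

try:
    upload_to_storage_domain(nfs_server='server:/nfs-share/ovirt-store/iso-store')
except UnboundLocalError as err:
    print(err)

Initializing the name up front (e.g. domain_type = None) before the branching,
or handling the --nfs-server path explicitly, is the kind of one-line guard
that avoids the crash.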

 
 you are right though. probably not worth it since the best solution would
 be a gui implementation.

That's not what I meant - a GUI is obviously more comfortable for casual
use, but for batch/scripting etc. you want a command-line tool. Hopefully,
though, our implementation for the GUI uploader will land first in the REST
API, then in the GUI. The REST API will also be useful for scripting -
either directly or by reimplementing the current tool with it.

Thanks and best regards,
-- 
Didi


Re: [ovirt-users] 1. Re: ??: bond mode balance-alb (Jorick Astrego)

2014-12-31 Thread Christopher Young
I'm a little confused by your explanation of 'just do the bonding at the
guest level'.  I apologize for my ignorance here, but I'm trying to prepare
myself for a similar configuration where I'm going to need to get as much
bandwidth out of the bond as possible.  How would bonding multiple
interfaces at the VM level provide a better balance than at the hypervisor
level?  Wouldn't the traffic more or less end up traveling the same path
regardless of the virtual interface?

I'm trying to plan out an oVirt implementation where I would like to bond
multiple interfaces on my hypervisor nodes for balancing/redundancy, and
I'm very curious what others have done with Cisco hardware (in my case, a
pair of 3650's with MEC) in order to get the best solution.

I will read through these threads and see if I can gain a better
understanding, but if you happen to have an easy explanation that would
help me understand, I would greatly appreciate it.


On Wed, Dec 31, 2014 at 1:01 AM, Blaster blas...@556nato.com wrote:


 Thanks for your thoughts.  The problem is, most of the data is transmitted
 from a couple apps to a couple systems.  The chance of a hash collision
 (i.e., most of the data going out the same interface anyway) is quite
 high.  On Solaris, I just created two physical interfaces each with their
 own IP, and bound the apps to the appropriate interfaces.  This worked
 great.  Imagine my surprise when I discovered this doesn’t work on Linux
 and my crash course on weak host models.

 Interesting that no one commented on my thought to just do the bonding at
 the guest level (and use balance-alb) instead of at the hypervisor level.
 Some ESXi experts I have talked to say this is actually the preferred
 method with ESXi and not to do it at the hypervisor level, as the VM knows
 better than VMware.

 Or is the bonding mode issue with balance-alb/tlb more with the Linux TCP
 stack  itself and not with oVirt and VDSM?



 On Dec 30, 2014, at 4:34 AM, Nikolai Sednev nsed...@redhat.com wrote:

 Mode 2 will do the job best for you in the case of static LAG supported
 only on the switch's side; I'd advise using xmit_hash_policy layer3+4, so
 you'll get better distribution for your DC.


 Thanks in advance.

 Best regards,
 Nikolai
 
 Nikolai Sednev
 Senior Quality Engineer at Compute team
 Red Hat Israel
 34 Jerusalem Road,
 Ra'anana, Israel 43501

 Tel:   +972   9 7692043
 Mobile: +972 52 7342734
 Email: nsed...@redhat.com
 IRC: nsednev

 --
 *From: *users-requ...@ovirt.org
 *To: *users@ovirt.org
 *Sent: *Tuesday, December 30, 2014 2:12:58 AM
 *Subject: *Users Digest, Vol 39, Issue 173

 Send Users mailing list submissions to
 users@ovirt.org

 To subscribe or unsubscribe via the World Wide Web, visit
 http://lists.ovirt.org/mailman/listinfo/users
 or, via email, send a message with subject or body 'help' to
 users-requ...@ovirt.org

 You can reach the person managing the list at
 users-ow...@ovirt.org

 When replying, please edit your Subject line so it is more specific
 than Re: Contents of Users digest...


 Today's Topics:

1. Re:  ??: bond mode balance-alb (Jorick Astrego)
2. Re:  ??: bond mode balance-alb (Jorick Astrego)
3.  HostedEngine Deployment Woes (Mikola Rose)


 --

 Message: 1
 Date: Mon, 29 Dec 2014 20:13:40 +0100
 From: Jorick Astrego j.astr...@netbulae.eu
 To: users@ovirt.org
 Subject: Re: [ovirt-users] ??: bond mode balance-alb
 Message-ID: 54a1a7e4.90...@netbulae.eu
 Content-Type: text/plain; charset=utf-8


 On 12/29/2014 12:56 AM, Dan Kenigsberg wrote:
  On Fri, Dec 26, 2014 at 12:39:45PM -0600, Blaster wrote:
  On 12/23/2014 2:55 AM, Dan Kenigsberg wrote:
  Bug 1094842 - Bonding modes 0, 5 and 6 should be avoided for VM
 networks
  https://bugzilla.redhat.com/show_bug.cgi?id=1094842#c0
  Dan,
 
  What is bad about these modes that oVirt can't use them?
  I can only quote jpirko's words from the link above:
 
  Do not use tlb or alb in bridge, never! It does not work, that's it.
 The reason
  is it mangles source macs in xmit frames and arps. When it is
 possible, just
  use mode 4 (lacp). That should be always possible because all
 enterprise
  switches support that. Generally, for 99% of use cases, you *should*
 use mode
  4. There is no reason to use other modes.
 
 This switch is more of an office switch and only supports part of the
 802.3ad standard:


 PowerConnect 2824

 Scalable from small workgroups to dense access solutions, the 2824
 offers 24-port flexibility plus two combo small-form-factor
 pluggable (SFP) ports for connecting the switch to other networking
 equipment located beyond the 100 m distance limitations of copper
 cabling.

 Industry-standard link aggregation adhering to IEEE 802.3ad
 standards (static support only, LACP not supported)


 So 

Re: [ovirt-users] VM failover with ovirt3.5

2014-12-31 Thread Yue, Cong
Thanks for the advice. I applied the patch for clientIF.py as
- port = config.getint('addresses', 'management_port')
+ port = config.get('addresses', 'management_port')
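A small illustration (not vdsm code) of why that one-line change matters:
ConfigParser.get() returns the port as a string, while getint() returns an int
that later breaks the "addr + ':' + port" concatenation in vdscli.

from ConfigParser import ConfigParser   # Python 2, as used by vdsm here

cfg = ConfigParser()
cfg.add_section('addresses')
cfg.set('addresses', 'management_port', '54321')

print(repr(cfg.get('addresses', 'management_port')))     # '54321' (str)
print(repr(cfg.getint('addresses', 'management_port')))  # 54321 (int)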

Now there is no fatal error in beam.log, and migration starts to happen
when I set the host where the HE VM is running to local maintenance mode. But it
finally fails with the following log. The HE VM also cannot be live-migrated in
my environment.

MainThread::INFO::2014-12-31
19:08:06,197::states::759::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(consume)
Continuing to monitor migration
MainThread::INFO::2014-12-31
19:08:06,430::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state EngineMigratingAway (score: 2000)
MainThread::INFO::2014-12-31
19:08:06,430::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.93 (id: 2, score: 2400)
MainThread::ERROR::2014-12-31
19:08:16,490::hosted_engine::867::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_monitor_migration)
Failed to migrate
Traceback (most recent call last):
 File 
/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/agent/hosted_engine.py,
line 863, in _monitor_migration
   vm_id,
 File 
/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/vds_client.py,
line 85, in run_vds_client_cmd
   response['status']['message'])
DetailedError: Error 47 from migrateStatus: Migration canceled
MainThread::INFO::2014-12-31
19:08:16,501::brokerlink::111::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(notify)
Trying: notify time=1420070896.5 type=state_transition
detail=EngineMigratingAway-ReinitializeFSM hostname='compute2-3'
MainThread::INFO::2014-12-31
19:08:16,502::brokerlink::120::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(notify)
Success, was notification of state_transition
(EngineMigratingAway-ReinitializeFSM) sent? ignored
MainThread::INFO::2014-12-31
19:08:16,805::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state ReinitializeFSM (score: 0)
MainThread::INFO::2014-12-31
19:08:16,805::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.93 (id: 2, score: 2400)

Besides, I also tried with other VMs instead of the HE VM, but failover does
not happen (migration does not even start to be attempted). I set HA for those
VMs. Is there some log I can check for this?

Please kindly advise.

Thanks,
Cong


 On 2014/12/31, at 0:14, Artyom Lukianov aluki...@redhat.com wrote:

 Ok I found this one:
 Thread-1807180::ERROR::2014-12-30 
 13:02:52,164::migration::165::vm.Vm::(_recover) 
 vmId=`0d3adb5c-0960-483c-9d73-5e256a519f2f`::Failed to destroy remote VM
 Traceback (most recent call last):
 File /usr/share/vdsm/virt/migration.py, line 163, in _recover
  self.destServer.destroy(self._vm.id)
 AttributeError: 'SourceThread' object has no attribute 'destServer'
 Thread-1807180::ERROR::2014-12-30 13:02:52,165::migration::259::vm.Vm::(run) 
 vmId=`0d3adb5c-0960-483c-9d73-5e256a519f2f`::Failed to migrate
 Traceback (most recent call last):
 File /usr/share/vdsm/virt/migration.py, line 229, in run
  self._setupVdsConnection()
 File /usr/share/vdsm/virt/migration.py, line 92, in _setupVdsConnection
  self._dst, self._vm.cif.bindings['xmlrpc'].serverPort)
 File /usr/lib/python2.7/site-packages/vdsm/vdscli.py, line 91, in 
 cannonizeHostPort
  return addr + ':' + port
 TypeError: cannot concatenate 'str' and 'int' objects

 We have bug that already verified for this one 
 https://bugzilla.redhat.com/show_bug.cgi?id=1163771, so patch must be 
 included in latest builds, but you can also take a look on patch, and edit 
 files by yourself on all you machines and restart vdsm.

 - Original Message -
 From: cong yue yuecong1...@gmail.com
 To: aluki...@redhat.com, stira...@redhat.com, users@ovirt.org
 Cc: Cong Yue cong_...@alliedtelesis.com
 Sent: Tuesday, December 30, 2014 8:22:47 PM
 Subject: Re: [ovirt-users] VM failover with ovirt3.5

 The vdsm.log just after I turned the host where HE VM is to local.

 In the log, there is some part like

 ---
 GuestMonitor-HostedEngine::DEBUG::2014-12-30
 13:01:03,988::vm::486::vm.Vm::(_getUserCpuTuneInfo)
 vmId=`0d3adb5c-0960-483c-9d73-5e256a519f2f`::Domain Metadata is not
 set
 GuestMonitor-HostedEngine::DEBUG::2014-12-30
 13:01:03,989::vm::486::vm.Vm::(_getUserCpuTuneInfo)
 vmId=`0d3adb5c-0960-483c-9d73-5e256a519f2f`::Domain Metadata is not
 set
 GuestMonitor-HostedEngine::DEBUG::2014-12-30
 13:01:03,990::vm::486::vm.Vm::(_getUserCpuTuneInfo)
 vmId=`0d3adb5c-0960-483c-9d73-5e256a519f2f`::Domain Metadata is not
 set
 JsonRpc (StompReactor)::DEBUG::2014-12-30
 13:01:04,675::stompReactor::98::Broker.StompAdapter::(handle_frame)
 Handling message StompFrame command='SEND'
 JsonRpcServer::DEBUG::2014-12-30
 13:01:04,676::__init__::504::jsonrpc.JsonRpcServer::(serve_requests)
 Waiting for request