Re: [ovirt-devel] [vdsm] Infrastructure design for node (host) devices

2014-07-01 Thread Michal Skrivanek

On Jun 29, 2014, at 16:55 , Saggi Mizrahi smizr...@redhat.com wrote:

 
 
 - Original Message -
 From: Martin Polednik mpole...@redhat.com
 To: devel@ovirt.org
 Sent: Tuesday, June 24, 2014 1:26:17 PM
 Subject: [ovirt-devel] [vdsm] Infrastructure design for node (host) devices
 
 Hello,
 
 I'm actively working on getting host device passthrough (pci, usb and scsi)
 exposed in VDSM, but I've encountered growing complexity in this feature.
 
 The devices are currently created in the same manner as virtual devices and
 their reporting is done via the hostDevices list in getCaps. As I implemented
 usb and scsi devices, the size of this list nearly doubled - and that is on a laptop.
 There should be a separate verb with the ability to filter by type.

+1
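A verb like that might look roughly like the sketch below. The verb name, the device dicts and the capability strings are hypothetical illustrations, not the actual VDSM API; the real data would come from libvirt's node-device enumeration:

```python
# Hypothetical sketch of a filterable host-device listing verb.
# Device dicts and capability names are made up for illustration.

def hostdevList(devices, caps=None):
    """Return host devices, optionally filtered by capability type.

    devices -- iterable of dicts, each with a 'capability' key
               (e.g. 'pci', 'usb_device', 'scsi')
    caps    -- optional collection of capability types to keep
    """
    if not caps:
        return list(devices)
    wanted = set(caps)
    return [dev for dev in devices if dev.get('capability') in wanted]
```

With such a verb, engine asks only for the kinds it cares about instead of receiving the full list in getCaps.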

 
 A similar problem exists with the devices themselves: they are closely tied to the
 host, and currently engine would have to keep their mapping to VMs, reattach
 loose devices and handle all of this in case of migration.
 Migration sounds very complicated, especially at the phase where the VM actually
 starts running on the target host. The hardware state is completely different,
 but the guest OS wouldn't have any idea that happened.
 So detaching before migration and then reattaching on the destination is a must,
 but that could cause issues in the guest. I'd imagine this would also be an issue
 when hibernating on one host and waking up on another.

If qemu actually supports this at all, it would need to be very specific for
each device; restoring/setting a concrete HW state is a challenging task.
I would also see it as pin-to-host, and then in specific cases detach/attach (or
SR-IOV's fancy temporary emulated device).
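The detach-before-migrate, reattach-on-failure ordering could be sketched like this. The device and VM objects are placeholders; the real calls would go through libvirt's node-device detach/reattach and migration APIs:

```python
# Sketch of the detach/migrate/reattach ordering discussed above.
# dev.detach()/dev.reattach() and vm.migrate() are placeholders for
# the real libvirt node-device and migration calls.

def migrate_with_hostdevs(vm, hostdevs):
    """Detach passthrough devices before migrating; on failure,
    give them back to the source VM so it can keep running."""
    detached = []
    try:
        for dev in hostdevs:
            dev.detach()
            detached.append(dev)
        vm.migrate()
    except Exception:
        # Migration failed: reattach in reverse order on the source host.
        for dev in reversed(detached):
            dev.reattach()
        raise
```

The key point is that the guest sees the devices disappear before migration either way; only the failure path hands them back.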
 
 
 I would like to hear your opinion on building something like a host device pool
 in VDSM. The pool would be populated and periodically updated (to handle
 hot(un)plugs), and VMs/engine could query it for free/assigned/possibly
 problematic devices (which could be reattached by the pool). This has the added
 benefit of requiring fewer libvirt calls, but a bit more complexity and possibly
 one extra thread.
 The persistence of the pool across VDSM restarts could be kept in config or
 constructed from XML.
 I'd much rather VDSM not cache state unless this is absolutely necessary.
 This sounds like something that doesn't need to be queried every 3 seconds,
 so it's best if we just ask libvirt.

Well, unless we try to persist it, a cache doesn't hurt.
I don't see a particular problem in reconstructing the structures on startup.
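A non-persisted cache like that, reconstructed on startup, could be as small as this sketch (the enumeration callable stands in for the real libvirt node-device query; all names here are made up):

```python
import time

class HostDeviceCache(object):
    """In-memory cache of host devices, rebuilt on startup, never persisted.

    list_devices stands in for the libvirt node-device enumeration;
    ttl bounds how stale an answer may be before we re-query.
    """
    def __init__(self, list_devices, ttl=5.0):
        self._list = list_devices
        self._ttl = ttl
        self._devices = None
        self._stamp = 0.0

    def devices(self):
        now = time.time()
        if self._devices is None or now - self._stamp > self._ttl:
            # Cheap to reconstruct: just re-query libvirt.
            self._devices = self._list()
            self._stamp = now
        return self._devices
```

Since nothing is written to disk, a VDSM restart simply starts with an empty cache and the first query repopulates it.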

 
 I do wonder how that kind of thing can be configured in the VM creation
 phase, as you would sometimes want to just specify a type of device and
 sometimes a specific one. Also, I'd assume there will be a fallback policy
 stating whether the VM should run if said resource is unavailable.
 
 I'd need new API verbs to allow engine to communicate with the pool,
 possibly leaving caps as they are; engine could detect the presence of a newer
 vdsm by the presence of these API verbs.
 Again, I think that getting a list of devices filterable by kind/type might
 be better than a real pool. We might want to return whether a device is in use
 (it could also be in use by the host operating system, not just VMs).
 The vmCreate call would remain almost the same, only with the addition of a new
 device type for VMs (where the detach and tracking routine would be communicated
 to the pool).
 ___
 Devel mailing list
 Devel@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/devel
 


[ovirt-devel] Test day: gluster install

2014-07-01 Thread Piotr Kliczewski
I started to test gluster-related features and noticed an issue after installation.
I performed the following steps on my f20 host using xmlrpc:
1. Installed ovirt 3.5 repo.
2. Installed engine
3. Installed vdsm on the same host - status UP
4. Removed vdsm
5. Enabled gluster service
6. Installed vdsm again (tried several times with the same result)

Here is the output that I get:
I can see the glusterd and glusterfsd services being active.

Engine:
2014-07-01 10:38:53,722 WARN
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(org.ovirt.thread.pool-8-thread-12) [3987041c] Correlation ID: null,
Call Stack: null, Custom Event ID: -1, Message: Host fedora's
following network(s) are not synchronized with their Logical Network
configuration: ovirtmgmt.

vdsm:

Thread-13::DEBUG::2014-07-01
10:49:32,670::caps::682::root::(_getKeyPackages) rpm package
('gluster-swift',) not found
Thread-13::DEBUG::2014-07-01
10:49:32,671::caps::682::root::(_getKeyPackages) rpm package
('gluster-swift-object',) not found
Thread-13::DEBUG::2014-07-01
10:49:32,672::caps::682::root::(_getKeyPackages) rpm package
('gluster-swift-plugin',) not found
Thread-13::DEBUG::2014-07-01
10:49:32,673::caps::682::root::(_getKeyPackages) rpm package
('gluster-swift-account',) not found
Thread-13::DEBUG::2014-07-01
10:49:32,673::caps::682::root::(_getKeyPackages) rpm package
('gluster-swift-proxy',) not found
Thread-13::DEBUG::2014-07-01
10:49:32,673::caps::682::root::(_getKeyPackages) rpm package
('gluster-swift-doc',) not found
Thread-13::DEBUG::2014-07-01
10:49:32,674::caps::682::root::(_getKeyPackages) rpm package
('gluster-swift-container',) not found
Thread-13::DEBUG::2014-07-01
10:49:32,674::caps::682::root::(_getKeyPackages) rpm package
('glusterfs-geo-replication',) not found

Thread-13::ERROR::2014-07-01
10:49:38,021::BindingXMLRPC::1123::vds::(wrapper) vdsm exception
occured
Traceback (most recent call last):
  File "/usr/share/vdsm/rpc/BindingXMLRPC.py", line 1110, in wrapper
    res = f(*args, **kwargs)
  File "/usr/share/vdsm/gluster/api.py", line 54, in wrapper
    rv = func(*args, **kwargs)
  File "/usr/share/vdsm/gluster/api.py", line 251, in hostsList
    return {'hosts': self.svdsmProxy.glusterPeerStatus()}
  File "/usr/share/vdsm/supervdsm.py", line 50, in __call__
    return callMethod()
  File "/usr/share/vdsm/supervdsm.py", line 48, in <lambda>
    **kwargs)
  File "<string>", line 2, in glusterPeerStatus
  File "/usr/lib64/python2.7/multiprocessing/managers.py", line 773, in _callmethod
    raise convert_to_error(kind, result)
GlusterCmdExecFailedException: Command execution failed
error: Connection failed. Please check if gluster daemon is operational.

Can someone help me understand what I am missing, or confirm that I should open a BZ?

Thanks,
Piotr


Re: [ovirt-devel] Test day: gluster install

2014-07-01 Thread Kanagaraj Mayilsamy
This can happen if the glusterd service is down.

What does 'service glusterd status' say?

If you find it down, start it with 'service glusterd start'.

Thanks,
Kanagaraj



Re: [ovirt-devel] Test day: gluster install

2014-07-01 Thread Piotr Kliczewski
On Tue, Jul 1, 2014 at 11:56 AM, Kanagaraj Mayilsamy
kmayi...@redhat.com wrote:
 This can happen if the glusterd service is down.

 What does 'service glusterd status' say?

 If you find it down, start it with 'service glusterd start'.


I checked the status of this service and it was active.





[ovirt-devel] ovirt-engine-3.5 branch is way too old

2014-07-01 Thread Alon Bar-Lev
Hi,

The following backlog is post-branching, as branching was done at a random point
in the effort.
As far as I can see all of these should go into 3.5 anyway; if someone can do us
the service and just move the branch on top of master, it will reduce the effort
of each individual developer.
Next time the branch should be created only after at least one bug day is over
and the major issues found are in.
A beta is a tag in time, not a branch in time.

Thanks,
Alon

549d9e6 engine: NetworkValidator uses new validation syntax
9f0310b engine: Clear syntax for writing validations
52c6b35 host-deploy: appropriate message for kdump detection
375c554 core: Use force detach only on Data SD
8f02a74 engine: no need to save vm_static on run once
c6851e4 ui: remove Escape characters for TextBoxLabel
5e37215 ui: improve hot plug cpu wording
028c175 engine: Rename providerId to networkProviderId in add/update host 
actions
5b4d20c engine: Configure unique host name on neutron.conf
90eb1d2 extapi: aaa: add auth result to credential change
994996b backend: Add richer formatting of migration duration
98e293b core: handle fence agent power wait param on stop
bb9ecfb engine: Clear eclipse warning in AddVdsCommand
36dd138 aaa: always use engine context for queries
24f0cf8 restapi: rsdl_metadata - quota.id in add disk
7161ac0 tools: Expose VmGracefulShutdownTimeout option to engine-config
8255f44 aaa: more fixes to command context propgation
b8feb57 restapi: missing vms link under affinity groups
f056835 core, engine: Fix HotPlugCpuSupported config value
4492ef7 core, engine: Avoid migration in ppc64
2710b07 ui: avoid casting warnings on findbugs
bcb156c core: adding missing command constructor
92c1522 core: Changing Host free space threshold
a0d000b webadmin: column sorting support for Disks sub-tabs
5a0c76f webadmin: column sorting support for Storage sub-tabs
14a625e webadmin: column sorting support for Disks tabs
a32d199 core: DiskConditionField - extract verbs to constants
48cc09d core: fixed searching disks by creation date


[ovirt-devel] ovirt-engine 3.5 branched

2014-07-01 Thread Yedidyah Bar David
Hi all,

ovirt-engine-3.5 was branched from master.

The commit used was 0b16ed7a76d3fbe106e15263211f1a64f075df0c :
core: validation error on edit instance type

This is the same commit used to build the beta build that is used in the test day
we are having today.

Developers: note that since this commit, new changes have been committed to master.
Please cherry-pick/push to 3.5 any changes that should be there.

Best regards,
-- 
Didi


[ovirt-devel] oVirt 3.5 Test Day 1 Results

2014-07-01 Thread Martin Perina
Hi,

I tested these features:

  1073453 - OVIRT35 - [RFE] add Debian 7 to the list of operating systems when creating a new vm
Info: Debian 7 is listed in the OS list in the New VM dialog
Result: success

  1047624 - OVIRT35 - [RFE] support BIOS boot device menu
Info: The boot menu has to be enabled in the Edit VM dialog (Boot Options tab, Enable boot menu). Once enabled,
      the user can press F12 and select a boot device in the same way as in a standard BIOS
Result: success


During testing I found these issues:

  1) Engine installation problem on CentOS 6.5
     Package ovirt-engine-userportal-3.5.0-0.0.master.20140629172257.git0b16ed7.el6.noarch.rpm is not signed.
     After disabling the GPG signature check in /etc/yum.repos.d/ovirt-3.5.repo, installation continues fine.

  2) Engine installation problem on CentOS 6.5
     Engine indirectly depends on the batik package, but xmlgraphics-batik is installed instead of it.
     I created a bug [1]

  3) Packages ioprocess and python-ioprocess are not available in the oVirt repository for the 3.5 beta
     (even though they are available in the master-snapshot-static repository).
     Created a ticket for infra: https://fedorahosted.org/ovirt/ticket/205
 


Martin

[1] https://bugzilla.redhat.com/1114921


[ovirt-devel] ovirt-node-plugin-hosted-engine is missing in 3.5-pre

2014-07-01 Thread Fabian Deutsch
Hey,

I just noted that the packages for ovirt-node-plugin-hosted-engine are missing
in the 3.5 repos. I'm now on it to get them into shape.
This also means that the current (to be released?) ovirt-node-iso rpm is missing
this plugin as well :-/


- fabian


Re: [ovirt-devel] Test day: gluster install

2014-07-01 Thread Kanagaraj Mayilsamy


- Original Message -
 From: Piotr Kliczewski piotr.kliczew...@gmail.com
 To: Kanagaraj Mayilsamy kmayi...@redhat.com
 Cc: devel@ovirt.org
 Sent: Tuesday, July 1, 2014 3:52:59 PM
 Subject: Re: [ovirt-devel] Test day: gluster install
 
 On Tue, Jul 1, 2014 at 11:56 AM, Kanagaraj Mayilsamy
 kmayi...@redhat.com wrote:
  This can happen if glusterd service is down.
 
  What does service glusterd status say?
 
  If you find this down, start it by service glusterd start
 
 
 I checked the status of this service and it was active.

What's the output of 'gluster peer status'?
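For what it's worth, a minimal parser for that plain-text output might look like this sketch. The Hostname:/Uuid:/State: field layout is assumed from the usual gluster CLI output, not taken from this thread:

```python
# Sketch: parse plain `gluster peer status` output into peer dicts.
# The field layout (Hostname:/Uuid:/State: lines) is assumed and may
# vary across gluster versions.

def parse_peer_status(text):
    peers, cur = [], {}
    for line in text.splitlines():
        line = line.strip()
        if ':' not in line:
            continue
        key, _, value = line.partition(':')
        key = key.strip().lower()
        if key in ('hostname', 'uuid', 'state'):
            # A new Hostname line starts the next peer record.
            if key == 'hostname' and cur:
                peers.append(cur)
                cur = {}
            cur[key] = value.strip()
    if cur:
        peers.append(cur)
    return peers
```

A healthy peer should show up with a State ending in "(Connected)"; anything else points at the daemon-connection failure seen in the traceback.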


 
 


[ovirt-devel] oVirt Node Weekly Meeting Minutes - July 1 2014

2014-07-01 Thread Fabian Deutsch
Minutes:http://ovirt.org/meetings/ovirt/2014/ovirt.2014-07-01-13.02.html
Minutes (text): http://ovirt.org/meetings/ovirt/2014/ovirt.2014-07-01-13.02.txt
Log:
http://ovirt.org/meetings/ovirt/2014/ovirt.2014-07-01-13.02.log.html



=
#ovirt: oVirt Node Weekly Meeting
=


Meeting started by fabiand at 13:02:53 UTC. The full logs are available
at http://ovirt.org/meetings/ovirt/2014/ovirt.2014-07-01-13.02.log.html
.



Meeting summary
---
* Agenda  (fabiand, 13:04:48)
  * Stable Release (3.0.6)  (fabiand, 13:05:01)
  * Next Release (3.1)  (fabiand, 13:05:06)
  * Hosted Engine Plugin  (fabiand, 13:05:11)
  * Other Items  (fabiand, 13:05:15)

* Action Item Review  (fabiand, 13:05:33)
  * Node team once again to do a review sprint  (fabiand, 13:05:49)
  * ~20 patches merged. ISO quite stable  (fabiand, 13:06:51)

* Stable Release (3.0.6)  (fabiand, 13:07:14)
  * 3.0.6 unlikely, rather focusing on 3.1  (fabiand, 13:08:12)

* Next release (3.1)  (fabiand, 13:08:17)
  * 3.1 snapshot in the pipe to be published in 3.5-pre repo
packages+iso  (fabiand, 13:09:27)

* Hosted Engine Plugin  (fabiand, 13:12:00)
  * rpms are missing in 3.5-pre repo  (fabiand, 13:17:21)
  * prevents testing of this feature  (fabiand, 13:17:26)
  * ACTION: rbarry to create a job to build
ovirt-node-plugin-hosted-engine  (fabiand, 13:22:05)

* Other Items  (fabiand, 13:23:34)
  * oVirt Virtual Appliance -- is not available for download.  (fabiand,
13:24:02)
  * LINK: https://fedorahosted.org/ovirt/ticket/188   (fabiand,
13:24:21)
  * apuimedo's persistence patches  (fabiand, 13:29:41)

Meeting ended at 13:38:43 UTC.




Action Items

* rbarry to create a job to build ovirt-node-plugin-hosted-engine




Action Items, by person
---
* rbarry
  * rbarry to create a job to build ovirt-node-plugin-hosted-engine
* **UNASSIGNED**
  * (none)




People Present (lines said)
---
* fabiand (98)
* eedri (10)
* apuimedo (8)
* rbarry (6)
* dcaro (3)
* ovirtbot (2)
* yzaslavs (1)
* Netbulae (1)
* danken (1)




Generated by `MeetBot`_ 0.1.4

.. _`MeetBot`: http://wiki.debian.org/MeetBot


[ovirt-devel] ovirt 3.5 Test day 1 - vdsm-tool configure libvirt with python code

2014-07-01 Thread Yedidyah Bar David
Hi all,

I was assigned to test [1], which was fixed by [2], which pointed
at [3].

Most things worked as expected.

Issues I noticed:

* The table says that vdsClient, with or without '-s', should work against
vdsm with ssl=true or ssl=false. In my tests '-s' worked with ssl=true and
without '-s' worked with ssl=false, but the other combinations didn't work.

* The vdsm-tool package does not depend on vdsm, but
'vdsm-tool configure --force' fails without it.

I didn't open bugs for these because they seem insignificant.

[1] https://bugzilla.redhat.com/show_bug.cgi?id=1069636
[2] http://gerrit.ovirt.org/27298
[3] http://www.ovirt.org/Configure_libvirt_testing_matrix
-- 
Didi


Re: [ovirt-devel] oVirt 3.5.0 Beta is now available for testing -- Node update

2014-07-01 Thread Fabian Deutsch
- Original Message -
 The oVirt team is pleased to announce that the 3.5.0 Beta is now
 available for testing.
 
 Feel free to join us testing it!
 
 You'll find all needed info for installing it on the release notes page,
 already available on the wiki [1].
 
 A new oVirt Live iso is already available for testing[2] including all
 available updates from CentOS.
 An oVirt Guest Tools iso is now available too[3].
 
 A new oVirt Node build will be available soon as well.

Hey,

a fresh oVirt Node build is also available now:

http://resources.ovirt.org/pub/ovirt-3.5-pre/iso/ovirt-node-iso-3.5.0.ovirt35.20140630.el6.iso

To circumvent some SELinux issues, please append enforcing=0 to the kernel 
commandline when booting the ISO.

The ISO is missing the plugin for Hosted Engine, but we hope to deliver an iso 
which includes this plugin shortly.

Greetings
fabian


[ovirt-devel] Test day: help testing hosted engine on ovirt node

2014-07-01 Thread Omer Frenkel
Hi,
I was assigned to test this topic, but I don't see any info on how to start.
Looking at the wiki: http://www.ovirt.org/Node_Hosted_Engine there is no info,
nor on the HE how-to wiki: http://www.ovirt.org/Hosted_Engine_Howto

Should I build the node myself on a Fedora host, and then run the hosted engine
setup as described in the how-to?
What is the expected flow for a user that wants to start using oVirt
with hosted engine and oVirt Node?

Thanks,
Omer.


[ovirt-devel] oVirt 3.5 test day results

2014-07-01 Thread Tal Nisan

Hi,
Today I have tested the following features:

1090798 - [RFE] Admin GUI - Add host uptime information to the General tab

1108861 - [RFE] Support logging of commands parameters

1090808 - [RFE] Ability to dismiss alerts and events from web-admin portal


Results:


1090798 - [RFE] Admin GUI - Add host uptime information to the General tab


The host boot time appears on the host's General subtab (see attached
screenshot). Note that the boot time shown is in the timezone of the host;
perhaps it would be wise to add documentation about it, as the host
and the engine might be in two different time zones (FYI Dima)




1108861 - [RFE] Support logging of commands parameters

When changing the log threshold of oVirt to DEBUG, all the commands tested
included a dump of their parameters, for instance:

2014-07-01 14:45:59,908 INFO 
[org.ovirt.engine.core.bll.AddImageFromScratchCommand] 
(http--0.0.0.0-8080-1) [766feacc] Running command: 
AddImageFromScratchCommand(MasterVmId = 
d2e5e41f-241b-4832-8f85-85382216bfa1, DiskInfo = 
org.ovirt.engine.core.common.businessentities.DiskImage@169bd050, 
ShouldRemainIllegalOnFailedExecution = false, ImageId = 
----, VmSnapshotId = 
27ea8f58-6742-4075-b44f-349b7556177c, DiskAlias = mlip_Disk3, 
DestinationImageId = ----, 
OldLastModifiedValue = null, ImageGroupID = 
----, ImportEntity = false, LeaveLocked 
= false, Description = null, StorageDomainId = 
0355997e-5b39-48ff-92aa-6ffb2d91e526, QuotaId = null, IsInternal = 
false, VdsId = null, StoragePoolId = 
9ada25ba-5156-48a9-a995-08ac9882abc6, ForceDelete = false) internal: 
true. Entities affected :  ID: 0355997e-5b39-48ff-92aa-6ffb2d91e526 
Type: Storage



1090808 - [RFE] Ability to dismiss alerts and events from web-admin portal


The Alerts tab included an X icon that, upon click, made the alert
disappear; the right mouse button context menu included a Dismiss menu
item that did the same; and there was a Clear All button that restored all
dismissed alerts.


Two notes (FYI Ravi):
1. The original bug description refers to alerts and events; the dismiss
option exists via webadmin only for alerts and not for events. Was this
on purpose?
2. Although it was not explained in the bug, the Clear All button to my
understanding is supposed to dismiss all alerts; instead it restores all
the dismissed alerts and makes them reappear. Is this the wanted behavior?

[ovirt-devel] oVirt 3.5 test day 1 results

2014-07-01 Thread Douglas Schilling Landgraf

Hi,

This time I have tested the below RFE:

[RFE] Change the Slot field to Service Profile when cisco_ucs is 
selected as the fencing type.

https://bugzilla.redhat.com/show_bug.cgi?id=1090803

Test Data
===
Running oVirt 3.5 with Power Management enabled on hosts, when selecting
Type cisco_ucs the Slot field gets replaced by Service Profile, as the RFE
requested. In the same test under 3.4 the field is not replaced.

I would say this RFE is 100% accomplished.



--
Cheers
Douglas