Re: [ovirt-users] VM failed to start | Bad volume specification

2015-03-18 Thread Punit Dambiwal
Hi Michal,

Would you mind letting me know what could possibly have been messed up? I will
check and try to resolve it. I am still working with the gluster community to
resolve this issue...

But the ovirt/gluster setup is quite straightforward, so how can it get messed
up by a reboot? If it can be messed up by a reboot, then it does not seem like
a good and stable technology for production storage...

Thanks,
Punit

On Wed, Mar 18, 2015 at 3:51 PM, Michal Skrivanek 
michal.skriva...@redhat.com wrote:


 On Mar 18, 2015, at 03:33 , Punit Dambiwal hypu...@gmail.com wrote:

  Hi,
 
  Is there any one from community can help me to solve this issue...??
 
  Thanks,
  Punit
 
  On Tue, Mar 17, 2015 at 12:52 PM, Punit Dambiwal hypu...@gmail.com
 wrote:
  Hi,
 
  I am facing one strange issue with ovirt/glusterfs... I still haven't
 found out whether this issue is related to glusterfs or oVirt.
 
  Ovirt :- 3.5.1
  Glusterfs :- 3.6.1
  Host :- 4 hosts (compute + storage)...each server has 24 bricks
  Guest VM :- more than 100
 
  Issue :- When I deployed this cluster the first time, it worked well for me
 (all the guest VMs were created and ran successfully)...but suddenly one day
 one of my host nodes rebooted, and now none of the VMs can boot up; they
 fail with the following error: Bad Volume Specification.
 
  VMId :- d877313c18d9783ca09b62acf5588048
 
  VDSM Logs :- http://ur1.ca/jxabi

 you've got timeouts while accessing storage…so I guess something got
 messed up on reboot, it may also be just a gluster misconfiguration…
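
 As a first diagnostic, a minimal sketch of gluster-side checks worth running
 on one of the hosts (the volume name "data" is an assumption; substitute
 your own):

   # Are all bricks online, and is anything still pending self-heal?
   gluster volume status data
   gluster volume heal data info
   # Look for disconnects/timeouts around the reboot window:
   grep -iE 'timed out|disconnect' /var/log/glusterfs/*.log | tail -20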

  Engine Logs :- http://ur1.ca/jxabv
 
  
  [root@cpu01 ~]# vdsClient -s 0 getVolumeInfo
 e732a82f-bae9-4368-8b98-dedc1c3814de 0002-0002-0002-0002-0145
 6d123509-6867-45cf-83a2-6d679b77d3c5 9030bb43-6bc9-462f-a1b9-f6d5a02fb180
  status = OK
  domain = e732a82f-bae9-4368-8b98-dedc1c3814de
  capacity = 21474836480
  voltype = LEAF
  description =
  parent = ----
  format = RAW
  image = 6d123509-6867-45cf-83a2-6d679b77d3c5
  uuid = 9030bb43-6bc9-462f-a1b9-f6d5a02fb180
  disktype = 2
  legality = LEGAL
  mtime = 0
  apparentsize = 21474836480
  truesize = 4562972672
  type = SPARSE
  children = []
  pool =
  ctime = 1422676305
  -
 
  I opened the same thread earlier but didn't get a definitive answer to this
 issue, so I am reopening it...
 
  https://www.mail-archive.com/users@ovirt.org/msg25011.html
 
  Thanks,
  Punit
 
 
 


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Ovirt resilience policy / HA

2015-03-18 Thread Omer Frenkel


- Original Message -
 From: Guillaume Penin guilla...@onlineacid.com
 To: users@ovirt.org
 Sent: Monday, March 16, 2015 10:34:54 PM
 Subject: [ovirt-users] Ovirt resilience policy / HA
 
 Hi all,
 
 I'm building a test ovirt (3.5.1) infrastructure, based on 3 ovirt nodes
 and 1 ovirt engine.
 
 Everything runs (almost) fine, but I don't exactly understand the
 interaction between the resilience policy (Cluster) and HA (VM).
 
 = What I understand, in case of host failure :
 
 - Setting resilience policy to :
 
  - Migrate Virtual Machines = All VMs (HA and non HA) will be
 started on another host.
  - Migrate only Highly Available Virtual Machines = HA VMs only will
 be started on another host.
  - Do Not Migrate Virtual Machines = HA and non HA VMs won't be
 started on another host.
 
 = In practice :
 
  - No matter which parameter I use in the resilience policy, only HA VMs
 will be started on another host in case of a host failure.
 
 Is this the expected behaviour ? Am I misunderstanding the way it works
 ?


there are 2 types of host failure:
1 - power/network/vdsm service failure - these lead to a state where the engine and host 
cannot communicate; the host will move to 'non-responsive' and will be fenced 
(power mgmt action).
in this case, only HA VMs will be restarted, once the engine is sure it is safe.

2 - software issues like storage connectivity - in this case, the engine can 
communicate with the host and decide it is not fulfilling the cluster/DC 
requirements;
the host will move to 'not operational' and then the engine will live-migrate (not 
restart) VMs according to your choice of resilience policy.
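
For reference, the resilience policy is a cluster property and can also be set 
through the REST API; a hedged sketch (the on_error values below are from the 
3.5-era API as far as I recall -- verify against your /api/clusters output; 
credentials, host name and cluster UUID are illustrative):

  curl -k -u admin@internal:password -X PUT \
       -H 'Content-Type: application/xml' \
       -d '<cluster><error_handling><on_error>migrate_highly_available</on_error></error_handling></cluster>' \
       'https://engine.example.com/api/clusters/<cluster-uuid>'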

so what did you do when you tested 'host failure'?
according to your question I assume number 1, and this is what you got.

let me know if it helps
Omer.

 Kind regards,
 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users
 
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Failed to read VM '[Empty Name]' OVF, it may be corrupted

2015-03-18 Thread Tomas Jelinek
Hi Jon,

could you please attach the .ovf files here? 
Somewhere in the export domain you should have files with the .ovf extension; 
each is an XML file describing a VM. I'd say they will be corrupted.
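
To locate and sanity-check them, a minimal sketch (the mount path is an 
assumption -- point it at wherever the export domain's NFS share is mounted):

  # On a v1 export domain each VM's OVF sits under master/vms/<vm-uuid>/;
  # xmllint exits non-zero for any file that is not well-formed XML:
  find /mnt/export-domain -name '*.ovf' -exec xmllint --noout {} \;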

Thanx, 
Tomas

- Original Message -
 From: Jon Archer j...@rosslug.org.uk
 To: users@ovirt.org
 Sent: Wednesday, March 18, 2015 12:36:14 AM
 Subject: [ovirt-users] Failed to read VM '[Empty Name]' OVF,  it may be 
 corrupted
 
 Hi all,
 
 seeing a strange issue here. I'm currently in the process of migrating
 from one ovirt setup to another and having trouble with the
 export/import process.
 
 The new setup is a 3.5 install with hosted engine and glusterfs; the old
 one is running on a nightly release (not too recent).
 
 I have brought up an NFS export on the existing storage on the old
 setup, successfully exported a number of VMs and imported them onto the
 new system.
 
 However, when I came to move the last 4 VMs, I hit an issue where,
 after attaching the export storage to the new setup, I see no VMs in the
 export storage to import, and I see this in the log:
 2015-03-17 23:30:56,742 INFO
 [org.ovirt.engine.core.vdsbroker.irsbroker.GetVmsInfoVDSCommand]
 (ajp--127.0.0.1-8702-8) START, GetVmsInfoVDSCommand( storagePoolId =
 0002-0002-0002-0002-0209, ignoreFailoverLimit = false,
 storageDomainId = 86f85b1d-a9ef-4106-a4bf-eae19722d28a, vmIdList =
 null), log id: e2a32ac
 2015-03-17 23:30:56,766 INFO
 [org.ovirt.engine.core.vdsbroker.irsbroker.GetVmsInfoVDSCommand]
 (ajp--127.0.0.1-8702-8) FINISH, GetVmsInfoVDSCommand, log id: e2a32ac
 2015-03-17 23:30:56,798 ERROR
 [org.ovirt.engine.core.utils.ovf.OvfManager] (ajp--127.0.0.1-8702-8)
 Error parsing OVF due to 2
 2015-03-17 23:30:56,818 WARN
 [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
 (ajp--127.0.0.1-8702-8) Correlation ID: null, Call Stack: null, Custom
 Event ID: -1, Message: Failed to read VM '[Empty Name]' OVF, it may be
 corrupted
 2015-03-17 23:30:56,867 ERROR
 [org.ovirt.engine.core.utils.ovf.OvfManager] (ajp--127.0.0.1-8702-8)
 Error parsing OVF due to 2
 2015-03-17 23:30:56,884 WARN
 [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
 (ajp--127.0.0.1-8702-8) Correlation ID: null, Call Stack: null, Custom
 Event ID: -1, Message: Failed to read VM '[Empty Name]' OVF, it may be
 corrupted
 2015-03-17 23:30:56,905 ERROR
 [org.ovirt.engine.core.utils.ovf.OvfManager] (ajp--127.0.0.1-8702-8)
 Error parsing OVF due to 2
 2015-03-17 23:30:56,925 WARN
 [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
 (ajp--127.0.0.1-8702-8) Correlation ID: null, Call Stack: null, Custom
 Event ID: -1, Message: Failed to read VM '[Empty Name]' OVF, it may be
 corrupted
 2015-03-17 23:30:56,943 ERROR
 [org.ovirt.engine.core.utils.ovf.OvfManager] (ajp--127.0.0.1-8702-8)
 Error parsing OVF due to 2
 2015-03-17 23:30:56,992 WARN
 [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
 (ajp--127.0.0.1-8702-8) Correlation ID: null, Call Stack: null, Custom
 Event ID: -1, Message: Failed to read VM '[Empty Name]' OVF, it may be
 corrupted
 2015-03-17 23:30:57,012 ERROR
 [org.ovirt.engine.core.utils.ovf.OvfManager] (ajp--127.0.0.1-8702-8)
 Error parsing OVF due to 2
 2015-03-17 23:30:57,033 WARN
 [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
 (ajp--127.0.0.1-8702-8) Correlation ID: null, Call Stack: null, Custom
 Event ID: -1, Message: Failed to read VM '[Empty Name]' OVF, it may be
 corrupted
 2015-03-17 23:30:57,071 ERROR
 [org.ovirt.engine.core.utils.ovf.OvfManager] (ajp--127.0.0.1-8702-8)
 Error parsing OVF due to 2
 2015-03-17 23:30:57,091 WARN
 [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
 (ajp--127.0.0.1-8702-8) Correlation ID: null, Call Stack: null, Custom
 Event ID: -1, Message: Failed to read VM '[Empty Name]' OVF, it may be
 corrupted
 
 
 I've brought up new export storage domains on both the new and old
 clusters (and on a separate storage array for that matter), all resulting
 in the same messages.
 
 Anyone any thoughts on these errors?
 
 Thanks
 
 Jon
 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users
 
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] ManageIQ and Ovirt

2015-03-18 Thread Itamar Heim

On 03/17/2015 01:09 PM, Michal Skrivanek wrote:


On Mar 14, 2015, at 16:43 , Christian Ehart ehart...@gmail.com wrote:


Hi,

can someone from R&D please check the below issue/RCA.

ManageIQ, when performing a Smart State Analysis, tries to access an OVF file 
on oVirt 3.5.x for a VM.
I think oVirt changed its behavior in 3.5 by introducing the OVF_STORE disk, 
so maybe our issue below is related to it.


Hi,
I'm curious, what's the purpose? It's supposed to be internal; there's an API 
for access (not sure if there is though, if not it should be added :)


it's for smart state analysis (aka fleecing) - they mount the storage 
domain itself into the appliance read-only and analyze the VMs.
they should be compatible with 3.5 (though indeed this error message is 
a cause for concern).
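
Conceptually it is little more than a read-only mount of the storage domain 
plus a walk of the images tree; a hedged sketch with illustrative server and 
paths, not ManageIQ's actual code:

  mount -t nfs -o ro,nolock storage.example.com:/export/data /mnt/sd
  # disk images live under <sd-uuid>/images/<image-group-uuid>/:
  ls /mnt/sd/*/images/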




Thanks,
michal



more info at 
http://talk.manageiq.org/t/no-results-from-smartstate-analysis-in-ovirt-environment/585/15

thx,
Christian


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Running in db and not running in VDS

2015-03-18 Thread Punit Dambiwal
Hi Omer,

I am still facing the same issue...would you mind helping me here?

Thanks,
Punit

On Sun, Mar 1, 2015 at 10:52 PM, Punit Dambiwal hypu...@gmail.com wrote:

 Hi Omer,

 Yes...it's the log for the SPM also ...

 Thanks,
 Punit

 On Sun, Mar 1, 2015 at 10:05 PM, Omer Frenkel ofren...@redhat.com wrote:



 - Original Message -
  From: Punit Dambiwal hypu...@gmail.com
  To: Omer Frenkel ofren...@redhat.com
  Cc: users@ovirt.org, Martin Pavlik mpav...@redhat.com, Martin
 Perina mper...@redhat.com
  Sent: Thursday, February 26, 2015 6:18:36 AM
  Subject: Re: [ovirt-users] Running in db and not running in VDS
 
  Hi Omer,
 
  Please find the attached logs

 looks like some communication issue with the SPM.
 is the attached vdsm.log the log of the SPM for that time?
 I could not see the calls there..

 2015-02-26 12:12:17,894 ERROR
 [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand]
 (org.ovirt.thread.pool-8-thread-25) [180029b1]
 IrsBroker::Failed::DeleteImageGroupVDS due to: IRSErrorException:
 IRSGenericException: IRSErrorException: Failed to DeleteImageGroupVDS,
 error = Connection timed out, code = 100
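
 A hedged way to confirm which host currently holds SPM and to pull the
 matching log window (take the pool UUID from the engine log; the verb is
 the same vdsClient used elsewhere in this thread):

   vdsClient -s 0 getSpmStatus <pool-uuid>
   grep DeleteImageGroup /var/log/vdsm/vdsm.log | tail -20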

 
  Vm name :- punit
 
  [image: Inline image 1]
 
  On Wed, Feb 25, 2015 at 5:20 PM, Omer Frenkel ofren...@redhat.com
 wrote:
 
  
  
   - Original Message -
From: Punit Dambiwal hypu...@gmail.com
To: Omer Frenkel ofren...@redhat.com
Cc: users@ovirt.org, Martin Pavlik mpav...@redhat.com, Martin
   Perina mper...@redhat.com
Sent: Tuesday, February 24, 2015 8:24:07 PM
Subject: Re: [ovirt-users] Running in db and not running in VDS
   
Hi Omer,
   
but when I destroy the VM, the VM is destroyed successfully but its disk
   doesn't get removed, and when I try to manually remove the disk from
 ovirt, it fails as well
   
  
   can you please attach engine.log and vdsm.log for the removeVm ?
  
Thanks,
Punit
   
On Tue, Feb 24, 2015 at 4:07 PM, Omer Frenkel ofren...@redhat.com
   wrote:
   


 - Original Message -
  From: Punit Dambiwal hypu...@gmail.com
  To: users@ovirt.org, Martin Pavlik mpav...@redhat.com,
 Martin
 Perina mper...@redhat.com
  Sent: Tuesday, February 24, 2015 6:11:03 AM
  Subject: [ovirt-users] Running in db and not running in VDS
 
  Hi,
 
  VM failed to create, failed to reboot, and threw the errors :-
 
  2015-02-23 17:27:11,879 INFO
  [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
  (DefaultQuartzScheduler_Worker-64) [1546215a] VM
  8325d21f97888adff6bd6b70bfd6c13b
   (b74d945c-c9f8-4336-a91b-390a11f07650)
 is
  running in db and not running in VDS compute11
 

 this is OK; it means that the engine discovered a VM that moved
 to down.
 look at the log for the reason it failed.

  and for delete VM :-
 
 
 
  2015-02-23 17:21:44,625 WARN
 
   [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
  (ajp--127.0.0.1-8702-57) [4534b0a2] Correlation ID: 6769,
 Job ID:
  76d135de-ef8d-41dc-9cc0-855134ededce, Call Stack: null, Custom
 Event
   ID:
 -1,
  Message: VM 1feac62b8cd19fcc2ff296957adc8a4a has been removed,
 but
   the
  following disks could not be removed: Disk1. These disks will
 appear
   in
 the
  main disks tab in illegal state, please remove manually when
   possible.
 
 
 
 
  Thanks,
 
  Punit
 
  ___
  Users mailing list
  Users@ovirt.org
  http://lists.ovirt.org/mailman/listinfo/users
 

   
  
 



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] [ACTION REQUIRED] oVirt 3.5.2 and 3.5.3 status (building RC2 today!)

2015-03-18 Thread Sandro Bonazzola
Hi,
we still have 5 open blockers for 3.5.2[1]:

Bug ID  Whiteboard  Status  Summary
1161012 infra   POSTtask cleaning utility  should erase 
commands that have running tasks
1187244 network POST[RHEL  7.0 + 7.1] Host configure with 
DHCP is losing connectivity after some time - dhclient is not running
1177220 storage ASSIGNED[BLOCKED] Failed to Delete First 
snapshot with live merge
1196327 virtASSIGNED[performance] bad getVMList output 
creates unnecessary calls from Engine
1202360 virtPOST[performance] bad getVMList output 
creates unnecessary calls from Engine

And 2 dependencies on libvirt not yet fixed:
Bug ID  Status  Summary
1199182 POST2nd active commit after snapshot triggers qemu failure
1199036 POSTLibvirtd was restarted when do active blockcommit while 
there is a blockpull job running

ACTION: Assignee to provide ETA for the blocker bug.

Despite the blocker bug count, we're going to build RC2 today 2015-03-18 at 
12:00 UTC for allowing the verification of fixed bugs and testing on
CentOS 7.1.
If you're going to test this release candidate on CentOS please be sure to have 
the CR[2] repository enabled and system fully updated to CentOS 7.1.

We still have 7 bugs in MODIFIED and 31 on QA[3]:

MODIFIEDON_QA   Total
infra   2   10  12
integration 0   2   2
network 0   2   2
node0   1   1
sla 2   1   3
storage 3   11  14
virt0   4   4
Total   7   31  38

ACTION: Testers: you're welcome to verify bugs currently ON_QA.

All remaining bugs not marked as blockers have been moved to 3.5.3.
A release management entry has been added for tracking the schedule of 3.5.3[4]
A bug tracker [5] has been created for 3.5.3.
We have 32 bugs currently targeted to 3.5.3[6]:

Whiteboard  NEW ASSIGNEDPOSTTotal
docs2   0   0   2
external1   0   0   1
gluster 4   0   1   5
infra   2   2   0   4
node2   0   1   3
ppc 0   0   1   1
sla 4   0   0   4
storage 8   0   0   8
ux  1   0   1   2
virt1   0   1   2
Total   25  2   5   32


ACTION: Maintainers / Assignee: to review the bugs targeted to 3.5.3 ensuring 
they're correctly targeted.
ACTION: Maintainers: to fill release notes for 3.5.2, the page has been created 
and updated here [7]
ACTION: Testers: please add yourself to the test page [8]

7 Patches have been merged for 3.5.3 and not backported to 3.5.2 branch 
according to Change-Id

commit 6b5a8169093357656d3e638c7018ee516d1f44bd
Author: Maor Lipchuk mlipc...@redhat.com
Date:   Thu Feb 19 14:40:23 2015 +0200
core: Add validation when Storage Domain is blocked.
Change-Id: I9a7c12609b3780c74396dab6edf26e4deaff490f

commit 7fd4dca0a7fb15d3e9179457f1f2aea6c727d325
Author: Maor Lipchuk mlipc...@redhat.com
Date:   Sun Mar 1 17:17:16 2015 +0200
restapi: reconfigure values on import data Storage Domain.
Change-Id: I2ef7baa850bd6da08ae27d41ebe9e4ad525fbe9b

commit 4283f755e6b77995247ecb9ddd904139bc8c322c
Author: Maor Lipchuk mlipc...@redhat.com
Date:   Tue Mar 10 12:05:05 2015 +0200
restapi: Quering FCP unregistered Storage Domains
Change-Id: Iafe2f2afcd0e6e68ad2054c857388acc30a7

commit a3d8b687620817b38a64a3917f4440274831bca3
Author: Maor Lipchuk mlipc...@redhat.com
Date:   Wed Feb 25 17:00:47 2015 +0200
core: Add fk constraint on vm_interface_statistics
Change-Id: I53cf2737ef91cf967c93990fcb237f6c4e12a8f8

commit c8caaceb6b1678c702961d298b3d6c48183d9390
Author: emesika emes...@redhat.com
Date:   Mon Mar 9 18:01:58 2015 +0200
core: do not use distinct if sort expr have func
Change-Id: I7c036b2b9ee94266b6e3df54f2c50167e454ed6a

commit 4332194e55ad40eee423e8611eceb95fd59dac7e
Author: Vered Volansky vvola...@redhat.com
Date:   Thu Mar 12 17:38:35 2015 +0200
webadmin: Fix punctuation in threshold warnings
Change-Id: If30f094e52f42b78537e215a2699cf74c248bd83

commit 773f2a108ce18e0029f864c8748d7068b71f8ff3
Author: Maor Lipchuk mlipc...@redhat.com
Date:   Sat Feb 28 11:37:26 2015 +0200
core: Add managed devices to OVF
Change-Id: Ie0e912c9b2950f1461ae95f4704f18b818b83a3b

ACTION: Authors please verify they're not meant to be targeted to 3.5.2.


[1] https://bugzilla.redhat.com/1186161
[2] http://mirror.centos.org/centos/7/cr/x86_64/
[3] http://goo.gl/UEVTCf
[4] http://www.ovirt.org/OVirt_3.5.z_Release_Management#oVirt_3.5.3
[5] https://bugzilla.redhat.com/1198142

Re: [ovirt-users] ManageIQ and Ovirt

2015-03-18 Thread Itamar Heim

On 03/18/2015 09:57 AM, Michal Skrivanek wrote:


On Mar 18, 2015, at 08:55 , Itamar Heim ih...@redhat.com wrote:


On 03/17/2015 01:09 PM, Michal Skrivanek wrote:


On Mar 14, 2015, at 16:43 , Christian Ehart ehart...@gmail.com wrote:


Hi,

can someone from R&D please check the below issue/RCA.

ManageIQ, when performing a Smart State Analysis, tries to access an OVF file 
on oVirt 3.5.x for a VM.
I think oVirt changed its behavior in 3.5 by introducing the OVF_STORE disk, 
so maybe our issue below is related to it.


Hi,
I'm curious, what's the purpose? It's supposed to be internal; there's an API 
for access (not sure if there is though, if not it should be added :)


it's for smart state analysis (aka fleecing) - they mount the storage domain 
itself into the appliance read-only and analyze the VMs.
they should be compatible with 3.5 (though indeed this error message is a cause 
for concern).


I'm sure it's not supposed to cause any harm… but doesn't it go around the API 
and depend on internal implementation…?


trust me, they want to get rid of this more than we do (think about them 
dealing with LVM metadata changing during the scan for block storage). 
they are just waiting for us to provide a streaming disk access API...


hopefully the new upload/download API in 3.6 will solve this.







Thanks,
michal



more info at 
http://talk.manageiq.org/t/no-results-from-smartstate-analysis-in-ovirt-environment/585/15

thx,
Christian


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users







___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] [ovirt-devel] [ACTION REQUIRED] oVirt 3.5.2 and 3.5.3 status (building RC2 today!)

2015-03-18 Thread Oved Ourfali

On Mar 18, 2015 9:57 AM, Sandro Bonazzola sbona...@redhat.com wrote:

 Hi, 
 we still have 5 open blockers for 3.5.2[1]: 

 Bug ID Whiteboard Status Summary 
 1161012 infra POST task cleaning utility  should erase commands that have 
 running tasks 

Simone, your latest comment implies that it is working now with the latest 
patches. All of them appear in Bugzilla as merged. Should it move to MODIFIED? 

 1187244 network POST [RHEL  7.0 + 7.1] Host configure with DHCP is losing 
 connectivity after some time - dhclient is not running 
 1177220 storage ASSIGNED [BLOCKED] Failed to Delete First snapshot with live 
 merge 
 1196327 virt ASSIGNED [performance] bad getVMList output creates unnecessary 
 calls from Engine 
 1202360 virt POST [performance] bad getVMList output creates unnecessary 
 calls from Engine 

 And 2 dependencies on libvirt not yet fixed: 
 Bug ID Status Summary 
 1199182 POST 2nd active commit after snapshot triggers qemu failure 
 1199036 POST Libvirtd was restarted when do active blockcommit while there is 
 a blockpull job running 

 ACTION: Assignee to provide ETA for the blocker bug. 

 Despite the blocker bug count, we're going to build RC2 today 2015-03-18 at 
 12:00 UTC for allowing the verification of fixed bugs and testing on 
 CentOS 7.1. 
 If you're going to test this release candidate on CentOS please be sure to 
 have the CR[2] repository enabled and system fully updated to CentOS 7.1. 

 We still have 7 bugs in MODIFIED and 31 on QA[3]: 

 MODIFIED ON_QA Total 
 infra 2 10 12 
 integration 0 2 2 
 network 0 2 2 
 node 0 1 1 
 sla 2 1 3 
 storage 3 11 14 
 virt 0 4 4 
 Total 7 31 38 

 ACTION: Testers: you're welcome to verify bugs currently ON_QA. 

 All remaining bugs not marked as blockers have been moved to 3.5.3. 
 A release management entry has been added for tracking the schedule of 
 3.5.3[4] 
 A bug tracker [5] has been created for 3.5.3. 
 We have 32 bugs currently targeted to 3.5.3[6]: 

 Whiteboard NEW ASSIGNED POST Total 
 docs 2 0 0 2 
 external 1 0 0 1 
 gluster 4 0 1 5 
 infra 2 2 0 4 
 node 2 0 1 3 
 ppc 0 0 1 1 
 sla 4 0 0 4 
 storage 8 0 0 8 
 ux 1 0 1 2 
 virt 1 0 1 2 
 Total 25 2 5 32 


 ACTION: Maintainers / Assignee: to review the bugs targeted to 3.5.3 ensuring 
 they're correctly targeted. 
 ACTION: Maintainers: to fill release notes for 3.5.2, the page has been 
 created and updated here [7] 
 ACTION: Testers: please add yourself to the test page [8] 

 7 Patches have been merged for 3.5.3 and not backported to 3.5.2 branch 
 according to Change-Id 

 commit 6b5a8169093357656d3e638c7018ee516d1f44bd 
 Author: Maor Lipchuk mlipc...@redhat.com 
 Date:   Thu Feb 19 14:40:23 2015 +0200 
     core: Add validation when Storage Domain is blocked. 
     Change-Id: I9a7c12609b3780c74396dab6edf26e4deaff490f 

 commit 7fd4dca0a7fb15d3e9179457f1f2aea6c727d325 
 Author: Maor Lipchuk mlipc...@redhat.com 
 Date:   Sun Mar 1 17:17:16 2015 +0200 
     restapi: reconfigure values on import data Storage Domain. 
     Change-Id: I2ef7baa850bd6da08ae27d41ebe9e4ad525fbe9b 

 commit 4283f755e6b77995247ecb9ddd904139bc8c322c 
 Author: Maor Lipchuk mlipc...@redhat.com 
 Date:   Tue Mar 10 12:05:05 2015 +0200 
     restapi: Quering FCP unregistered Storage Domains 
     Change-Id: Iafe2f2afcd0e6e68ad2054c857388acc30a7 

 commit a3d8b687620817b38a64a3917f4440274831bca3 
 Author: Maor Lipchuk mlipc...@redhat.com 
 Date:   Wed Feb 25 17:00:47 2015 +0200 
     core: Add fk constraint on vm_interface_statistics 
     Change-Id: I53cf2737ef91cf967c93990fcb237f6c4e12a8f8 

 commit c8caaceb6b1678c702961d298b3d6c48183d9390 
 Author: emesika emes...@redhat.com 
 Date:   Mon Mar 9 18:01:58 2015 +0200 
     core: do not use distinct if sort expr have func 
     Change-Id: I7c036b2b9ee94266b6e3df54f2c50167e454ed6a 

 commit 4332194e55ad40eee423e8611eceb95fd59dac7e 
 Author: Vered Volansky vvola...@redhat.com 
 Date:   Thu Mar 12 17:38:35 2015 +0200 
     webadmin: Fix punctuation in threshold warnings 
     Change-Id: If30f094e52f42b78537e215a2699cf74c248bd83 

 commit 773f2a108ce18e0029f864c8748d7068b71f8ff3 
 Author: Maor Lipchuk mlipc...@redhat.com 
 Date:   Sat Feb 28 11:37:26 2015 +0200 
     core: Add managed devices to OVF 
     Change-Id: Ie0e912c9b2950f1461ae95f4704f18b818b83a3b 

 ACTION: Authors please verify they're not meant to be targeted to 3.5.2. 


 [1] https://bugzilla.redhat.com/1186161 
 [2] http://mirror.centos.org/centos/7/cr/x86_64/ 
 [3] http://goo.gl/UEVTCf 
 [4] http://www.ovirt.org/OVirt_3.5.z_Release_Management#oVirt_3.5.3 
 [5] https://bugzilla.redhat.com/1198142 
 [6] 
 https://bugzilla.redhat.com/buglist.cgi?quicksearch=product%3Aovirt%20target_release%3A3.5.3
  
 [7] http://www.ovirt.org/OVirt_3.5.2_Release_Notes 
 [8] http://www.ovirt.org/Testing/oVirt_3.5.2_Testing 

 -- 
 Sandro Bonazzola 
 Better technology. Faster innovation. Powered by community collaboration. 
 See how it works at redhat.com 
 ___ 
 

Re: [ovirt-users] Running in db and not running in VDS

2015-03-18 Thread Punit Dambiwal
Hi Omer,

Whenever I try to remove the VM, in the middle of the remove procedure my
data-store goes unknown and then comes back up active, with the remove VM failed...

[image: Inline image 1]

I have attached logs here :- http://fpaste.org/199381/

Thanks,
Punit

On Wed, Mar 18, 2015 at 5:24 PM, Punit Dambiwal hypu...@gmail.com wrote:

 Hi Omer,

 I am still facing the same issue...would you mind helping me here?

 Thanks,
 Punit

 On Sun, Mar 1, 2015 at 10:52 PM, Punit Dambiwal hypu...@gmail.com wrote:

 Hi Omer,

 Yes...it's the log for the SPM also ...

 Thanks,
 Punit

 On Sun, Mar 1, 2015 at 10:05 PM, Omer Frenkel ofren...@redhat.com
 wrote:



 - Original Message -
  From: Punit Dambiwal hypu...@gmail.com
  To: Omer Frenkel ofren...@redhat.com
  Cc: users@ovirt.org, Martin Pavlik mpav...@redhat.com, Martin
 Perina mper...@redhat.com
  Sent: Thursday, February 26, 2015 6:18:36 AM
  Subject: Re: [ovirt-users] Running in db and not running in VDS
 
  Hi Omer,
 
  Please find the attached logs

 looks like some communication issue with the SPM.
 is the attached vdsm.log the log of the SPM for that time?
 I could not see the calls there..

 2015-02-26 12:12:17,894 ERROR
 [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand]
 (org.ovirt.thread.pool-8-thread-25) [180029b1]
 IrsBroker::Failed::DeleteImageGroupVDS due to: IRSErrorException:
 IRSGenericException: IRSErrorException: Failed to DeleteImageGroupVDS,
 error = Connection timed out, code = 100

 
  Vm name :- punit
 
  [image: Inline image 1]
 
  On Wed, Feb 25, 2015 at 5:20 PM, Omer Frenkel ofren...@redhat.com
 wrote:
 
  
  
   - Original Message -
From: Punit Dambiwal hypu...@gmail.com
To: Omer Frenkel ofren...@redhat.com
Cc: users@ovirt.org, Martin Pavlik mpav...@redhat.com, Martin
   Perina mper...@redhat.com
Sent: Tuesday, February 24, 2015 8:24:07 PM
Subject: Re: [ovirt-users] Running in db and not running in VDS
   
Hi Omer,
   
but when I destroy the VM, the VM is destroyed successfully but its disk
   doesn't get removed, and when I try to manually remove the disk from
 ovirt, it fails as well
   
  
   can you please attach engine.log and vdsm.log for the removeVm ?
  
Thanks,
Punit
   
On Tue, Feb 24, 2015 at 4:07 PM, Omer Frenkel ofren...@redhat.com
 
   wrote:
   


 - Original Message -
  From: Punit Dambiwal hypu...@gmail.com
  To: users@ovirt.org, Martin Pavlik mpav...@redhat.com,
 Martin
 Perina mper...@redhat.com
  Sent: Tuesday, February 24, 2015 6:11:03 AM
  Subject: [ovirt-users] Running in db and not running in VDS
 
  Hi,
 
  VM failed to create, failed to reboot, and threw the errors :-
 
  2015-02-23 17:27:11,879 INFO
  [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
  (DefaultQuartzScheduler_Worker-64) [1546215a] VM
  8325d21f97888adff6bd6b70bfd6c13b
   (b74d945c-c9f8-4336-a91b-390a11f07650)
 is
  running in db and not running in VDS compute11
 

 this is OK; it means that the engine discovered a VM that moved
 to down.
 look at the log for the reason it failed.

  and for delete VM :-
 
 
 
  2015-02-23 17:21:44,625 WARN
 
  
 [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
  (ajp--127.0.0.1-8702-57) [4534b0a2] Correlation ID: 6769,
 Job ID:
  76d135de-ef8d-41dc-9cc0-855134ededce, Call Stack: null, Custom
 Event
   ID:
 -1,
  Message: VM 1feac62b8cd19fcc2ff296957adc8a4a has been removed,
 but
   the
  following disks could not be removed: Disk1. These disks will
 appear
   in
 the
  main disks tab in illegal state, please remove manually when
   possible.
 
 
 
 
  Thanks,
 
  Punit
 
  ___
  Users mailing list
  Users@ovirt.org
  http://lists.ovirt.org/mailman/listinfo/users
 

   
  
 




___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] very long time to detach an export domain

2015-03-18 Thread Eli Mesika
Hi

Please attach relevant VDSM + engine logs

thanks 
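
For reference, the default locations (engine machine and hypervisors
respectively):

  tail -f /var/log/ovirt-engine/engine.log   # on the engine machine
  tail -f /var/log/vdsm/vdsm.log             # on the SPM host, while detaching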


- Original Message -
 From: Nathanaël Blanchet blanc...@abes.fr
 To: users@ovirt.org
 Sent: Wednesday, March 18, 2015 12:43:17 PM
 Subject: [ovirt-users] very long time to detach an export domain
 
 Hi all,
 
 I have no latency when attaching an existing v1 export domain to any
 datacenter. However, detaching the same export domain takes a while
 (more than 30 min), saying preparing for maintenance. Is this regular
 behaviour? If yes, what is being done at this step that takes so long?
 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users
 
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] ManageIQ and Ovirt

2015-03-18 Thread Michal Skrivanek

On Mar 18, 2015, at 08:55 , Itamar Heim ih...@redhat.com wrote:

 On 03/17/2015 01:09 PM, Michal Skrivanek wrote:
 
 On Mar 14, 2015, at 16:43 , Christian Ehart ehart...@gmail.com wrote:
 
 Hi,
 
 can someone from R&D please check the below issue/RCA.
 
 ManageIQ, when performing a Smart State Analysis, tries to access an OVF file 
 on oVirt 3.5.x for a VM.
 I think oVirt changed its behavior in 3.5 by introducing the OVF_STORE disk, 
 so maybe our issue below is related to it.
 
 Hi,
 I'm curious, what's the purpose? It's supposed to be internal; there's an 
 API for access (not sure if there is though, if not it should be added :)
 
 it's for smart state analysis (aka fleecing) - they mount the storage 
 domain itself into the appliance read-only and analyze the VMs.
 they should be compatible with 3.5 (though indeed this error message is a 
 cause for concern).

I'm sure it's not supposed to cause any harm… but doesn't it go around the API 
and depend on internal implementation…?

 
 
 Thanks,
 michal
 
 
 more info at 
 http://talk.manageiq.org/t/no-results-from-smartstate-analysis-in-ovirt-environment/585/15
 
 thx,
 Christian
 
 
 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users
 
 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users
 
 

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] very long time to detach an export domain

2015-03-18 Thread Nathanaël Blanchet

Hi all,

I have no latency when attaching an existing v1 export domain to any 
datacenter. However, detaching the same export domain takes a while 
(more than 30 min), saying preparing for maintenance. Is this regular 
behaviour? If yes, what is being done at this step that takes so long?

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] [ACTION REQUIRED] oVirt 3.5.2 and 3.5.3 status (building RC2 today!)

2015-03-18 Thread Francesco Romani


- Original Message -
 From: Sandro Bonazzola sbona...@redhat.com
 To: Eli Mesika emes...@redhat.com, Ido Barkan ibar...@redhat.com, 
 Adam Litke ali...@redhat.com,
 Francesco Romani from...@redhat.com, Maor Lipchuk 
 mlipc...@redhat.com, Vered Volansky
 vvola...@redhat.com, Eli Mesika emes...@redhat.com, Users@ovirt.org, 
 de...@ovirt.org
 Sent: Wednesday, March 18, 2015 8:56:17 AM
 Subject: [ACTION REQUIRED] oVirt 3.5.2 and 3.5.3 status (building RC2 today!)
 
 Hi,
 we still have 5 open blockers for 3.5.2[1]:
 
 Bug IDWhiteboard  Status  Summary
 1161012   infra   POSTtask cleaning utility  should 
 erase commands that have
 running tasks
 1187244   network POST[RHEL  7.0 + 7.1] Host 
 configure with DHCP is losing
 connectivity after some time - dhclient is not running
 1177220   storage ASSIGNED[BLOCKED] Failed to Delete 
 First snapshot with live
 merge
 1196327   virtASSIGNED[performance] bad getVMList 
 output creates unnecessary
 calls from Engine
 1202360   virtPOST[performance] bad getVMList 
 output creates unnecessary
 calls from Engine

For both virt bugs above we need both of these patches: 38679 and 38805 (!)

Circumstances mandate extremely careful verification in master and 3.5 branch.

Verification on master is almost done (very last check pending), verification 
on 3.5 branch is
halfway done with candidate patches.
I'm aiming for merge in master later today and for merge on 3.5 worst case 
tomorrow morning,
with a bit of luck within the day.

Bests,

-- 
Francesco Romani
Red Hat Engineering Virtualization R&D
Phone: 8261328
IRC: fromani
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Hosted-Engine --vm-status results

2015-03-18 Thread Artyom Lukianov
At the moment hosted-engine does not support clean removal of hosts from the HE 
environment; we have a PRD bug for this issue: 
https://bugzilla.redhat.com/show_bug.cgi?id=1136009. You can edit the metadata on 
the HE storage and remove the inactive hosts, but it is better to ask the 
developers for the correct way to do this.
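
A hedged sketch of the metadata cleanup (newer ovirt-hosted-engine-setup 
releases grew a purge option for exactly this; check that your version has it 
before relying on it, and the host ID is illustrative):

  # run on a host that is still an active HE cluster member:
  hosted-engine --clean-metadata --host-id=3 --force-clean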

- Original Message -
From: Filipe Guarino guari...@gmail.com
To: users@ovirt.org
Sent: Monday, March 9, 2015 1:03:03 AM
Subject: [ovirt-users] Hosted-Engine --vm-status results

Hello guys 
I installed ovirt using the hosted-engine procedure with six physical hosts, with 
more than 60 VMs, and until now everything's OK and my environment works fine. 
I decided to use some of my hosts for other tasks, so I removed four of my six 
hosts and put them away from my environment. 
After a few days, my second host (hosted_engine_2) started to fail. It's a 
hardware issue: my 10GbE interface stopped. I decided to set up my host 4 as a 
new hosted_engine_2. 
It works fine, but when I use the command hosted-engine --vm-status, it still 
returns all of the old members of the hosted-engine cluster (1 to 6). 
How can I fix it to leave only the active nodes? 
See below the output of my hosted-engine --vm-status 



[root@bmh0001 ~]# hosted-engine --vm-status 

--== Host 1 status ==-- 

Status up-to-date : True 
Hostname : bmh0001.place.brazil 
Host ID : 1 
Engine status : {"reason": "vm not running on this host", "health": "bad", 
"vm": "down", "detail": "unknown"} 
Score : 2400 
Local maintenance : False 
Host timestamp : 68830 
Extra metadata (valid at timestamp): 
metadata_parse_version=1 
metadata_feature_version=1 
timestamp=68830 (Sun Mar 8 17:38:05 2015) 
host-id=1 
score=2400 
maintenance=False 
state=EngineDown 


--== Host 2 status ==-- 

Status up-to-date : True 
Hostname : bmh0004.place.brazil 
Host ID : 2 
Engine status : {"health": "good", "vm": "up", "detail": "up"} 
Score : 2400 
Local maintenance : False 
Host timestamp : 2427 
Extra metadata (valid at timestamp): 
metadata_parse_version=1 
metadata_feature_version=1 
timestamp=2427 (Sun Mar 8 17:38:09 2015) 
host-id=2 
score=2400 
maintenance=False 
state=EngineUp 


--== Host 3 status ==-- 

Status up-to-date : False 
Hostname : bmh0003.place.brazil 
Host ID : 3 
Engine status : unknown stale-data 
Score : 0 
Local maintenance : True 
Host timestamp : 331389 
Extra metadata (valid at timestamp): 
metadata_parse_version=1 
metadata_feature_version=1 
timestamp=331389 (Tue Mar 3 14:48:25 2015) 
host-id=3 
score=0 
maintenance=True 
state=LocalMaintenance 


--== Host 4 status ==-- 

Status up-to-date : False 
Hostname : bmh0004.place.brazil 
Host ID : 4 
Engine status : unknown stale-data 
Score : 0 
Local maintenance : True 
Host timestamp : 364358 
Extra metadata (valid at timestamp): 
metadata_parse_version=1 
metadata_feature_version=1 
timestamp=364358 (Tue Mar 3 16:10:36 2015) 
host-id=4 
score=0 
maintenance=True 
state=LocalMaintenance 


--== Host 5 status ==-- 

Status up-to-date : False 
Hostname : bmh0005.place.brazil 
Host ID : 5 
Engine status : unknown stale-data 
Score : 0 
Local maintenance : True 
Host timestamp : 241930 
Extra metadata (valid at timestamp): 
metadata_parse_version=1 
metadata_feature_version=1 
timestamp=241930 (Fri Mar 6 09:40:31 2015) 
host-id=5 
score=0 
maintenance=True 
state=LocalMaintenance 


--== Host 6 status ==-- 

Status up-to-date : False 
Hostname : bmh0006.place.brazil 
Host ID : 6 
Engine status : unknown stale-data 
Score : 0 
Local maintenance : True 
Host timestamp : 77376 
Extra metadata (valid at timestamp): 
metadata_parse_version=1 
metadata_feature_version=1 
timestamp=77376 (Wed Mar 4 09:11:17 2015) 
host-id=6 
score=0 
maintenance=True 
state=LocalMaintenance 
[root@bmh0001 ~]# hosted-engine --vm-status 


--== Host 1 status ==-- 

Status up-to-date : True 
Hostname : bmh0001.place.brazil 
Host ID : 1 
Engine status : {"reason": "bad vm status", "health": "bad", "vm": "down", 
"detail": "down"} 
Score : 2400 
Local maintenance : False 
Host timestamp : 68122 
Extra metadata (valid at timestamp): 
metadata_parse_version=1 
metadata_feature_version=1 
timestamp=68122 (Sun Mar 8 17:26:16 2015) 
host-id=1 
score=2400 
maintenance=False 
state=EngineStarting 


--== Host 2 status ==-- 

Status up-to-date : True 
Hostname : bmh0004.place.brazil 
Host ID : 2 
Engine status : {"reason": "bad vm status", "health": "bad", "vm": "up", 
"detail": "powering up"} 
Score : 2400 
Local maintenance : False 
Host timestamp : 1719 
Extra metadata (valid at timestamp): 
metadata_parse_version=1 
metadata_feature_version=1 
timestamp=1719 (Sun Mar 8 17:26:21 2015) 
host-id=2 
score=2400 
maintenance=False 
state=EngineStarting 


--== Host 3 status ==-- 

Status up-to-date : False 
Hostname : bmh0003.place.brazil 
Host ID : 3 
Engine status : unknown stale-data 
Score : 0 
Local maintenance : True 
Host timestamp : 331389 
Extra metadata (valid at timestamp): 
metadata_parse_version=1 
metadata_feature_version=1 
timestamp=331389 (Tue Mar 3 14:48:25 2015) 
host-id=3 
score=0 

Re: [ovirt-users] Power Management config on Ovirt

2015-03-18 Thread Renchu Mathew
Hi Martin,

My setup meets all those requirements and I am able to migrate a VM from one 
host to another manually. Once the network cable is pulled from one of the 
servers, the other server also shuts down.

Regards

Renchu Mathew  |  Sr. IT Administrator



CRACKNELL  DUBAI   |  P.O. Box 66231  |   United Arab Emirates  |  T +971 4 
3445417  |  F +971 4 3493675 |  M +971 50 7386484 
ABU DHABI | DUBAI | LONDON | MUSCAT | DOHA | JEDDAH
EMAIL ren...@cracknell.com | WEB www.cracknell.com

This email, its content and any files transmitted with it are intended solely 
for the addressee(s) and may be legally privileged and/or confidential. If you 
are not the intended recipient please let us know by email reply and delete it 
from the system. Please note that any views or opinions presented in this email 
do not necessarily represent those of the company. Email transmissions cannot 
be guaranteed to be secure or error-free as information could be intercepted, 
corrupted, lost, destroyed, arrive late or incomplete, or contain viruses. The 
company therefore does not accept liability for any errors or omissions in the 
contents of this message which arise as a result of email transmission.


-Original Message-
From: Martin Perina [mailto:mper...@redhat.com] 
Sent: Tuesday, March 17, 2015 8:31 PM
To: Renchu Mathew
Cc: users@ovirt.org
Subject: Re: [ovirt-users] Power Management config on Ovirt

Hi,

prior to the test I would check this:

  - Data Center status is Up
  - All hosts status is Up
  - All storage domains status is Up
  - VM is running

If this is valid, you can start your fence testing. But bear in mind what I 
sent you in the previous email: at least one host in the DC should be fully 
functional to be able to fence a non-responsive host.
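
Before pulling cables it is also worth checking that the fence device itself 
answers; a hedged sketch querying power state straight from a host with the 
plain fence agent (agent, address and credentials are illustrative -- match 
them to your Power Management settings):

  fence_ipmilan -a 10.0.0.15 -l admin -p secret -o status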

Martin Perina

- Original Message -
 From: Renchu Mathew ren...@cracknell.com
 To: Martin Perina mper...@redhat.com
 Cc: users@ovirt.org
 Sent: Tuesday, March 17, 2015 5:03:53 PM
 Subject: RE: [ovirt-users] Power Management config on Ovirt
 
 Hi Martin
 
 Yes, my test VM still running on this storage. Is it possible to do 
 remote session and check this?
 
 Regards
 
 Renchu Mathew
 
 
 -Original Message-
 From: Martin Perina [mailto:mper...@redhat.com]
 Sent: Tuesday, March 17, 2015 7:30 PM
 To: Renchu Mathew
 Cc: users@ovirt.org
 Subject: Re: [ovirt-users] Power Management config on Ovirt
 
 Hi,
 
 this is what happened (at least what I was able to read from log):
 
 18:18:02 - host node02 changed status to Connecting
 
  - that's OK, but prior to this I can see many errors
 from ConnectStorage*Commands. Are you sure that your storage
 was OK prior to the fencing test?
 
 18:18:51 - host node02 changed status to Non Responsive
 
   - that's OK, non responding treatment started with SSH Soft Fencing
 
 
 18:18:55 - SSH Soft Fencing failed, because of no route to host node02
 
   - that's OK
 
 
 18:18:56 - power management stop was executed using node01 as fence 
 proxy
 
  - the problem started here: communication with node01 timed out, and we were
 not able to find any other suitable fence proxy
 
 18:21:57 - host node01 changed status to Connection
 
 
 From this point there is nothing that can be done, because the engine 
 cannot communicate with any host, so it cannot fix anything.
 
 When you are doing those tests you need at least one functional host, 
 otherwise you are not able to execute any fence action.
 
 
 Martin Perina
 
 
 - Original Message -
  From: Renchu Mathew ren...@cracknell.com
  To: Martin Perina mper...@redhat.com
  Cc: users@ovirt.org
  Sent: Tuesday, March 17, 2015 3:53:24 PM
  Subject: RE: [ovirt-users] Power Management config on Ovirt
  
  Hi Martin,
  
  Please find attached the log files.
  
  Regards
  
  Renchu Mathew
  
  -Original Message-
  From: Martin Perina [mailto:mper...@redhat.com]
  Sent: Tuesday, March 17, 2015 6:40 PM
  To: Renchu Mathew
  Cc: users@ovirt.org
  Subject: Re: [ovirt-users] Power Management config on Ovirt
  
  Hi,
  
  please attach new logs, so we can investigate what has happened.
  
  Thanks
  
  Martin Perina
  
  - Original Message -
   From: Renchu Mathew ren...@cracknell.com
   To: Martin Perina mper...@redhat.com
   Cc: Piotr Kliczewski piotr.kliczew...@gmail.com, 
   users@ovirt.org
   Sent: Tuesday, March 17, 2015 3:34:11 PM
   Subject: RE: [ovirt-users] Power Management config on Ovirt
   
   Hi Martin,
   
   I did the same as below, but it still does the same: node01 is 
   shut down. Do you think RHEV + gluster storage would not have this 
   issue? What storage do you recommend?
   
   Regards
   
   Renchu Mathew
   
   -Original Message-
   From: Martin Perina [mailto:mper...@redhat.com]
   Sent: Tuesday, March 17, 2015 5:35 PM
   To: Renchu Mathew
   Cc: Piotr Kliczewski; users@ovirt.org
   Subject: Re: [ovirt-users] Power Management config on Ovirt
   
   Hi,
   
   I don't know much about gluster configuration, but one of the 
   

Re: [ovirt-users] [ACTION REQUIRED] oVirt 3.5.2 and 3.5.3 status (building RC2 today!)

2015-03-18 Thread Maor Lipchuk
Hi Sandro,

Regarding my patches which have been merged for 3.5.3 and not backported to the 
3.5.2 branch: it looks fine. They should not be backported, since most of them 
are targeted to RHEV 3.5.2.


Thanks,
Maor




- Forwarded Message -
 From: Sandro Bonazzola sbona...@redhat.com
 To: Eli Mesika emes...@redhat.com, Ido Barkan ibar...@redhat.com, 
 Adam Litke ali...@redhat.com,
 Francesco Romani from...@redhat.com, Maor Lipchuk 
 mlipc...@redhat.com, Vered Volansky
 vvola...@redhat.com, Eli Mesika emes...@redhat.com, Users@ovirt.org, 
 de...@ovirt.org
 Sent: Wednesday, March 18, 2015 9:56:17 AM
 Subject: [ACTION REQUIRED] oVirt 3.5.2 and 3.5.3 status (building RC2 today!)
 
 Hi,
 we still have 5 open blockers for 3.5.2[1]:
 
 Bug IDWhiteboard  Status  Summary
 1161012   infra   POSTtask cleaning utility  should 
 erase commands that have
 running tasks
 1187244   network POST[RHEL  7.0 + 7.1] Host 
 configure with DHCP is losing
 connectivity after some time - dhclient is not running
 1177220   storage ASSIGNED[BLOCKED] Failed to Delete 
 First snapshot with live
 merge
 1196327   virtASSIGNED[performance] bad getVMList 
 output creates unnecessary
 calls from Engine
 1202360   virtPOST[performance] bad getVMList 
 output creates unnecessary
 calls from Engine
 
 And 2 dependencies on libvirt not yet fixed:
 Bug IDStatus  Summary
 1199182   POST2nd active commit after snapshot triggers qemu 
 failure
 1199036   POSTLibvirtd was restarted when do active 
 blockcommit while there
 is a blockpull job running
 
 ACTION: Assignee to provide ETA for the blocker bug.
 
 Despite the blocker bug count, we're going to build RC2 today 2015-03-18 at
 12:00 UTC for allowing the verification of fixed bugs and testing on
 CentOS 7.1.
 If you're going to test this release candidate on CentOS please be sure to
 have the CR[2] repository enabled and system fully updated to CentOS 7.1.
 
 We still have 7 bugs in MODIFIED and 31 on QA[3]:
 
   MODIFIEDON_QA   Total
 infra 2   10  12
 integration   0   2   2
 network   0   2   2
 node  0   1   1
 sla   2   1   3
 storage   3   11  14
 virt  0   4   4
 Total 7   31  38
 
 ACTION: Testers: you're welcome to verify bugs currently ON_QA.
 
 All remaining bugs not marked as blockers have been moved to 3.5.3.
 A release management entry has been added for tracking the schedule of
 3.5.3[4]
 A bug tracker [5] has been created for 3.5.3.
 We have 32 bugs currently targeted to 3.5.3[6]:
 
 WhiteboardNEW ASSIGNEDPOSTTotal
 docs  2   0   0   2
 external  1   0   0   1
 gluster   4   0   1   5
 infra 2   2   0   4
 node  2   0   1   3
 ppc   0   0   1   1
 sla   4   0   0   4
 storage   8   0   0   8
 ux1   0   1   2
 virt  1   0   1   2
 Total 25  2   5   32
 
 
 ACTION: Maintainers / Assignee: to review the bugs targeted to 3.5.3 ensuring
 they're correctly targeted.
 ACTION: Maintainers: to fill release notes for 3.5.2, the page has been
 created and updated here [7]
 ACTION: Testers: please add yourself to the test page [8]
 
 7 Patches have been merged for 3.5.3 and not backported to 3.5.2 branch
 according to Change-Id
 
 commit 6b5a8169093357656d3e638c7018ee516d1f44bd
 Author: Maor Lipchuk mlipc...@redhat.com
 Date:   Thu Feb 19 14:40:23 2015 +0200
 core: Add validation when Storage Domain is blocked.
 Change-Id: I9a7c12609b3780c74396dab6edf26e4deaff490f


https://bugzilla.redhat.com/1195032 - should not be backported, since the 
target is rhev 3.5.2 so oVirt 3.5.3 should be fine

 
 commit 7fd4dca0a7fb15d3e9179457f1f2aea6c727d325
 Author: Maor Lipchuk mlipc...@redhat.com
 Date:   Sun Mar 1 17:17:16 2015 +0200
 restapi: reconfigure values on import data Storage Domain.
 Change-Id: I2ef7baa850bd6da08ae27d41ebe9e4ad525fbe9b

https://bugzilla.redhat.com/1195724 - should not be backported, since the 
target is oVirt 3.5.3


 
 commit 4283f755e6b77995247ecb9ddd904139bc8c322c
 Author: Maor Lipchuk mlipc...@redhat.com
 Date:   Tue Mar 10 12:05:05 2015 +0200
 restapi: Quering FCP unregistered Storage Domains
 Change-Id: Iafe2f2afcd0e6e68ad2054c857388acc30a7

https://bugzilla.redhat.com/1201158 - should not be backported, since the 
target is rhev 3.5.2 so oVirt 3.5.3 should be fine


 
 commit a3d8b687620817b38a64a3917f4440274831bca3
 

Re: [ovirt-users] VMs freezing during heals

2015-03-18 Thread Pranith Kumar Karampuri

hi,
  are you using a thin-LVM-based backend on which the bricks are created?

Pranith
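
A quick hedged check for that on each gluster node (standard LVM report
columns; the brick path is illustrative):

  # any LV with a non-empty pool_lv column is thin-provisioned:
  lvs -o lv_name,vg_name,pool_lv,data_percent
  df -h /bricks/brick1   # confirm which LV each brick sits on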
On 03/18/2015 02:05 AM, Alastair Neil wrote:
I have an oVirt cluster with 6 VM hosts and 4 gluster nodes. There are 
two virtualisation clusters, one with two Nehalem nodes and one with 
four Sandy Bridge nodes. My master storage domain is a GlusterFS domain 
backed by a replica 3 gluster volume from 3 of the gluster nodes. The 
engine is a hosted engine 3.5.1 on 3 of the Sandy Bridge nodes, with 
storage provided by NFS from a different gluster volume. All the 
hosts are CentOS 6.6.


 vdsm-4.16.10-8.gitc937927.el6
glusterfs-3.6.2-1.el6
2.6.32-504.8.1.el6.x86_64


Problems happen when I try to add a new brick or replace a brick; 
eventually the self-heal will kill the VMs. In the VMs' logs I see 
kernel hung-task messages.


Mar 12 23:05:16 static1 kernel: INFO: task nginx:1736 blocked for
more than 120 seconds.
Mar 12 23:05:16 static1 kernel:  Not tainted
2.6.32-504.3.3.el6.x86_64 #1
Mar 12 23:05:16 static1 kernel: echo 0 > 
/proc/sys/kernel/hung_task_timeout_secs disables this message.
Mar 12 23:05:16 static1 kernel: nginx D 0001  
  0  1736   1735 0x0080

Mar 12 23:05:16 static1 kernel: 8800778b17a8 0082
 000126c0
Mar 12 23:05:16 static1 kernel: 88007e5c6500 880037170080
0006ce5c85bd9185 88007e5c64d0
Mar 12 23:05:16 static1 kernel: 88007a614ae0 0001722b64ba
88007a615098 8800778b1fd8
Mar 12 23:05:16 static1 kernel: Call Trace:
Mar 12 23:05:16 static1 kernel: [8152a885]
schedule_timeout+0x215/0x2e0
Mar 12 23:05:16 static1 kernel: [8152a503]
wait_for_common+0x123/0x180
Mar 12 23:05:16 static1 kernel: [81064b90] ?
default_wake_function+0x0/0x20
Mar 12 23:05:16 static1 kernel: [a0210a76] ?
_xfs_buf_read+0x46/0x60 [xfs]
Mar 12 23:05:16 static1 kernel: [a02063c7] ?
xfs_trans_read_buf+0x197/0x410 [xfs]
Mar 12 23:05:16 static1 kernel: [8152a61d]
wait_for_completion+0x1d/0x20
Mar 12 23:05:16 static1 kernel: [a020ff5b]
xfs_buf_iowait+0x9b/0x100 [xfs]
Mar 12 23:05:16 static1 kernel: [a02063c7] ?
xfs_trans_read_buf+0x197/0x410 [xfs]
Mar 12 23:05:16 static1 kernel: [a0210a76]
_xfs_buf_read+0x46/0x60 [xfs]
Mar 12 23:05:16 static1 kernel: [a0210b3b]
xfs_buf_read+0xab/0x100 [xfs]
Mar 12 23:05:16 static1 kernel: [a02063c7]
xfs_trans_read_buf+0x197/0x410 [xfs]
Mar 12 23:05:16 static1 kernel: [a01ee6a4]
xfs_imap_to_bp+0x54/0x130 [xfs]
Mar 12 23:05:16 static1 kernel: [a01f077b]
xfs_iread+0x7b/0x1b0 [xfs]
Mar 12 23:05:16 static1 kernel: [811ab77e] ?
inode_init_always+0x11e/0x1c0
Mar 12 23:05:16 static1 kernel: [a01eb5ee]
xfs_iget+0x27e/0x6e0 [xfs]
Mar 12 23:05:16 static1 kernel: [a01eae1d] ?
xfs_iunlock+0x5d/0xd0 [xfs]
Mar 12 23:05:16 static1 kernel: [a0209366]
xfs_lookup+0xc6/0x110 [xfs]
Mar 12 23:05:16 static1 kernel: [a0216024]
xfs_vn_lookup+0x54/0xa0 [xfs]
Mar 12 23:05:16 static1 kernel: [8119dc65]
do_lookup+0x1a5/0x230
Mar 12 23:05:16 static1 kernel: [8119e8f4]
__link_path_walk+0x7a4/0x1000
Mar 12 23:05:16 static1 kernel: [811738e7] ?
cache_grow+0x217/0x320
Mar 12 23:05:16 static1 kernel: [8119f40a]
path_walk+0x6a/0xe0
Mar 12 23:05:16 static1 kernel: [8119f61b]
filename_lookup+0x6b/0xc0
Mar 12 23:05:16 static1 kernel: [811a0747]
user_path_at+0x57/0xa0
Mar 12 23:05:16 static1 kernel: [a0204e74] ?
_xfs_trans_commit+0x214/0x2a0 [xfs]
Mar 12 23:05:16 static1 kernel: [a01eae3e] ?
xfs_iunlock+0x7e/0xd0 [xfs]
Mar 12 23:05:16 static1 kernel: [81193bc0]
vfs_fstatat+0x50/0xa0
Mar 12 23:05:16 static1 kernel: [811aaf5d] ?
touch_atime+0x14d/0x1a0
Mar 12 23:05:16 static1 kernel: [81193d3b]
vfs_stat+0x1b/0x20
Mar 12 23:05:16 static1 kernel: [81193d64]
sys_newstat+0x24/0x50
Mar 12 23:05:16 static1 kernel: [810e5c87] ?
audit_syscall_entry+0x1d7/0x200
Mar 12 23:05:16 static1 kernel: [810e5a7e] ?
__audit_syscall_exit+0x25e/0x290
Mar 12 23:05:16 static1 kernel: [8100b072]
system_call_fastpath+0x16/0x1b



I am wondering if my volume settings are causing this.  Can anyone 
with more knowledge take a look and let me know:


network.remote-dio: on
performance.stat-prefetch: off
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
nfs.export-volumes: on
network.ping-timeout: 20
cluster.self-heal-readdir-size: 64KB
cluster.quorum-type: auto
cluster.data-self-heal-algorithm: diff

Re: [ovirt-users] Reinstalling a new oVirt Manager

2015-03-18 Thread VONDRA Alain
Do you think that I can import the SDs into my new Data Center without any risk 
of destroying the VMs inside them?
Thank you for your advice.







Alain VONDRA
Chargé d'exploitation des Systèmes d'Information
Direction Administrative et Financière
+33 1 44 39 77 76
UNICEF France
3 rue Duguay Trouin  75006 PARIS
www.unicef.fr




-Original Message-
From: users-boun...@ovirt.org [mailto:users-boun...@ovirt.org] On behalf of 
VONDRA Alain
Sent: Tuesday, March 17, 2015 11:33 PM
To: Simone Tiraboschi
Cc: users@ovirt.org
Subject: Re: [ovirt-users] Reinstalling a new oVirt Manager

Ok great, I can see the different storage domains that I want to import, but I 
can't see the VMs on the unconnected pool.
So anyway I have to go further and import the storage domain past the alert 
message.
Am I right?
No other way to see the VMs in SD :

[root@unc-srv-hyp1  ~]$ vdsClient -s 0 getStorageDomainsList 
7e40772a-fe94-4fb2-94c4-6198bed04a6a
0fec0486-7863-49bc-a4ab-d2c7ac48258a
d7b9d7cc-f7d6-43c7-ae13-e720951657c9
1f6dec51-12a6-41ed-9d14-8f0ad4e062d2
f9a0076d-e041-4d8f-a627-223562b84b90
04d87d6c-092a-4568-bdca-1d6e8f231912

[root@unc-srv-hyp1  ~]$ vdsClient -s 0 getStorageDomainInfo 
7e40772a-fe94-4fb2-94c4-6198bed04a6a
uuid = 7e40772a-fe94-4fb2-94c4-6198bed04a6a
vguuid = zwpINw-zgIR-oOjP-6voS-iA2n-zFRB-G76sSQ
state = OK
version = 3
role = Regular
type = ISCSI
class = Data
pool = ['f422de63-8869-41ef-a782-8b0c9ee03c41']
name = VOL-UNC-PROD-01

[root@unc-srv-hyp1  ~]$ vdsClient -s 0 getStorageDomainInfo 
0fec0486-7863-49bc-a4ab-d2c7ac48258a
uuid = 0fec0486-7863-49bc-a4ab-d2c7ac48258a
vguuid = ta2ZFC-UwtO-xANp-ZK19-rWVo-1kBJ-ZrsQrN
state = OK
version = 3
role = Regular
type = ISCSI
class = Data
pool = ['c58a44b1-1c98-450e-97e1-3347eeb28f86']
name = VOL-UNC-TEST

[root@unc-srv-hyp1  ~]$ vdsClient -s 0 getStorageDomainInfo 
d7b9d7cc-f7d6-43c7-ae13-e720951657c9
uuid = d7b9d7cc-f7d6-43c7-ae13-e720951657c9
vguuid = zyiwLa-STgp-v3r8-KJqj-1AIP-rC2k-4NuXqf
state = OK
version = 3
role = Regular
type = ISCSI
class = Data
pool = ['f422de63-8869-41ef-a782-8b0c9ee03c41']
name = VOL-UNC-PROD-02

[root@unc-srv-hyp1  ~]$ vdsClient -s 0 getStorageDomainInfo 
1f6dec51-12a6-41ed-9d14-8f0ad4e062d2
uuid = 1f6dec51-12a6-41ed-9d14-8f0ad4e062d2
vguuid = 6itdyW-WLid-fTu6-HAJd-dpSj-HpbH-T4ZSVA
state = OK
version = 3
role = Regular
type = ISCSI
class = Data
pool = ['c58a44b1-1c98-450e-97e1-3347eeb28f86']
name = VOL-UNC-NAS-01

[root@unc-srv-hyp1  ~]$ vdsClient -s 0 getStorageDomainInfo 
f9a0076d-e041-4d8f-a627-223562b84b90
uuid = f9a0076d-e041-4d8f-a627-223562b84b90
version = 0
role = Regular
remotePath = unc-srv-kman:/var/lib/exports/iso
type = NFS
class = Iso
pool = ['c58a44b1-1c98-450e-97e1-3347eeb28f86']
name = ISO_DOMAIN

[root@unc-srv-hyp1  ~]$ vdsClient -s 0 getStorageDomainInfo 
04d87d6c-092a-4568-bdca-1d6e8f231912
uuid = 04d87d6c-092a-4568-bdca-1d6e8f231912
version = 3
role = Master
remotePath = unc-srv-hyp1.cfu.local:/exports/import_domain
type = NFS
class = Data
pool = ['c58a44b1-1c98-450e-97e1-3347eeb28f86']
name = VOL-NFS

[root@unc-srv-hyp1  ~]$ vdsClient -s 0 getVmsList 
c58a44b1-1c98-450e-97e1-3347eeb28f86 1f6dec51-12a6-41ed-9d14-8f0ad4e062d2
No VMs found.

[root@unc-srv-hyp1  ~]$ vdsClient -s 0 getVmsList 
f422de63-8869-41ef-a782-8b0c9ee03c41 d7b9d7cc-f7d6-43c7-ae13-e720951657c9
Unknown pool id, pool not connected: ('f422de63-8869-41ef-a782-8b0c9ee03c41',)

[root@unc-srv-hyp1  ~]$ vdsClient -s 0 getVmsList 
f422de63-8869-41ef-a782-8b0c9ee03c41 7e40772a-fe94-4fb2-94c4-6198bed04a6a
Unknown pool id, pool not connected: ('f422de63-8869-41ef-a782-8b0c9ee03c41',)







 Is there any command to verify if the storage domain contains VMs or not ?

You can execute this on one of your hosts.

# vdsClient -s 0 getStorageDomainsList
to get the storage domain list, then

# vdsClient -s 0 getStorageDomainInfo <domain UUID> for each of them till you 
identify the domain you are looking for.
You can also find the pool UUID there.

Then
# vdsClient -s 0 getVmsList <pool UUID> <domain UUID>
to get the list of VMs on that storage.
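
A small hedged loop that chains the first two steps (plain shell around the 
same vdsClient verbs shown above):

  for sd in $(vdsClient -s 0 getStorageDomainsList); do
      echo "== $sd =="
      vdsClient -s 0 getStorageDomainInfo "$sd"
  done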



 Alain VONDRA
 Chargé d'exploitation des Systèmes d'Information Direction
 Administrative et Financière
 +33 1 44 39 77 76
 UNICEF France
 3 rue Duguay Trouin  75006 PARIS
 www.unicef.fr






Alain VONDRA
Chargé d'exploitation des Systèmes d'Information Direction Administrative et 
Financière
+33 1 44 39 77 76
UNICEF France
3 rue Duguay Trouin  75006 PARIS
www.unicef.fr




 -Original Message-
 From: VONDRA Alain
 Sent: Tuesday, March 17, 2015 4:41 PM
 To: VONDRA Alain; Simone Tiraboschi
 Cc: users@ovirt.org
 Subject: 

Re: [ovirt-users] DWH Question

2015-03-18 Thread Koen Vanoppen
Thanks!!
Only, I can't execute the query... I added it to the reports as an SQL query,
but I can't execute it... I have never added a new one before, so maybe that
is the problem... :-)

2015-03-16 13:29 GMT+01:00 Shirly Radco sra...@redhat.com:

 Hi Koen,

 I believe you can use this query:

 SELECT v3_5_latest_configuration_hosts_interfaces.vlan_id,
 v3_5_latest_configuration_hosts.host_id,
 v3_5_statistics_vms_resources_usage_samples.vm_id
 FROM v3_5_latest_configuration_hosts_interfaces
 LEFT JOIN v3_5_latest_configuration_hosts ON
 v3_5_latest_configuration_hosts_interfaces.host_id =
 v3_5_latest_configuration_hosts.host_id
 LEFT JOIN v3_5_statistics_vms_resources_usage_samples ON
 v3_5_latest_configuration_hosts.history_id =

 v3_5_statistics_vms_resources_usage_samples.current_host_configuration_version
 LEFT JOIN v3_5_latest_configuration_vms ON
 v3_5_latest_configuration_vms.history_id =
 v3_5_statistics_vms_resources_usage_samples.vm_configuration_version
 LEFT JOIN v3_5_latest_configuration_vms_interfaces ON
 v3_5_latest_configuration_vms.history_id =
 v3_5_latest_configuration_vms_interfaces.vm_configuration_version
 WHERE v3_5_statistics_vms_resources_usage_samples.vm_status = 1
 AND v3_5_latest_configuration_hosts_interfaces.vlan_id IS NOT NULL
 AND v3_5_latest_configuration_vms_interfaces.logical_network_name =
 v3_5_latest_configuration_hosts_interfaces.logical_network_name
 GROUP BY v3_5_latest_configuration_hosts_interfaces.vlan_id,
 v3_5_latest_configuration_hosts.host_id,
 v3_5_statistics_vms_resources_usage_samples.vm_id
 ORDER BY v3_5_latest_configuration_hosts_interfaces.vlan_id,
 v3_5_latest_configuration_hosts.host_id,
 v3_5_statistics_vms_resources_usage_samples.vm_id
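
 If adding it as a new report query is the blocker, the same SELECT can also
 be run directly against the DWH database with psql (a sketch; it assumes the
 default DWH database name ovirt_engine_history and that the SELECT above was
 saved to /tmp/vms_by_vlan.sql - adjust both to your setup):

 # su - postgres -c "psql -d ovirt_engine_history -f /tmp/vms_by_vlan.sql"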

 If you need more details please let me know.

 Best regards,
 ---
 Shirly Radco
 BI Software Engineer
 Red Hat Israel Ltd.


 - Original Message -
  From: Koen Vanoppen vanoppen.k...@gmail.com
  To: users@ovirt.org
  Sent: Friday, March 13, 2015 9:17:29 AM
  Subject: [ovirt-users] DWH Question
 
  Dear all,
 
  Is it possible to pull a list of all VMS who are in vlanX?
 
  Kind regards,
 
  Koen
 
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Reinstalling a new oVirt Manager

2015-03-18 Thread VONDRA Alain

 Do you think that I can import the SDs in my new Data Center without
 any risk to destroy the VMs inside them ??
 Thank you for your advice.

 If no other engine is managing them, you can safely import the existing 
 storage domain.
 As usual, it is always a good idea to keep a backup.

There is no other engine anymore; when you say to keep a backup, do you mean
the engine or the SAN storage ???
What I am afraid about is the data in the iSCSI SAN; I can't make a backup of
this.




 Alain VONDRA
 Chargé d'exploitation des Systèmes d'Information Direction
 Administrative et Financière
 +33 1 44 39 77 76
 UNICEF France
 3 rue Duguay Trouin  75006 PARIS
www.unicef.fr




 -Message d'origine-
 De : users-boun...@ovirt.org [mailto:users-boun...@ovirt.org] De la
 part de VONDRA Alain Envoyé : mardi 17 mars 2015 23:33 À : Simone
 Tiraboschi Cc : users@ovirt.org Objet : Re: [ovirt-users] Reinstalling
 a new oVirt Manager

 Ok great, I can see the different storage that I want to import, but I
 can't see the VMs on the un-connected pool.
 So anyway I have to go further and import the storage domain passing
 the alert message.
 Am I right ?
 No other way to see the VMs in SD :

 [root@unc-srv-hyp1  ~]$ vdsClient -s 0 getStorageDomainsList
 7e40772a-fe94-4fb2-94c4-6198bed04a6a
 0fec0486-7863-49bc-a4ab-d2c7ac48258a
 d7b9d7cc-f7d6-43c7-ae13-e720951657c9
 1f6dec51-12a6-41ed-9d14-8f0ad4e062d2
 f9a0076d-e041-4d8f-a627-223562b84b90
 04d87d6c-092a-4568-bdca-1d6e8f231912

 [root@unc-srv-hyp1  ~]$ vdsClient -s 0 getStorageDomainInfo
 7e40772a-fe94-4fb2-94c4-6198bed04a6a
 uuid = 7e40772a-fe94-4fb2-94c4-6198bed04a6a
 vguuid = zwpINw-zgIR-oOjP-6voS-iA2n-zFRB-G76sSQ
 state = OK
 version = 3
 role = Regular
 type = ISCSI
 class = Data
 pool = ['f422de63-8869-41ef-a782-8b0c9ee03c41']
 name = VOL-UNC-PROD-01

 [root@unc-srv-hyp1  ~]$ vdsClient -s 0 getStorageDomainInfo
 0fec0486-7863-49bc-a4ab-d2c7ac48258a
 uuid = 0fec0486-7863-49bc-a4ab-d2c7ac48258a
 vguuid = ta2ZFC-UwtO-xANp-ZK19-rWVo-1kBJ-ZrsQrN
 state = OK
 version = 3
 role = Regular
 type = ISCSI
 class = Data
 pool = ['c58a44b1-1c98-450e-97e1-3347eeb28f86']
 name = VOL-UNC-TEST

 [root@unc-srv-hyp1  ~]$ vdsClient -s 0 getStorageDomainInfo
 d7b9d7cc-f7d6-43c7-ae13-e720951657c9
 uuid = d7b9d7cc-f7d6-43c7-ae13-e720951657c9
 vguuid = zyiwLa-STgp-v3r8-KJqj-1AIP-rC2k-4NuXqf
 state = OK
 version = 3
 role = Regular
 type = ISCSI
 class = Data
 pool = ['f422de63-8869-41ef-a782-8b0c9ee03c41']
 name = VOL-UNC-PROD-02

 [root@unc-srv-hyp1  ~]$ vdsClient -s 0 getStorageDomainInfo
 1f6dec51-12a6-41ed-9d14-8f0ad4e062d2
 uuid = 1f6dec51-12a6-41ed-9d14-8f0ad4e062d2
 vguuid = 6itdyW-WLid-fTu6-HAJd-dpSj-HpbH-T4ZSVA
 state = OK
 version = 3
 role = Regular
 type = ISCSI
 class = Data
 pool = ['c58a44b1-1c98-450e-97e1-3347eeb28f86']
 name = VOL-UNC-NAS-01

 [root@unc-srv-hyp1  ~]$ vdsClient -s 0 getStorageDomainInfo
 f9a0076d-e041-4d8f-a627-223562b84b90
 uuid = f9a0076d-e041-4d8f-a627-223562b84b90
 version = 0
 role = Regular
 remotePath = unc-srv-kman:/var/lib/exports/iso
 type = NFS
 class = Iso
 pool = ['c58a44b1-1c98-450e-97e1-3347eeb28f86']
 name = ISO_DOMAIN

 [root@unc-srv-hyp1  ~]$ vdsClient -s 0 getStorageDomainInfo
 04d87d6c-092a-4568-bdca-1d6e8f231912
 uuid = 04d87d6c-092a-4568-bdca-1d6e8f231912
 version = 3
 role = Master
 remotePath = unc-srv-hyp1.cfu.local:/exports/import_domain
 type = NFS
 class = Data
 pool = ['c58a44b1-1c98-450e-97e1-3347eeb28f86']
 name = VOL-NFS

 [root@unc-srv-hyp1  ~]$ vdsClient -s 0 getVmsList
 c58a44b1-1c98-450e-97e1-3347eeb28f86
 1f6dec51-12a6-41ed-9d14-8f0ad4e062d2
 No VMs found.

 [root@unc-srv-hyp1  ~]$ vdsClient -s 0 getVmsList
 f422de63-8869-41ef-a782-8b0c9ee03c41
 d7b9d7cc-f7d6-43c7-ae13-e720951657c9
 Unknown pool id, pool not connected:
 ('f422de63-8869-41ef-a782-8b0c9ee03c41',)

 [root@unc-srv-hyp1  ~]$ vdsClient -s 0 getVmsList
 f422de63-8869-41ef-a782-8b0c9ee03c41
 7e40772a-fe94-4fb2-94c4-6198bed04a6a
 Unknown pool id, pool not connected:
 ('f422de63-8869-41ef-a782-8b0c9ee03c41',)






 
  Is there any command to verify if the storage domain contains VMs or not ?

 You can execute this on one of your hosts.

 # vdsClient -s 0 getStorageDomainsList to get the storage domain list,
 then
 
 # vdsClient -s 0 getStorageDomainInfo <domain UUID> for each of them
 till you identify the domain you are looking for.
 You can also find the pool UUID there.
 
 Then
 # 

Re: [ovirt-users] Power Management config on Ovirt

2015-03-18 Thread Eli Mesika


- Original Message -
 From: Renchu Mathew ren...@cracknell.com
 To: Martin Perina mper...@redhat.com
 Cc: users@ovirt.org
 Sent: Wednesday, March 18, 2015 2:24:46 PM
 Subject: Re: [ovirt-users] Power Management config on Ovirt
 
 Hi Martin,
 
 My setup meets all those requirements and I am able to migrate the VM from
 one host to another manually. Once the network cable is pulled off from one of
 the servers, the other server also shuts down.

Hi
Sorry for jumping in late, yesterday was an election day in ISRAEL...

If the other server shuts down when you unplug the first one and you have
only 2 hosts, then no fencing will take place, since there is no available proxy
host to perform the operation.

 
 Regards
 
 Renchu Mathew  |  Sr. IT Administrator
 
 
 
 CRACKNELL  DUBAI   |  P.O. Box 66231  |   United Arab Emirates  |  T +971 4
 3445417  |  F +971 4 3493675 |  M +971 50 7386484
 ABU DHABI | DUBAI | LONDON | MUSCAT | DOHA | JEDDAH
 EMAIL ren...@cracknell.com | WEB www.cracknell.com
 
 
 
 -Original Message-
 From: Martin Perina [mailto:mper...@redhat.com]
 Sent: Tuesday, March 17, 2015 8:31 PM
 To: Renchu Mathew
 Cc: users@ovirt.org
 Subject: Re: [ovirt-users] Power Management config on Ovirt
 
 Hi,
 
 prior to the test I would check this:
 
   - Data Center status is Up
   - All hosts status is Up
   - All storage domains status is Up
   - VM is running
 
 If this is valid, you can start your fence testing. But bear in mind what I
 sent you in my previous email: at least one host in the DC should be fully
 functional to be able to fence a non responsive host.
 
 Martin Perina
 
 - Original Message -
  From: Renchu Mathew ren...@cracknell.com
  To: Martin Perina mper...@redhat.com
  Cc: users@ovirt.org
  Sent: Tuesday, March 17, 2015 5:03:53 PM
  Subject: RE: [ovirt-users] Power Management config on Ovirt
  
  Hi Martin
  
  Yes, my test VM still running on this storage. Is it possible to do
  remote session and check this?
  
  Regards
  
  Renchu Mathew
  
  
  -Original Message-
  From: Martin Perina [mailto:mper...@redhat.com]
  Sent: Tuesday, March 17, 2015 7:30 PM
  To: Renchu Mathew
  Cc: users@ovirt.org
  Subject: Re: [ovirt-users] Power Management config on Ovirt
  
  Hi,
  
  this is what happened (at least what I was able to read from log):
  
  18:18:02 - host node02 changed status to Connecting
  
- that's OK, but prior to this I can see many errors
  from ConnectStorage*Commands. Are you sure that your storage
  was ok prior to fencing test?
  
  18:18:51 - host node02 changed status to Non Responsive
  
- that's OK, non responding treatment started with SSH Soft Fencing
  
  
  18:18:55 - SSH Soft Fencing failed, because of no route to host node02
  
- that's OK
  
  
  18:18:56 - power management stop was executed using node01 as fence
  proxy
  
- problem started here, communication with node01 timed out, we were
  not able to find any other suitable fence proxy
  
  18:21:57 - host node01 changed status to Connecting
  
  
  From this point there is nothing that can be done, because the engine
  cannot communicate with any host, so it cannot fix anything.
  
  When you are doing those tests you need at least one functional host,
  otherwise you are not able to execute any fence action.
  
  
  Martin Perina
  
  
  - Original Message -
   From: Renchu Mathew ren...@cracknell.com
   To: Martin Perina mper...@redhat.com
   Cc: users@ovirt.org
   Sent: Tuesday, March 17, 2015 3:53:24 PM
   Subject: RE: [ovirt-users] Power Management config on Ovirt
   
   Hi Martin,
   
   Please find attached the log files.
   
   Regards
   
   Renchu Mathew
   
   -Original Message-
   From: Martin Perina [mailto:mper...@redhat.com]
   Sent: Tuesday, March 17, 2015 6:40 PM
   To: Renchu Mathew
   Cc: users@ovirt.org
   Subject: Re: [ovirt-users] Power Management config on Ovirt
   
   Hi,
   
   please attach new logs, so we can investigate what has happened.
   
   Thanks
   
   Martin Perina
   
   - Original Message -
From: Renchu Mathew ren...@cracknell.com
To: Martin Perina mper...@redhat.com
Cc: Piotr Kliczewski piotr.kliczew...@gmail.com,
users@ovirt.org
Sent: Tuesday, March 17, 2015 3:34:11 PM

Re: [ovirt-users] Power Management config on Ovirt

2015-03-18 Thread Eli Mesika


- Original Message -
 From: Renchu Mathew ren...@cracknell.com
 To: Eli Mesika emes...@redhat.com
 Cc: Martin Perina mper...@redhat.com, users@ovirt.org
 Sent: Wednesday, March 18, 2015 3:15:40 PM
 Subject: RE: [ovirt-users] Power Management config on Ovirt
 
 Hi Eli,
 
 Those 2 hosts are connected with Fujitsu iRMC management port and power
 management is configured with ipmi. So it can use this connection to fence
 the other node, is it correct?

No, keep in mind that the one that communicates with the proxy host is the
oVirt engine, so, if it is not accessible, the oVirt engine cannot use it.
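
To rule out the iRMC/ipmilan side itself, the fence agent can also be called
directly from a working host (a sketch; substitute your iRMC address and
credentials, and add -P if the BMC only speaks IPMI lanplus):

# fence_ipmilan -a <iRMC address> -l <user> -p <password> -o status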


 
 Regards
 
 Renchu Mathew
 
 -Original Message-
 From: Eli Mesika [mailto:emes...@redhat.com]
 Sent: Wednesday, March 18, 2015 4:31 PM
 To: Renchu Mathew
 Cc: Martin Perina; users@ovirt.org
 Subject: Re: [ovirt-users] Power Management config on Ovirt
 
 
 
 - Original Message -
  From: Renchu Mathew ren...@cracknell.com
  To: Martin Perina mper...@redhat.com
  Cc: users@ovirt.org
  Sent: Wednesday, March 18, 2015 2:24:46 PM
  Subject: Re: [ovirt-users] Power Management config on Ovirt
  
  Hi Martin,
  
  My setup meets all those requirements and I am able to migrate the VM
  from one host to another manually. Once the network cable is pulled off
  from one of the servers, the other server also shuts down.
 
 Hi
 Sorry for jumping in late, yesterday was an election day in ISRAEL...
 
 If the other server shuts down when you unplug the first one and you have
 only 2 hosts, then no fencing will take place, since there is no available
 proxy host to perform the operation.
 
  
  [...]

Re: [ovirt-users] Ovirt resilience policy / HA

2015-03-18 Thread Guillaume Penin

Hi Darell,

Sorry for my late reply.

I've been able to test the 2 different scenarios :

- Host not responding => Host fenced => HA VMs restarted on another
Host.
- Host not operational => Host not fenced, resilience policy configured
to Migrate Virtual Machines => All VMs migrated to another Host.


Thank you very much for your answer.

Kind regards,

Le 2015-03-17 14:59, Darrell Budic a écrit :

Resilience policy refers to migration behavior only. If VDSM on a host
node detects a storage or network problem, for instance, it will
migrate All, HA, or no VMs to a new node.

Sounds like you’re thinking in terms of “I want Ovirt to restart these
VMs if the host dies”; for that, set HA on the VMs you want it to
restart if the VM dies for whatever reason.


On Mar 16, 2015, at 3:34 PM, Guillaume Penin 
guilla...@onlineacid.com wrote:


Hi all,

I'm building a test ovirt (3.5.1) infrastructure, based on 3 ovirt 
nodes and 1 ovirt engine.


Everything runs (almost) fine, but i don't exactly understand the 
interaction between resilience policy (Cluster) and HA (VM).


= What I understand, in case of host failure :

- Setting resilience policy to :

   - Migrate Virtual Machines = All VMs (HA and non HA) will be 
started on another host.
   - Migrate only Highly Available Virtual Machines = HA VMs only 
will be started on another host.
   - Do Not Migrate Virtual Machines = HA and non HA VMs won't be 
started on another host.


= In practice :

   - No matter what parameter i use in resilience policy, HA VMs only 
will be started on another host in case of a host failure.


Is this the expected behaviour ? Am I misunderstanding the way it 
works ?


Kind regards,
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Reinstalling a new oVirt Manager

2015-03-18 Thread Simone Tiraboschi


- Original Message -
 From: VONDRA Alain avon...@unicef.fr
 To: VONDRA Alain avon...@unicef.fr, Simone Tiraboschi 
 stira...@redhat.com
 Cc: users@ovirt.org
 Sent: Wednesday, March 18, 2015 12:42:10 PM
 Subject: RE: [ovirt-users] Reinstalling a new oVirt Manager
 
 Do you think that I can import the SDs in my new Data Center without any risk
 to destroy the VMs inside them ??
 Thank you for your advice.

If no other engine is managing them, you can safely import the existing storage 
domain.
As usual, it is always a good idea to keep a backup.

 
 
 Alain VONDRA
 Chargé d'exploitation des Systèmes d'Information
 Direction Administrative et Financière
 +33 1 44 39 77 76
 UNICEF France
 3 rue Duguay Trouin  75006 PARIS
 www.unicef.fr
 
 
 
 
 -Message d'origine-
 De : users-boun...@ovirt.org [mailto:users-boun...@ovirt.org] De la part de
 VONDRA Alain
 Envoyé : mardi 17 mars 2015 23:33
 À : Simone Tiraboschi
 Cc : users@ovirt.org
 Objet : Re: [ovirt-users] Reinstalling a new oVirt Manager
 
 Ok great, I can see the different storage that I want to import, but I can't
 see the VMs on the un-connected pool.
 So anyway I have to go further and import the storage domain passing the
 alert message.
 Am I right ?
 No other way to see the VMs in SD :
 
 [...]
 
 
 
 
 
 
 
  Is there any command to verify if the storage domain contains VMs or not ?
 
 You can execute this on one of your hosts.
 
 # vdsClient -s 0 getStorageDomainsList
 to get the storage domain list, then
 
 # vdsClient -s 0 getStorageDomainInfo <domain UUID> for each of them till you
 identify the domain you are looking for.
 You can also find the pool UUID there.
 
 Then
 # vdsClient -s 0 getVmsList <pool UUID> <domain UUID> to get the list of VMs
 on that 

[ovirt-users] oVirt Weekly Sync Meeting: March 18, 2015

2015-03-18 Thread Brian Proffitt
Minutes: http://ovirt.org/meetings/ovirt/2015/ovirt.2015-03-18-14.01.html
Minutes (text): http://ovirt.org/meetings/ovirt/2015/ovirt.2015-03-18-14.01.txt
Log: http://ovirt.org/meetings/ovirt/2015/ovirt.2015-03-18-14.01.log.html  

Meeting summary
---
* Agenda and Roll Call  (bkp, 14:02:30)
  * infra update  (bkp, 14:02:30)
  * 3.5.z updates  (bkp, 14:02:30)
  * 3.6 status  (bkp, 14:02:30)
  * Conferences and Workshops  (bkp, 14:02:30)
  * other topics  (bkp, 14:02:32)

* infra update  (bkp, 14:05:00)
  * infra update Upstream is mostly stable afaik, other than some gerrit
issues dcaro is looking into  (bkp, 14:13:33)
  * infra update We're waiting for a memory upgrade due to hosts in phx
that will allow us to add more slaves to Jenkins  (bkp, 14:13:36)
  * infra update There are a few slaves out of the pool currently
upstream, waiting to be checked up, but no updates  (bkp, 14:13:39)
  * infra update Some duplicated gerrit accounts were also fixed  (bkp,
14:13:42)

* 3.5.z updates  (bkp, 14:13:50)
  * 3.5.z updates Full status at
http://lists.ovirt.org/pipermail/users/2015-March/031881.html  (bkp,
14:18:13)
  * 3.5.z updates 3.5.2 RC2 build just completed, testing in progress.
Released today, if tests passed  (bkp, 14:18:16)
  * 3.5.z updates Some blockers open, some of them should be fixed by
end of week  (bkp, 14:18:19)
  * 3.5.z updates RC2 won't become GA due to existing blockers  but at
least we can test fixed bugs  (bkp, 14:18:22)
  * 3.5.z updates Another RC will likely be available next week  (bkp,
14:18:25)
  * 3.5.z updates We'll need to test it on Centos 7.1, too  (bkp,
14:18:29)

* 3.6 status  (bkp, 14:18:29)
  * 3.6 status integration Full status at:
http://lists.ovirt.org/pipermail/users/2015-March/031878.html  (bkp,
14:22:06)
  * 3.6 status integration Bug count on rise, no blockers yet  (bkp,
14:22:09)
  * 3.6 status integration There is about one month before feature
submission is closed  (bkp, 14:22:12)
  * 3.6 status integration We had some progress with hosted engine
features, moving the conf to shared storage and provisioning
additional hosts by UX. But nothing testable yet  (bkp, 14:22:15)
  * status UX Not many updates, we are making GOOD progress on the
tooltips, stuff is being merged, hopefully soon we can mark it
complete.  (bkp, 14:24:16)
  * 3.6 status network No important milestones this week, nor last week.
Progress proceeding apace.  (bkp, 14:26:49)
  * 3.6 status network Some testing started on our host networking API
feature even though it's not even merged! Some issues discovered.
(bkp, 14:26:52)
  * 3.6 status network And the SR-IOV feature is going well, should
start being reviewed/merged soon enough.  (bkp, 14:26:55)
  * 3.6 status Gluster No updates this week.  (bkp, 14:29:38)
  * 3.6 status storage Good progress all round, starting to see first
drops of features to QA stakeholders (draft builds)  (bkp, 14:32:01)
  * 3.6 status virt  (bkp, 14:34:53)
  * 3.6 status virt No updates this week  (bkp, 14:35:10)
  * 3.6 status Node No updates this week  (bkp, 14:37:18)
  * 3.6 status SLA No updates this week  (bkp, 14:40:37)

* conferences and workshops  (bkp, 14:40:52)
  * conferences and workshops FOSSAsia went very well. Talks were
well-received, and we had great attendance at the oVirt workshop.
(bkp, 14:41:27)
  * conferences and workshops James Jiang from Cloud-Times came in from
Beijing and spoke on VDI. He outlined use cases for their commercial
version of oVirt that deploy up to 10,000 VMs!  (bkp, 14:41:30)
  * conferences and workshops Reminder: KVM Forum registration is now
open
http://events.linuxfoundation.org/events/kvm-forum/attend/register
(bkp, 14:41:34)
  * conferences and workshops KVM Forum CfP is open, too, at:
http://events.linuxfoundation.org/events/kvm-forum/program/cfp
(bkp, 14:41:37)
  * conferences and workshops Please note, again: there *will* be an
official oVirt track in KVM Forum this year that will serve as the
oVirt Workshop, so keep that in mind when submitting proposals.
(bkp, 14:41:40)
  * conferences and workshops CfP is open for 10th Workshop on
Virtualization in High-Performance Cloud Computing (VHPC '15), in
conjunction with Euro-Par 2015, August 24-28, Vienna, Austria  (bkp,
14:41:44)
  * conferences and workshops oss2015 in Florence, Italy is coming up on
May 16-17: we'll have an oVirt session on May 16 at 17:00  (bkp,
14:41:48)

* Other Topics  (bkp, 14:42:37)
  * ACTION: Everyone make a note of the sync time next week: it's 1000
EDT.  (bkp, 14:45:23)

Meeting ended at 14:45:46 UTC.




Action Items

* Everyone make a note of the sync time next week: it's 1000 EDT.




Action Items, by person
---
* **UNASSIGNED**
  * Everyone make a note of the sync time next week: it's 1000 EDT.




People Present (lines said)

Re: [ovirt-users] Reinstalling a new oVirt Manager

2015-03-18 Thread VONDRA Alain
What do you think about it Simone ?
Thanks



 Do you think that I can import the SDs in my new Data Center without
 any risk to destroy the VMs inside them ??
 Thank you for your advice.

 If no other engine is managing them, you can safely import the existing 
 storage domain.
 As usual, it is always a good idea to keep a backup.

There is no other engine anymore; when you say to keep a backup, do you mean
the engine or the SAN storage ???
What I am afraid about is the data in the iSCSI SAN; I can't make a backup of
this.




 Alain VONDRA
 Chargé d'exploitation des Systèmes d'Information Direction
 Administrative et Financière
 +33 1 44 39 77 76
 UNICEF France
 3 rue Duguay Trouin  75006 PARIS
 www.unicef.fr




 -Message d'origine-
 De : users-boun...@ovirt.org [mailto:users-boun...@ovirt.org] De la
 part de VONDRA Alain Envoyé : mardi 17 mars 2015 23:33 À : Simone
 Tiraboschi Cc : users@ovirt.org Objet : Re: [ovirt-users] Reinstalling
 a new oVirt Manager

 Ok great, I can see the different storage that I want to import, but I
 can't see the VMs on the un-connected pool.
 So anyway I have to go further and import the storage domain passing
 the alert message.
 Am I right ?
 No other way to see the VMs in SD :

 [...]






 
  Is there any command to verify if the storage domain contains VMs or not ?

 You can execute this on one of your hosts.

 # 

[ovirt-users] bonding 802.3ad mode

2015-03-18 Thread Nathanaël Blanchet

Hi all,

I usually create a mode 4 bond0 interface with two 1 Gb/s interfaces
on all my hosts, and ethtool bond0 gives me a functional 2000Mb/s.
However, when importing a VM from the export domain (NFS with a speed of
4GB/s), I always have this alert:
Host siple has network interface which exceeded the defined threshold 
[95%] (em3: transmit rate[0%], receive rate [100%])

It seems that the second NIC never works while the first one is overloaded.
Is this expected behaviour? I believed that the flow was balanced
between the two interfaces in 802.3ad mode.



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] [ANN] oVirt 3.5.2 Second Release Candidate is now available for testing

2015-03-18 Thread Mike
Hi Everyone

I have a two node hosted engine cluster that's been running for a month
or two. 

NFS is used for the VMs, shared off the nodes on a second network
interface with different hostnames, which I hope makes it easier to migrate later on.
NFS 172.16.67.0/24 ov1-nfs.domain.dom on .1 and ov2-nfs.domain.dom
on .2. The NFS shares are working.

Management net is 10.10.10.224/28

Last night the cluster had communication errors, but I could not find
any issues; all nodes can ping & ssh to each other and to the engine.

Today, it got worse: the engine migrated all but 3 VMs to OV2, the node
with the engine. The VMs still on OV1 are there because the migration
for those failed. I can't manually migrate anything back to OV1. I
eventually shut down the engine and started it on OV1, but still no joy.

The VMs are alive, both on OV1 & OV2. OV2 is currently in local
maintenance to stop the engine moving and stop the email alerts.

I have been through the logs; I see what may be a cert issue in libvirtd.log on
the receiving host?

Any help appreciated.
Mike


[root@ov1 ~]#  libvirtd.log
2015-03-18 15:42:17.387+: 3017: error : 
virNetTLSContextValidCertificate:1008 : Unable to verify TLS peer: The peer did 
not send any certificate.

2015-03-18 15:42:17.387+: 3017: warning : 
virNetTLSContextCheckCertificate:1142 : Certificate check failed Unable to 
verify TLS peer: The peer did not send any certificate.

2015-03-18 15:42:17.387+: 3017: error : 
virNetTLSContextCheckCertificate:1145 : authentication failed: Failed to verify 
peer's certificate
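
The "peer did not send any certificate" errors suggest the client certificate
on the sending host is missing, expired, or not trusted. A quick sanity check
on both hosts (a sketch; these are the usual vdsm/libvirt certificate paths on
an oVirt host, adjust if your layout differs):

# openssl x509 -in /etc/pki/vdsm/certs/vdsmcert.pem -noout -dates -subject
# openssl x509 -in /etc/pki/libvirt/clientcert.pem -noout -dates -subject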

[root@ov2 ~]#  vdsm.log
Thread-49490::DEBUG::2015-03-18 
15:42:17,294::migration::298::vm.Vm::(_startUnderlyingMigration) 
vmId=`b44b2182-f943-4987-8421-8a98fd2a04d4`::starting migration to 
qemu+tls://ov1.domain.dom/system with miguri tcp://10.10.10.227

Thread-49525::DEBUG::2015-03-18 15:42:17,296::migration::361::vm.Vm::(run) 
vmId=`b44b2182-f943-4987-8421-8a98fd2a04d4`::migration downtime thread started

Thread-49526::DEBUG::2015-03-18 
15:42:17,297::migration::410::vm.Vm::(monitor_migration) 
vmId=`b44b2182-f943-4987-8421-8a98fd2a04d4`::starting migration monitor thread

Thread-49490::DEBUG::2015-03-18 
15:42:17,388::libvirtconnection::143::root::(wrapper) Unknown libvirterror: 
ecode: 9 edom: 10 level: 2 message: operation failed: Failed to connect to 
remote libvirt URI qemu+tls://ov1.domain.dom/system

Thread-49490::DEBUG::2015-03-18 15:42:17,390::migration::376::vm.Vm::(cancel) 
vmId=`b44b2182-f943-4987-8421-8a98fd2a04d4`::canceling migration downtime thread

Thread-49525::DEBUG::2015-03-18 15:42:17,391::migration::373::vm.Vm::(run) 
vmId=`b44b2182-f943-4987-8421-8a98fd2a04d4`::migration downtime thread exiting

Thread-49490::DEBUG::2015-03-18 15:42:17,391::migration::470::vm.Vm::(stop) 
vmId=`b44b2182-f943-4987-8421-8a98fd2a04d4`::stopping migration monitor thread

Thread-49490::ERROR::2015-03-18 15:42:17,393::migration::161::vm.Vm::(_recover) 
vmId=`b44b2182-f943-4987-8421-8a98fd2a04d4`::operation failed: Failed to 
connect to remote libvirt URI qemu+tls://ov1.domain.dom/system


[root@ov1 ~]# cat /var/log/vdsm/vdsm.log|grep MY_VM
Thread-7589263::DEBUG::2015-03-18 
15:22:01,936::BindingXMLRPC::1133::vds::(wrapper) client [10.10.10.228]::call 
vmMigrationCreate with ({'status': 'Up', 'acpiEnable': 'true', 
'emulatedMachine': 'rhel6.5.0', 'afterMigrationStatus': '', 'tabletEnable': 
'true', 'vmId': 'b44b2182-f943-4987-8421-8a98fd2a04d4', 'memGuaranteedSize': 
2048, 'transparentHugePages': 'true', 'displayPort': '5929', 
'displaySecurePort': '-1', 'spiceSslCipherSuite': 'DEFAULT', 'cpuType': 
'SandyBridge', 'smp': '2', 'migrationDest': 'libvirt', 'custom': {}, 'vmType': 
'kvm', '_srcDomXML': domain type='kvm' id='58'\n  nameMY_VM/name\n  
uuidb44b2182-f943-4987-8421-8a98fd2a04d4/uuid\n  memory 
unit='KiB'2097152/memory\n  currentMemory 
unit='KiB'2097152/currentMemory\n  vcpu placement='static' 
current='2'16/vcpu\n  cputune\nshares1020/shares\n  /cputune\n  
sysinfo type='smbios'\nsystem\n  entry 
name='manufacturer'oVirt/entry\n  entry name='product'oVirt Node
 /entry\n  entry name='version'6-6.el6.centos.12.2/entry\n  entry 
name='serial'3637-3434-5A43-3234-313130484A52/entry\n  entry 
name='uuid'b44b2182-f943-4987-8421-8a98fd2a04d4/entry\n/system\n  
/sysinfo\n  os\ntype arch='x86_64' machine='rhel6.5.0'hvm/type\n
smbios mode='sysinfo'/\n  /os\n  features\nacpi/\n  /features\n  
cpu mode='custom' match='exact'\nmodel 
fallback='allow'SandyBridge/model\ntopology sockets='16' cores='1' 
threads='1'/\n  /cpu\n  clock offset='variable' adjustment='0' 
basis='utc'\ntimer name='rtc' tickpolicy='catchup'/\ntimer 
name='pit' tickpolicy='delay'/\ntimer name='hpet' present='no'/\n  
/clock\n  on_poweroffdestroy/on_poweroff\n  
on_rebootrestart/on_reboot\n  on_crashdestroy/on_crash\n  devices\n   
 emulator/usr/libexec/qemu-kvm/emulator\ndisk type='file' 
device='disk' snapshot='no'\n  driver name='qemu' type='raw' 

Re: [ovirt-users] Reinstalling a new oVirt Manager

2015-03-18 Thread VONDRA Alain
So I have imported the SD and no VMs are seen...
What can I do ?
I'm sure there are many VMs in this SD, why can't I reach them ?
I've put the SD in Maintenance now.
Help 


 What do you think about it Simone ?
 Thanks



  Do you think that I can import the SDs in my new Data Center without
  any risk to destroy the VMs inside them ??
  Thank you for your advice.

  If no other engine is managing them, you can safely import the
  existing storage domain.
  As usual, it is always a good idea to keep a backup.

 There is no other engine anymore; when you say to keep a backup, do you
 mean the engine or the SAN storage ???

If you are afraid of losing your VM images, I am talking about backing up the
storage.

 What I am afraid about is the data in the iSCSI SAN; I can't make a
 backup of this.

A lot of SANs provide backup or snapshot capabilities.
Otherwise, once mounted, an iSCSI device is just a block device so, assuming
that you have enough free space somewhere, at least you can still dump it.

 
 
  Alain VONDRA
  Chargé d'exploitation des Systèmes d'Information Direction
  Administrative et Financière
  +33 1 44 39 77 76
  UNICEF France
  3 rue Duguay Trouin  75006 PARIS
  www.unicef.fr




  -Message d'origine-
  De : users-boun...@ovirt.org [mailto:users-boun...@ovirt.org] De la
  part de VONDRA Alain Envoyé : mardi 17 mars 2015 23:33 À : Simone
  Tiraboschi Cc : users@ovirt.org Objet : Re: [ovirt-users]
  Reinstalling a new oVirt Manager
 
  Ok great, I can see the different storage that I want to import, but
  I can't see the VMs on the un-connected pool.
  So anyway I have to go further and import the storage domain passing
  the alert message.
  Am I right ?
  No other way to see the VMs in SD :
 
  [...]

Re: [ovirt-users] Reinstalling a new oVirt Manager

2015-03-18 Thread Simone Tiraboschi


- Original Message -
 From: VONDRA Alain avon...@unicef.fr
 To: VONDRA Alain avon...@unicef.fr, Simone Tiraboschi 
 stira...@redhat.com
 Cc: users@ovirt.org
 Sent: Wednesday, March 18, 2015 4:40:09 PM
 Subject: RE: [ovirt-users] Reinstalling a new oVirt Manager
 
 What do you think about it Simone ?
 Thanks
 
 
 
  Do you think that I can import the SDs in my new Data Center without
  any risk to destroy the VMs inside them ??
  Thank you for your advice.
 
  If no other engine is managing them, you can safely import the existing
  storage domain.
  As usual, it is always a good idea to keep a backup.
 
 There is no other engine anymore; when you say to keep a backup, do you
 mean the engine or the SAN storage ???

If you are afraid of losing your VM images, I am talking about backing up the
storage.

 What I am afraid about is the data in the iSCSI SAN; I can't make a backup of
 this.

A lot of SANs provide backup or snapshot capabilities.
Otherwise, once mounted, an iSCSI device is just a block device so, assuming
that you have enough free space somewhere, at least you can still dump it.
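
For example, a raw dump can be taken from any host that sees the LUN (a
sketch; /dev/mapper/<LUN WWID> is a placeholder for the multipath device
backing the storage domain, and the destination needs enough free space):

# dd if=/dev/mapper/<LUN WWID> of=/backup/sd-dump.img bs=1M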
 
 
 
  Alain VONDRA
  Chargé d'exploitation des Systèmes d'Information Direction
  Administrative et Financière
  +33 1 44 39 77 76
  UNICEF France
  3 rue Duguay Trouin  75006 PARIS
  www.unicef.fr
 
 
 
 
  -Message d'origine-
  De : users-boun...@ovirt.org [mailto:users-boun...@ovirt.org] De la
  part de VONDRA Alain Envoyé : mardi 17 mars 2015 23:33 À : Simone
  Tiraboschi Cc : users@ovirt.org Objet : Re: [ovirt-users] Reinstalling
  a new oVirt Manager
 
  Ok great, I can see the different storage that I want to import, but I
  can't see the VMs on the un-connected pool.
  So anyway I have to go further and import the storage domain passing
  the alert message.
  Am I right ?
  No other way to see the VMs in SD :
 
  [...]

Re: [ovirt-users] bonding 802.3ad mode

2015-03-18 Thread Alex Crow
The balancing on 802.3ad only occurs for different network flows based 
on a hash of source and destination MAC (or can be made to add IP 
addresses into the calculation). A single flow will only use a single 
NIC in ad mode.
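
If the switch configuration allows it, the transmit hash policy can be widened
so that different connections spread across the links (a sketch for an EL6
host; a single flow will still stay on one NIC):

# grep -i "hash policy" /proc/net/bonding/bond0
and in /etc/sysconfig/network-scripts/ifcfg-bond0:
BONDING_OPTS="mode=802.3ad miimon=100 xmit_hash_policy=layer3+4"

In oVirt the same string can usually be set as custom bonding options when
editing the bond in Setup Host Networks.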


Alex



On 18/03/15 16:17, Nathanaël Blanchet wrote:

Hi all,

I usually create a mode 4 bond0 interface with two 1 Gb/s interfaces
on all my hosts, and ethtool bond0 gives me a functional 2000Mb/s.
However, when importing a VM from the export domain (NFS with a speed
of 4GB/s), I always have this alert:
Host siple has network interface which exceeded the defined threshold 
[95%] (em3: transmit rate[0%], receive rate [100%])
It seems that the second NIC never works while the first one is
overloaded.
Is this expected behaviour? I believed that the flow was balanced
between the two interfaces in 802.3ad mode.







___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] [ANN] oVirt 3.5.2 Second Release Candidate is now available for testing

2015-03-18 Thread Sandro Bonazzola

The oVirt team is pleased to announce that the 3.5.2 Second Release Candidate 
is now
available for testing as of Mar 18th 2015.

The release candidate is available now for Fedora 20,
Red Hat Enterprise Linux 6.6, CentOS 6.6 (or similar) and
Red Hat Enterprise Linux 7.1, CentOS 7 (or similar).

This release of oVirt includes numerous bug fixes.
See the release notes [1] for a list of the new features and bugs fixed.

Please refer to release notes [1] for Installation / Upgrade instructions.
New oVirt Live and oVirt Node ISO will be available soon as well[2].

Please note that mirrors[3] may need usually one day before being synchronized.

Please refer to the release notes for known issues in this release.

[1] http://www.ovirt.org/OVirt_3.5.2_Release_Notes
[2] http://resources.ovirt.org/pub/ovirt-3.5-pre/iso/
[3] http://www.ovirt.org/Repository_mirrors#Current_mirrors

-- 
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Reinstalling a new oVirt Manager

2015-03-18 Thread VONDRA Alain
Is it normal that in the General tab of the imported storage domain, nothing is
allocated :


Size:   2047 GB
Available:  1576 GB
Used:   471 GB
Allocated:  [N/A]
Over Allocation Ratio:  0%




 So I have imported the SD and no VMs are seen...
 What can I do ?
 I'm sure there are many VMs in this SD, why can't I reach them ?
 I've put the SD in Maintenance now.
 Help 


 What do you think about it Simone ?
 Thanks



  Do you think that I can import the SDs in my new Data Center without
  any risk to destroy the VMs inside them ??
  Thank you for your advice.

  If no other engine is managing them, you can safely import the
  existing storage domain.
  As usual, it is always a good idea to keep a backup.

 There is no other engine anymore; when you say to keep a backup, do you
 mean the engine or the SAN storage ???

If you are afraid of losing your VM images, I am talking about backing up the
storage.

 What I am afraid about is the data in the iSCSI SAN; I can't make a
 backup of this.

A lot of SANs provide backup or snapshot capabilities.
Otherwise, once mounted, an iSCSI device is just a block device so, assuming
that you have enough free space somewhere, at least you can still dump it.

 
 
  Alain VONDRA
  Chargé d'exploitation des Systèmes d'Information Direction
  Administrative et Financière
  +33 1 44 39 77 76
  UNICEF France
  3 rue Duguay Trouin  75006 PARIS
  www.unicef.fr




  -Message d'origine-
  De : users-boun...@ovirt.org [mailto:users-boun...@ovirt.org] De la
  part de VONDRA Alain Envoyé : mardi 17 mars 2015 23:33 À : Simone
  Tiraboschi Cc : users@ovirt.org Objet : Re: [ovirt-users]
  Reinstalling a new oVirt Manager
 
  Ok great, I can see the different storage that I want to import, but
  I can't see the VMs on the un-connected pool.
  So anyway I have to go further and import the storage domain passing
  the alert message.
  Am I right ?
  No other way to see the VMs in SD :
 
  [...]

[ovirt-users] Live migration fails - domain not found -

2015-03-18 Thread Markus Stockhausen
Hi,


although we already upgraded several hypervisor nodes to Ovirt 3.5.1,
the newest upgrade has left the host in a very strange state. We did:

- Host was removed from cluster
- Ovirt 3.5 repo was activated on host
- Host was reinstalled from the engine

And we got:
- A host that is active and looks nice in the engine
- We can start/stop VMs on the host
- But we cannot live migrate machines to (or even away from) the host

Attached vdsm/libvirt/engine logs. Timestamps do not match as we
created them individually during different runs.

Somehow lost ...

Markus

*
libvirt on target host:

2015-03-18 16:18:48.691+0000: 2093: debug : qemuMonitorJSONCommandWithFd:286 :
Send command '{"execute":"qmp_capabilities","id":"libvirt-1"}' for write with
FD -1
2015-03-18 16:18:48.691+0000: 2092: debug : qemuMonitorJSONIOProcessLine:179 :
Line [{"QMP": {"version": {"qemu": {"micro": 2, "minor": 1, "major": 2},
"package": ""}, "capabilities": []}}]
2015-03-18 16:18:48.691+0000: 2092: debug : qemuMonitorJSONIOProcess:248 :
Total used 105 bytes out of 105 available in buffer
2015-03-18 16:18:48.692+0000: 2092: debug : qemuMonitorJSONIOProcessLine:179 :
Line [{"return": {}, "id": "libvirt-1"}]
2015-03-18 16:18:48.692+0000: 2092: debug : qemuMonitorJSONIOProcessLine:199 :
QEMU_MONITOR_RECV_REPLY: mon=0x7fb40c017670 reply={"return": {}, "id":
"libvirt-1"}
2015-03-18 16:18:48.692+0000: 2092: debug : qemuMonitorJSONIOProcess:248 :
Total used 35 bytes out of 35 available in buffer
2015-03-18 16:18:48.692+0000: 2093: debug : qemuMonitorJSONCommandWithFd:291 :
Receive command reply ret=0 rxObject=0x7fb445fbdb10
2015-03-18 16:18:48.692+0000: 2093: debug : qemuMonitorJSONCommandWithFd:286 :
Send command '{"execute":"query-chardev","id":"libvirt-2"}' for write with FD -1
2015-03-18 16:18:48.693+0000: 2092: debug : qemuMonitorJSONIOProcessLine:179 :
Line [{"return": [{"frontend-open": false, "filename": "spicevmc", "label":
"charchannel2"}, {"frontend-open": false, "filename":
"unix:/var/lib/libvirt/qemu/channels/d2d8bdfd-99a6-41c0-84e7-26e1d6a6057b.org.qemu.guest_agent.0,server",
"label": "charchannel1"}, {"frontend-open": false, "filename":
"unix:/var/lib/libvirt/qemu/channels/d2d8bdfd-99a6-41c0-84e7-26e1d6a6057b.com.redhat.rhevm.vdsm,server",
"label": "charchannel0"}, {"frontend-open": true, "filename":
"unix:/var/lib/libvirt/qemu/colvm60.monitor,server", "label": "charmonitor"}],
"id": "libvirt-2"}]
2015-03-18 16:18:48.693+0000: 2092: debug : qemuMonitorJSONIOProcessLine:199 :
QEMU_MONITOR_RECV_REPLY: mon=0x7fb40c017670 reply={"return": [{"frontend-open":
false, "filename": "spicevmc", "label": "charchannel2"}, {"frontend-open":
false, "filename":
"unix:/var/lib/libvirt/qemu/channels/d2d8bdfd-99a6-41c0-84e7-26e1d6a6057b.org.qemu.guest_agent.0,server",
"label": "charchannel1"}, {"frontend-open": false, "filename":
"unix:/var/lib/libvirt/qemu/channels/d2d8bdfd-99a6-41c0-84e7-26e1d6a6057b.com.redhat.rhevm.vdsm,server",
"label": "charchannel0"}, {"frontend-open": true, "filename":
"unix:/var/lib/libvirt/qemu/colvm60.monitor,server", "label": "charmonitor"}],
"id": "libvirt-2"}
2015-03-18 16:18:48.693+0000: 2092: debug : qemuMonitorJSONIOProcess:248 :
Total used 559 bytes out of 559 available in buffer
2015-03-18 16:18:48.693+0000: 2093: debug : qemuMonitorJSONCommandWithFd:291 :
Receive command reply ret=0 rxObject=0x7fb445ffe110
2015-03-18 16:18:48.694+0000: 2093: debug : qemuMonitorJSONCommandWithFd:286 :
Send command
'{"execute":"qom-list","arguments":{"path":"/machine/unattached/device[0]"},"id":"libvirt-3"}'
for write with FD -1
2015-03-18 16:18:48.694+0000: 2092: debug : qemuMonitorJSONIOProcess:248 :
Total used 0 bytes out of 1023 available in buffer
2015-03-18 16:18:48.695+0000: 2092: debug : qemuMonitorJSONIOProcessLine:179 :
Line [{"return": [{"name": "apic", "type": "child<kvm-apic>"}, {"name":
"filtered-features", "type": "X86CPUFeatureWordInfo"}, {"name":
"feature-words", "type": "X86CPUFeatureWordInfo"}, {"name": "apic-id", "type":
"int"}, {"name": "tsc-frequency", "type": "int"}, {"name": "model-id", "type":
"string"}, {"name": "vendor", "type": "string"}, {"name": "xlevel", "type":
"int"}, {"name": "level", "type": "int"}, {"name": "stepping", "type": "int"},
{"name": "model", "type": "int"}, {"name": "family", "type": "int"}, {"name":
"parent_bus", "type": "link<bus>"}, {"name": "kvm", "type": "bool"}, {"name":
"enforce", "type": "bool"}, {"name": "check", "type": "bool"}, {"name":
"hv-time", "type": "bool"}, {"name": "hv-vapic", "type": "bool"}, {"name":
"hv-relaxed", "type": "bool"}, {"name": "hv-spinlocks", "type": "int"},
{"name": "pmu", "type": "bool"}, {"name": "hotplugged", "type": "bool"},
{"name": "hotpluggable", "type": "bool"}, {"name": "realized", "type": "bool"},
{"name": "type", "type": "string"}], "id": "libvirt-3"}]
2015-03-18 16:18:48.695+0000: 2092: debug : qemuMonitorJSONIOProcessLine:199 :
QEMU_MONITOR_RECV_REPLY: mon=0x7fb40c017670 reply={"return": [{"name": "apic",
"type": "child<kvm-apic>"}, {"name": "filtered-features", "type":
"X86CPUFeatureWordInfo"}, {"name": "feature-words", "type":
"X86CPUFeatureWordInfo"}, {"name": "apic-id", "type": "int"}, {"name":
"tsc-frequency", "type": "int"}, {"name": "model-id", "type": "string"},
{"name": "vendor", "type": "string"}, {"name": "xlevel", "type": "int"},
{"name": "level", "type": "int"}, {"name":

Re: [ovirt-users] Live migration fails - domain not found -

2015-03-18 Thread Markus Stockhausen
 From: Paul Heinlein [heinl...@madboa.com]
 Sent: Wednesday, March 18, 2015 18:43
 To: Markus Stockhausen
 Cc: Users@ovirt.org
 Subject: Re: [ovirt-users] Live migration fails - domain not found -
 
 On Wed, 18 Mar 2015, Markus Stockhausen wrote:
 
  although we already upgraded several hypervisor nodes to Ovirt 3.5.1
  the newest upgrade has left the host in a very strange state. We did:
 
  - Host was removed from cluster
  - Ovirt 3.5 repo was activated on host
  - Host was reinstalled from the engine
 
  And we got:
  - A host that is active and looks nice in the engine
  - We can start/stop VMs on the host
  - But we cannot live migrate machines to (or even away from) the host
 
 Are the source and destination hypervisor hosts running the same OS
 revision (e.g., both running CentOS 6.6)?

Yes, both are FC20 (+virt-preview). In the meantime we found the error. It was
a network issue on the migration network that became clear after we
analyzed the vdsm logs on the migration source host. I opened an RFE
to identify the issue better next time.

https://bugzilla.redhat.com/show_bug.cgi?id=1203417

Markus
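
For anyone debugging a similar case, a first sanity check of the migration
network between two hosts could look like the sketch below. The destination
name is a hypothetical placeholder, and the MTU probe size assumes a standard
1500-byte MTU on the migration VLAN:

#!/bin/bash
# Sketch: basic reachability and MTU checks toward the migration target.
# DEST is a hypothetical placeholder for the peer's migration-network name.
DEST=dest-host.migration.example.org

getent hosts "${DEST}" || echo "name resolution failed for ${DEST}"
ping -c 3 "${DEST}"
# Catch MTU mismatches: send a non-fragmentable packet near the expected MTU
# (1472 = 1500 - 28 bytes of IP/ICMP headers; adjust for jumbo frames).
ping -c 3 -M do -s 1472 "${DEST}"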



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] [QE] oVirt 3.6.0 status

2015-03-18 Thread Sandro Bonazzola
Hi, here's an update on the 3.6 status on the integration / rel-eng side.
The tracker bug for 3.6.0 [1] currently shows no blockers.

There are 579 bugs [2] targeted to 3.6.0.

Team          NEW   ASSIGNED   POST   Total
docs           11          0      0      11
gluster        35          2      1      38
i18n            2          0      0       2
infra          82          7      8      97
integration    64          5      6      75
network        39          1      9      49
node           27          3      3      33
ppc             0          0      1       1
sla            52          3      2      57
spice           1          0      0       1
storage        72          5      7      84
ux             33          0     10      43
virt           73          5     10      88
Total         491         31     57     579


Feature submission is still open until 2015-04-22 as per the current release
schedule.
Maintainers: be sure to have your features tracked in the google doc[3]

[1] https://bugzilla.redhat.com/1155425
[2] 
https://bugzilla.redhat.com/buglist.cgi?quicksearch=target_release%3A3.6.0%20Product%3AoVirt%20status%3Anew%2Cassigned%2Cpost
[3] http://goo.gl/9X3G49

-- 
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] VM failed to start | Bad volume specification

2015-03-18 Thread Michal Skrivanek

On Mar 18, 2015, at 03:33 , Punit Dambiwal hypu...@gmail.com wrote:

 Hi,
 
 Is there any one from community can help me to solve this issue...??
 
 Thanks,
 Punit
 
 On Tue, Mar 17, 2015 at 12:52 PM, Punit Dambiwal hypu...@gmail.com wrote:
 Hi,
 
 I am facing one strange issue with ovirt/glusterfsstill didn't find this 
 issue is related with glusterfs or Ovirt
 
 Ovirt :- 3.5.1
 Glusterfs :- 3.6.1
 Host :- 4 Hosts (Compute+ Storage)...each server has 24 bricks
 Guest VM :- more then 100
 
 Issue :- When i deploy this cluster first time..it work well for me(all the 
 guest VM created and running successfully)but suddenly one day my one of 
 the host node rebooted and none of the VM can boot up now...and failed with 
 the following error Bad Volume Specification
 
 VMId :- d877313c18d9783ca09b62acf5588048
 
 VDSM Logs :- http://ur1.ca/jxabi

you've got timeouts while accessing storage…so I guess something got messed up 
on reboot, it may also be just a gluster misconfiguration…

 Engine Logs :- http://ur1.ca/jxabv
 
 
 [root@cpu01 ~]# vdsClient -s 0 getVolumeInfo 
 e732a82f-bae9-4368-8b98-dedc1c3814de 0002-0002-0002-0002-0145 
 6d123509-6867-45cf-83a2-6d679b77d3c5 9030bb43-6bc9-462f-a1b9-f6d5a02fb180
 status = OK
 domain = e732a82f-bae9-4368-8b98-dedc1c3814de
 capacity = 21474836480
 voltype = LEAF
 description =
 parent = ----
 format = RAW
 image = 6d123509-6867-45cf-83a2-6d679b77d3c5
 uuid = 9030bb43-6bc9-462f-a1b9-f6d5a02fb180
 disktype = 2
 legality = LEGAL
 mtime = 0
 apparentsize = 21474836480
 truesize = 4562972672
 type = SPARSE
 children = []
 pool =
 ctime = 1422676305
 -
 
 I opened same thread earlier but didn't get any perfect answers to solve this 
 issue..so i reopen it...
 
 https://www.mail-archive.com/users@ovirt.org/msg25011.html
 
 Thanks,
 Punit
 
 
 

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
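
Following up on the gluster misconfiguration suggested above: a first round
of health checks on one of the hosts could look like this sketch. VOLNAME is
a hypothetical placeholder for the volume backing the data domain; the
commands are plain GlusterFS 3.6 CLI:

#!/bin/bash
# Sketch: basic gluster health checks after an unexpected host reboot.
# VOLNAME is a hypothetical placeholder for the volume name.
VOLNAME=vmstore

gluster peer status                    # every peer should be Connected
gluster volume status "${VOLNAME}"     # every brick should show Online: Y
gluster volume heal "${VOLNAME}" info  # files still waiting for self-heal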


Re: [ovirt-users] VM failed to start | Bad volume specification

2015-03-18 Thread Punit Dambiwal
Hi All,

Does anyone have any idea about this problem...it seems it's a bug
either in oVirt or GlusterFS...that's why no one has an idea about
it...please correct me if I am wrong

Thanks,
Punit

On Wed, Mar 18, 2015 at 5:05 PM, Punit Dambiwal hypu...@gmail.com wrote:

 Hi Michal,

 Would you mind to let me know the possible messedup things...i will check
 and try to resolve itstill i am communicating gluster community to
 resolve this issue...

 But in the ovirtgluster setup is quite straightso how come it will
 be messedup with reboot ?? if it can be messedup with reboot then it seems
 not good and stable technology for the production storage

 Thanks,
 Punit

 On Wed, Mar 18, 2015 at 3:51 PM, Michal Skrivanek 
 michal.skriva...@redhat.com wrote:


 On Mar 18, 2015, at 03:33 , Punit Dambiwal hypu...@gmail.com wrote:

  Hi,
 
  Is there any one from community can help me to solve this issue...??
 
  Thanks,
  Punit
 
  On Tue, Mar 17, 2015 at 12:52 PM, Punit Dambiwal hypu...@gmail.com
 wrote:
  Hi,
 
  I am facing one strange issue with ovirt/glusterfsstill didn't find
 this issue is related with glusterfs or Ovirt
 
  Ovirt :- 3.5.1
  Glusterfs :- 3.6.1
  Host :- 4 Hosts (Compute+ Storage)...each server has 24 bricks
  Guest VM :- more then 100
 
  Issue :- When i deploy this cluster first time..it work well for me(all
 the guest VM created and running successfully)but suddenly one day my
 one of the host node rebooted and none of the VM can boot up now...and
 failed with the following error Bad Volume Specification
 
  VMId :- d877313c18d9783ca09b62acf5588048
 
  VDSM Logs :- http://ur1.ca/jxabi

 you've got timeouts while accessing storage…so I guess something got
 messed up on reboot, it may also be just a gluster misconfiguration…

  Engine Logs :- http://ur1.ca/jxabv
 
  
  [root@cpu01 ~]# vdsClient -s 0 getVolumeInfo
 e732a82f-bae9-4368-8b98-dedc1c3814de 0002-0002-0002-0002-0145
 6d123509-6867-45cf-83a2-6d679b77d3c5 9030bb43-6bc9-462f-a1b9-f6d5a02fb180
  status = OK
  domain = e732a82f-bae9-4368-8b98-dedc1c3814de
  capacity = 21474836480
  voltype = LEAF
  description =
  parent = ----
  format = RAW
  image = 6d123509-6867-45cf-83a2-6d679b77d3c5
  uuid = 9030bb43-6bc9-462f-a1b9-f6d5a02fb180
  disktype = 2
  legality = LEGAL
  mtime = 0
  apparentsize = 21474836480
  truesize = 4562972672
  type = SPARSE
  children = []
  pool =
  ctime = 1422676305
  -
 
  I opened same thread earlier but didn't get any perfect answers to
 solve this issue..so i reopen it...
 
  https://www.mail-archive.com/users@ovirt.org/msg25011.html
 
  Thanks,
  Punit
 
 
 



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Failed to read VM '[Empty Name]' OVF, it may be corrupted

2015-03-18 Thread Jon Archer
I did take a quick look through the ovf file but wasn't sure what would
be out of order. Also, since this happens with every vm that is exported,
and I've exported each VM several times, I did wonder whether the file
itself would really be the issue.


Nevertheless see attached for an example ovf.

Thanks

Jon

On 18/03/15 11:05, Tomas Jelinek wrote:

Hi Jon,

could you please attach the ovf files here?
Somewhere in the export domain you should have files with .ovf extension which 
is an XML describing the VM. I'd say they will be corrupted.

Thanx,
Tomas

- Original Message -

From: Jon Archer j...@rosslug.org.uk
To: users@ovirt.org
Sent: Wednesday, March 18, 2015 12:36:14 AM
Subject: [ovirt-users] Failed to read VM '[Empty Name]' OVF, it may be
corrupted

Hi all,

seeing a strange issue here, I'm currently in the process of migrating
from one ovirt setup to another and having trouble with the
export/import process.

The new setup is a 3.5 install with hosted engine and glusterfs the old
one is running on a nightly release (not too recent)

I have brought up an NFS export on the existing storage on the old
setup, successfully exported a number of VMs and imported them onto the
new system.

However, I came to move the last 4 VMs and am seeing an issue where
after attaching the export storage to the new setup I see no VMs in the
export storage to import and see this in the log:
2015-03-17 23:30:56,742 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.GetVmsInfoVDSCommand]
(ajp--127.0.0.1-8702-8) START, GetVmsInfoVDSCommand( storagePoolId =
0002-0002-0002-0002-0209, ignoreFailoverLimit = false,
storageDomainId = 86f85b1d-a9ef-4106-a4bf-eae19722d28a, vmIdList =
null), log id: e2a32ac
2015-03-17 23:30:56,766 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.GetVmsInfoVDSCommand]
(ajp--127.0.0.1-8702-8) FINISH, GetVmsInfoVDSCommand, log id: e2a32ac
2015-03-17 23:30:56,798 ERROR
[org.ovirt.engine.core.utils.ovf.OvfManager] (ajp--127.0.0.1-8702-8)
Error parsing OVF due to 2
2015-03-17 23:30:56,818 WARN
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(ajp--127.0.0.1-8702-8) Correlation ID: null, Call Stack: null, Custom
Event ID: -1, Message: Failed to read VM '[Empty Name]' OVF, it may be
corrupted
2015-03-17 23:30:56,867 ERROR
[org.ovirt.engine.core.utils.ovf.OvfManager] (ajp--127.0.0.1-8702-8)
Error parsing OVF due to 2
2015-03-17 23:30:56,884 WARN
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(ajp--127.0.0.1-8702-8) Correlation ID: null, Call Stack: null, Custom
Event ID: -1, Message: Failed to read VM '[Empty Name]' OVF, it may be
corrupted
2015-03-17 23:30:56,905 ERROR
[org.ovirt.engine.core.utils.ovf.OvfManager] (ajp--127.0.0.1-8702-8)
Error parsing OVF due to 2
2015-03-17 23:30:56,925 WARN
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(ajp--127.0.0.1-8702-8) Correlation ID: null, Call Stack: null, Custom
Event ID: -1, Message: Failed to read VM '[Empty Name]' OVF, it may be
corrupted
2015-03-17 23:30:56,943 ERROR
[org.ovirt.engine.core.utils.ovf.OvfManager] (ajp--127.0.0.1-8702-8)
Error parsing OVF due to 2
2015-03-17 23:30:56,992 WARN
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(ajp--127.0.0.1-8702-8) Correlation ID: null, Call Stack: null, Custom
Event ID: -1, Message: Failed to read VM '[Empty Name]' OVF, it may be
corrupted
2015-03-17 23:30:57,012 ERROR
[org.ovirt.engine.core.utils.ovf.OvfManager] (ajp--127.0.0.1-8702-8)
Error parsing OVF due to 2
2015-03-17 23:30:57,033 WARN
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(ajp--127.0.0.1-8702-8) Correlation ID: null, Call Stack: null, Custom
Event ID: -1, Message: Failed to read VM '[Empty Name]' OVF, it may be
corrupted
2015-03-17 23:30:57,071 ERROR
[org.ovirt.engine.core.utils.ovf.OvfManager] (ajp--127.0.0.1-8702-8)
Error parsing OVF due to 2
2015-03-17 23:30:57,091 WARN
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(ajp--127.0.0.1-8702-8) Correlation ID: null, Call Stack: null, Custom
Event ID: -1, Message: Failed to read VM '[Empty Name]' OVF, it may be
corrupted


I've brought up new export storage domains on both the new and old
cluster (and a separate storage array for that matter), all resulting in the
same messages.

Anyone any thoughts on these errors?

Thanks

Jon
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users



<?xml version='1.0' encoding='UTF-8'?>
<ovf:Envelope xmlns:ovf="http://schemas.dmtf.org/ovf/envelope/1/" xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData" xmlns:vssd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_VirtualSystemSettingData" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" ovf:version="3.5.0.0"><References><File ovf:href="2306daab-240d-4be1-81e6-04001cd5da7e/84c501cd-f2c3-4eeb-b585-b3e6e79413a8" ovf:id="84c501cd-f2c3-4eeb-b585-b3e6e79413a8"
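
As a quick cross-check of the suspected corruption, the OVF descriptors on
the export domain can be located and syntax-checked in bulk. A minimal
sketch, assuming an NFS export domain mounted at a path you substitute for
the placeholder below (xmllint ships with libxml2):

#!/bin/bash
# Sketch: check that every OVF descriptor on an export domain is at least
# well-formed XML. EXPORT_MNT is a hypothetical placeholder path.
EXPORT_MNT=/rhev/data-center/mnt/nfs-server:_export

find "${EXPORT_MNT}" -name '*.ovf' | while read -r ovf; do
    if xmllint --noout "${ovf}" 2>/dev/null; then
        echo "OK:  ${ovf}"
    else
        echo "BAD: ${ovf}"
    fi
done

Well-formedness alone will not catch every engine-side parse problem, but it
quickly separates truncated or mangled files from intact ones.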

[ovirt-users] Re: bonding 802.3ad mode

2015-03-18 Thread Xie, Chao
Yeah, Alex is right. And if you want to double the network’s speed for a
single flow, mode 0 is the only choice. But mode 0 seems not to be supported
in oVirt?

From: users-boun...@ovirt.org [mailto:users-boun...@ovirt.org] On behalf of Alex Crow
Sent: Thursday, March 19, 2015 0:25
To: users@ovirt.org
Subject: Re: [ovirt-users] bonding 802.3ad mode

The balancing on 802.3ad only occurs for different network flows based on a 
hash of source and destination MAC (or can be made to add IP addresses into the 
calculation). A single flow will only use a single NIC in ad mode.

Alex
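
To make that concrete: with the default layer2 transmit hash policy, the
kernel picks the slave roughly as (source MAC XOR destination MAC) modulo
the slave count (the real hash also XORs in the packet type ID), so every
frame of a given flow keeps hashing to the same NIC. A toy illustration,
with example MAC bytes picked arbitrarily:

#!/bin/bash
# Toy sketch of the bonding layer2 slave selection: the same MAC pair always
# yields the same index, which is why one NFS stream never exceeds one NIC.
SLAVES=2
src_last=0x1a   # last byte of the source MAC (example value)
dst_last=0x3c   # last byte of the destination MAC (example value)
echo "selected slave: $(( (src_last ^ dst_last) % SLAVES ))"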


On 18/03/15 16:17, Nathanaël Blanchet wrote:
Hi all,

I usually create a mode 4 bond0 interface with two 1 Gb/s interfaces on all
my hosts, and ethtool bond0 gives me a functional 2000Mb/s. However, when
importing a vm from the export domain (NFS with a speed of 4GB/s), I always
get this alert:
Host siple has network interface which exceeded the defined threshold [95%]
(em3: transmit rate [0%], receive rate [100%])
It seems that the second nic never works while the first one is overloaded.
Is this the expected behaviour? I believed that the flow was balanced between
the two interfaces in 802.3ad mode.





___

Users mailing list

Users@ovirt.org

http://lists.ovirt.org/mailman/listinfo/users



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Re: bonding 802.3ad mode

2015-03-18 Thread Dan Yasny
Mode 0 is not supported under a bridge, just like mode 6
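
For reference, a mode 4 bond as discussed in this thread is typically defined
like the sketch below on an EL/Fedora host. Treat it as illustrative only:
oVirt normally writes these files itself when bonding is configured from the
engine, and the device name and hash policy here are assumptions:

# /etc/sysconfig/network-scripts/ifcfg-bond0 -- illustrative sketch
DEVICE=bond0
ONBOOT=yes
BOOTPROTO=none
# 802.3ad (mode 4) spreads *different* flows across slaves; with layer2+3
# the hash also mixes in IP addresses. A single flow still uses one NIC.
BONDING_OPTS="mode=802.3ad miimon=100 xmit_hash_policy=layer2+3"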

On Wed, Mar 18, 2015 at 10:47 PM, Xie, Chao xiec.f...@cn.fujitsu.com
wrote:

  Yeah, Alex is right. And if you want to double the network’s speed for
 a single flow, mode 0 is the only choice. But mode 0 seems not to be
 supported in oVirt?



 *From:* users-boun...@ovirt.org [mailto:users-boun...@ovirt.org] *On behalf
 of* Alex Crow
 *Sent:* Thursday, March 19, 2015 0:25
 *To:* users@ovirt.org
 *Subject:* Re: [ovirt-users] bonding 802.3ad mode



 The balancing on 802.3ad only occurs for different network flows based on
 a hash of source and destination MAC (or can be made to add IP addresses
 into the calculation). A single flow will only use a single NIC in ad mode.

 Alex


  On 18/03/15 16:17, Nathanaël Blanchet wrote:

 Hi all,

 I usually create a mode 4 bond0 interface with two 1 Gb/s interfaces on
 all my hosts, and ethtool bond0 gives me a functional 2000Mb/s. However,
 when importing a vm from the export domain (NFS with a speed of 4GB/s), I
 always have this alert:

 Host siple has network interface which exceeded the defined threshold
 [95%] (em3: transmit rate[0%], receive rate [100%])
 It seems that the second nic never works while the first one is overloaded.
 Is it an expected behaviour? I believed that the flow was balanced between
 the two interfaces in 802.3ad mode.





  ___

 Users mailing list

 Users@ovirt.org

 http://lists.ovirt.org/mailman/listinfo/users





 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Live migration fails - domain not found -

2015-03-18 Thread Paul Heinlein

On Wed, 18 Mar 2015, Markus Stockhausen wrote:

although we already upgraded several hypervisor nodes to Ovirt 3.5.1 
the newest upgrade has left the host in a very strange state. We did:


- Host was removed from cluster
- Ovirt 3.5 repo was activated on host
- Host was reinstalled from the engine

And we got:
- A host that is active and looks nice in the engine
- We can start/stop VMs on the host
- But we cannot live migrate machines to (or even away from) the host


Are the source and destination hypervisor hosts running the same OS
revision (e.g., both running CentOS 6.6)?


--
Paul Heinlein
heinl...@madboa.com
45°38' N, 122°6' W
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users