Re: [Users] ovirt 3.3 vnic profile with bonding vlans problem

2014-04-04 Thread Sven Kieske
Hi,

and thanks for your effort.

The root cause for this was found with help
via IRC from apuimedo (Thanks again!)

oVirt utilizes libvirt for the network
QoS, which only sets it for protocol ip
and not for the whole device.

This leads to unrestricted outbound IPv6 traffic.

I'm currently trying to write a hook
to alter the tc filters for the VMs, so
they get restricted per device rather than
per protocol (which makes no sense at all, TBH).
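For reference, this is visible with tc on the VM's tap device on the host, and
roughly what such a hook has to do looks like this (the interface name vnet0
and the class/flow IDs are placeholders, not necessarily what libvirt/VDSM
use on a given host):

# inspect the shaping libvirt set up for the guest interface
tc qdisc show dev vnet0
tc filter show dev vnet0 parent 1:

# replace the ip-only filter with a catch-all that classifies every
# protocol (including IPv6) into the rate-limited class
tc filter del dev vnet0 parent 1: protocol ip prio 1
tc filter add dev vnet0 parent 1: protocol all prio 1 u32 match u32 0 0 flowid 1:1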

When I get the time I'll file a BZ too.

Thank you very, very much again, apuimedo
for pushing me in the right direction
and even proposing a solution!

On 03.04.2014 19:50, Gilad Chaplik wrote:
 Hi Sven,
 
 disclaimer: not familiar with this feature that much (although I should be),
 but it looks like the problem is in libvirt (according to your story).
 Googling 'outbound libvirt not working' shows that you're not the only one :)
 
 http://www.redhat.com/archives/libvir-list/2011-August/msg00341.html
 http://www.redhat.com/archives/libvir-list/2012-June/msg01306.html
 
 Thanks, 
 Gilad. 

-- 
Mit freundlichen Grüßen / Regards

Sven Kieske

Systemadministrator
Mittwald CM Service GmbH & Co. KG
Königsberger Straße 6
32339 Espelkamp
T: +49-5772-293-100
F: +49-5772-293-333
https://www.mittwald.de
Geschäftsführer: Robert Meyer
St.Nr.: 331/5721/1033, USt-IdNr.: DE814773217, HRA 6640, AG Bad Oeynhausen
Komplementärin: Robert Meyer Verwaltungs GmbH, HRB 13260, AG Bad Oeynhausen
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[Users] 3.5 virt feature overview

2014-04-04 Thread michal . skrivanek
BEGIN:VCALENDAR
PRODID:Zimbra-Calendar-Provider
VERSION:2.0
METHOD:REQUEST
BEGIN:VTIMEZONE
TZID:Europe/Belgrade
BEGIN:STANDARD
DTSTART:16010101T030000
TZOFFSETTO:+0100
TZOFFSETFROM:+0200
RRULE:FREQ=YEARLY;WKST=MO;INTERVAL=1;BYMONTH=10;BYDAY=-1SU
TZNAME:CET
END:STANDARD
BEGIN:DAYLIGHT
DTSTART:16010101T020000
TZOFFSETTO:+0200
TZOFFSETFROM:+0100
RRULE:FREQ=YEARLY;WKST=MO;INTERVAL=1;BYMONTH=3;BYDAY=-1SU
TZNAME:CEST
END:DAYLIGHT
END:VTIMEZONE
BEGIN:VEVENT
UID:07e7b6c9-ea50-4c4a-81de-8fc6fbf41e4e
SUMMARY:3.5 virt feature overview
ATTENDEE;CN=Tomas Jelinek;ROLE=REQ-PARTICIPANT;PARTSTAT=NEEDS-ACTION;RSVP=TR
 UE:mailto:tjeli...@redhat.com
ATTENDEE;CN=Omer Frenkel;ROLE=REQ-PARTICIPANT;PARTSTAT=NEEDS-ACTION;RSVP=TRU
 E:mailto:ofren...@redhat.com
ATTENDEE;CN=users@oVirt.org;ROLE=OPT-PARTICIPANT;PARTSTAT=NEEDS-ACTION;RSVP=
 TRUE:mailto:users@ovirt.org
ORGANIZER;CN=Michal Skrivanek:mailto:michal.skriva...@redhat.com
DTSTART;TZID=Europe/Belgrade:20140410T160000
DTEND;TZID=Europe/Belgrade:20140410T164500
STATUS:CONFIRMED
CLASS:PUBLIC
X-MICROSOFT-CDO-INTENDEDSTATUS:BUSY
TRANSP:OPAQUE
LAST-MODIFIED:20140404T075150Z
DTSTAMP:20140404T075150Z
SEQUENCE:0
DESCRIPTION:The following is a new meeting request:\n\nSubject: 3.5 virt fea
 ture overview \nOrganizer: Michal Skrivanek michal.skriva...@redhat.com 
 \n\nTime: Thursday\, April 10\, 2014\, 4:00:00 PM - 4:45:00 PM GMT +01:00 Be
 lgrade\, Bratislava\, Budapest\, Ljubljana\, Prague\n \nRequired: tjelinek@r
 edhat.com\; ofren...@redhat.com \nOptional: users@ovirt.org \n\n*~*~*~*~*~*~
 *~*~*~*\n\nHi all\, \n\nWe will present virt features for version 3.5: \n\n*
  Edit running VM\, Omer\, ~5min \n* Instance Types\, Tomas\, ~15min \n* all 
 the other features:)\, Michal\, ~10min  \n* QA\, ~10min\n\nDial in: \nhttps
 ://www.intercallonline.com/listNumbersByCode.action?confCode=7948954260 \nco
 nf id: 794 895 4260 # \n\nThanks\,\nmichal
END:VEVENT
END:VCALENDAR
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[Users] scheduling storage migration

2014-04-04 Thread Jorick Astrego
Hi,

I don't know if it's possible yet, but can we schedule the (live) storage
migration?

It would be awesome to have, for example, some VM data on SSD storage that
migrates to HDD storage when the VM is shut down. Or have VMs with high
IO load during specific times migrate to a high-IO storage domain during
these hours.

I realize it will generate extra load while migrating, but this can be
planned for. Maybe the guys from GlusterFS could enable storage
migration on their side so the migration can execute on the storage
server, triggered by oVirt; that would be even better performance-wise.

Kind regards,

Jorick Astrego
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[Users] Shrinking virtual size of VM

2014-04-04 Thread Yusufi M R
Hello Everyone,

Is it possible to shrink the virtual disk size of a VM?

Regards,
Yusuf
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[Users] [QE] oVirt 3.3.5 status

2014-04-04 Thread Sandro Bonazzola
Hi,
  we're going to start composing the 3.3.5 GA yum repository on 2014-04-09 09:00 UTC,
following the published timeline [1].

A bug tracker is available at [2] and it shows no bugs blocking the release.

All non-blocking bugs still open with target 3.3.5 have been re-targeted.

Maintainers:
Please build packages to be included in this RC *BEFORE* 2014-04-09 09:00 UTC.
Please fill in the release notes; the page has been created here [4].

For those who want to help test the release, I suggest adding yourself to
the testing page [3].
The RC repository is available as described in [4].
Nightly builds are also available as described in [1].

[1] http://www.ovirt.org/OVirt_3.3.z_release-management
[2] http://bugzilla.redhat.com/1071867
[3] http://www.ovirt.org/Testing/Ovirt_3.3.5_testing
[4] http://www.ovirt.org/OVirt_3.3.5_release_notes

Thanks,
-- 
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[Users] Fail snapshot

2014-04-04 Thread Kevin Tibi
Hi,

I have a problem when I try to snapshot a VM.

oVirt Engine self-hosted 3.4. Two nodes (host01 and host02).

my engine.log :

2014-04-04 12:30:03,013 INFO
 [org.ovirt.engine.core.bll.CreateAllSnapshotsFromVmCommand]
(org.ovirt.thread.pool-6-thread-24) Ending command successfully:
org.ovirt.engine.core.bll.CreateAllSnapshotsFromVmCommand
2014-04-04 12:30:03,028 INFO
 [org.ovirt.engine.core.vdsbroker.vdsbroker.SnapshotVDSCommand]
(org.ovirt.thread.pool-6-thread-24) START, SnapshotVDSCommand(HostName =
host01, HostId = fcb9a5cf-2064-42a5-99fe-dc56ea39ed81,
vmId=cb038ccf-6c6f-475c-872f-ea812ff795a1), log id: 36463977
2014-04-04 12:30:03,075 ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.SnapshotVDSCommand]
(org.ovirt.thread.pool-6-thread-24) Failed in SnapshotVDS method
2014-04-04 12:30:03,076 INFO
 [org.ovirt.engine.core.vdsbroker.vdsbroker.SnapshotVDSCommand]
(org.ovirt.thread.pool-6-thread-24) Command
org.ovirt.engine.core.vdsbroker.vdsbroker.SnapshotVDSCommand return value
 StatusOnlyReturnForXmlRpc [mStatus=StatusForXmlRpc [mCode=48,
mMessage=Snapshot failed]]
2014-04-04 12:30:03,077 INFO
 [org.ovirt.engine.core.vdsbroker.vdsbroker.SnapshotVDSCommand]
(org.ovirt.thread.pool-6-thread-24) HostName = host01
2014-04-04 12:30:03,078 ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.SnapshotVDSCommand]
(org.ovirt.thread.pool-6-thread-24) Command SnapshotVDSCommand(HostName =
host01, HostId = fcb9a5cf-2064-42a5-99fe-dc56ea39ed81,
vmId=cb038ccf-6c6f-475c-872f-ea812ff795a1) execution failed. Exception:
VDSErrorException: VDSGenericException: VDSErrorException: Failed to
SnapshotVDS, error = Snapshot failed, code = 48
2014-04-04 12:30:03,080 INFO
 [org.ovirt.engine.core.vdsbroker.vdsbroker.SnapshotVDSCommand]
(org.ovirt.thread.pool-6-thread-24) FINISH, SnapshotVDSCommand, log id:
36463977
2014-04-04 12:30:03,083 WARN
 [org.ovirt.engine.core.bll.CreateAllSnapshotsFromVmCommand]
(org.ovirt.thread.pool-6-thread-24) Wasnt able to live snapshot due to
error: VdcBLLException: VdcBLLException:
org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException:
VDSGenericException: VDSErrorException: Failed to SnapshotVDS, error =
Snapshot failed, code = 48 (Failed with error SNAPSHOT_FAILED and code 48).
VM will still be configured to the new created snapshot
2014-04-04 12:30:03,097 INFO
 [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(org.ovirt.thread.pool-6-thread-24) Correlation ID: 5650b99f, Job ID:
c1b2d861-2a52-49f1-9eaa-1b63aa8b4fba, Call Stack:
org.ovirt.engine.core.common.errors.VdcBLLException: VdcBLLException:
org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException:
VDSGenericException: VDSErrorException: Failed to SnapshotVDS, error =
Snapshot failed, code = 48 (Failed with error SNAPSHOT_FAILED and code 48)


My /var/log/messages

Apr  4 12:30:04 host01 vdsm vm.Vm ERROR
vmId=`cb038ccf-6c6f-475c-872f-ea812ff795a1`::The base volume doesn't exist:
{'device': 'disk', 'domainID': '5ae613a4-44e4-42cb-89fc-7b5d34c1f30f',
'volumeID': '3b6cbb5d-beed-428d-ac66-9db3dd002e2f', 'imageID':
'646df162-5c6d-44b1-bc47-b63c3fdab0e2'}

My /var/log/libvirt/libvirt.log

2014-04-04 10:40:13.886+: 8234: debug : qemuMonitorIOWrite:462 :
QEMU_MONITOR_IO_WRITE: mon=0x7f77ec0ccce0
buf={execute:query-blockstats,id:libvirt-20842}
 len=53 ret=53 errno=11
2014-04-04 10:40:13.888+: 8234: debug : qemuMonitorIOProcess:354 :
QEMU_MONITOR_IO_PROCESS: mon=0x7f77ec0ccce0 buf={return: [{device:
drive-ide0-1-0, parent: {stats: {flush_total_time_ns: 0,
wr_highest_offset: 0, wr_total_time_ns: 0, wr_bytes: 0,
rd_total_time_ns: 0, flush_operations: 0, wr_operations: 0,
rd_bytes: 0, rd_operations: 0}}, stats: {flush_total_time_ns: 0,
wr_highest_offset: 0, wr_total_time_ns: 0, wr_bytes: 0,
rd_total_time_ns: 11929902, flush_operations: 0, wr_operations: 0,
rd_bytes: 135520, rd_operations: 46}}, {device: drive-virtio-disk0,
parent: {stats: {flush_total_time_ns: 0, wr_highest_offset:
22184332800, wr_total_time_ns: 0, wr_bytes: 0, rd_total_time_ns: 0,
flush_operations: 0, wr_operations: 0, rd_bytes: 0, rd_operations:
0}}, stats: {flush_total_time_ns: 34786515034, wr_highest_offset:
22184332800, wr_total_time_ns: 5131205369094, wr_bytes: 5122065408,
rd_total_time_ns: 12987633373, flush_operations: 285398,
wr_operations: 401232, rd_bytes: 392342016, rd_operations: 15069}}],
id: libvirt-20842}
 len=1021
2014-04-04 10:40:13.888+: 8263: debug :
qemuMonitorGetBlockStatsInfo:1478 : mon=0x7f77ec0ccce0 dev=ide0-1-0
2014-04-04 10:40:13.889+: 8263: debug : qemuMonitorSend:904 :
QEMU_MONITOR_SEND_MSG: mon=0x7f77ec0ccce0
msg={execute:query-blockstats,id:libvirt-20843}

/var/log/vdsm/vdsm.log
Thread-4732::DEBUG::2014-04-04
12:43:34,439::BindingXMLRPC::1067::vds::(wrapper) client
[192.168.99.104]::call vmSnapshot with
('cb038ccf-6c6f-475c-872f-ea812ff795a1', [{'baseVolumeID':
'b62232fc-4e02-41ce-ae10-5dff9e2f7bbe', 'domainID':
'5ae613a4-44e4-42cb-89fc-7b5d34c1f30f', 'volumeID':
'f5fc4fed-4acd-46e8-9980-90a9c3985840', 'imageID':

Re: [Users] ovirt 3.3 vnic profile with bonding vlans problem

2014-04-04 Thread Sven Kieske
Well, I created BZs for oVirt
and libvirt for this apparently
almost completely broken feature:

libvirt:
https://bugzilla.redhat.com/show_bug.cgi?id=108
ovirt:
https://bugzilla.redhat.com/show_bug.cgi?id=1084448
-- 
Mit freundlichen Grüßen / Regards

Sven Kieske

Systemadministrator
Mittwald CM Service GmbH & Co. KG
Königsberger Straße 6
32339 Espelkamp
T: +49-5772-293-100
F: +49-5772-293-333
https://www.mittwald.de
Geschäftsführer: Robert Meyer
St.Nr.: 331/5721/1033, USt-IdNr.: DE814773217, HRA 6640, AG Bad Oeynhausen
Komplementärin: Robert Meyer Verwaltungs GmbH, HRB 13260, AG Bad Oeynhausen
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] scheduling storage migration

2014-04-04 Thread Itamar Heim

On 04/04/2014 10:51 AM, Jorick Astrego wrote:

Hi,

I don't know if it's possible yet but can we schedule the (live) storage
migration?

It would be awesome to have for example some VM data on SSD storage that
migrates to HDD storage when the VM is shutdown. Or have VM's with high
IO load during specific times migrate to a high IO storage domain during
these hours.


if the VM is down, you can move it; that's not (live) storage migration, though.



I realize it will generate extra load while migrating but this can be
planned for. Maybe the guys from glusterfs could enable storage
migration on their side so the migration can execute on the storage
server triggered by ovirt, that would be even better performance wise.


in any case, you can script anything with the API...

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] HA

2014-04-04 Thread Koen Vanoppen
So... is it possible to have a fully automatic migration of the VM to another
hypervisor in case the storage connection fails?
How can we make this happen? Because for the moment, when we tested the
situation, they stayed in paused state.
(Test situation:

   - Unplug the 2 fibre cables from the hypervisor
   - VMs go into paused state
   - VMs stayed in paused state until the failure was solved

)


They only returned when we restored the fibre connection to the
hypervisor...

Kind Regards,

Koen



2014-04-04 13:52 GMT+02:00 Koen Vanoppen vanoppen.k...@gmail.com:

 So... It is possible for a fully automatic migration of the VM to another
 hypervisor in case Storage connection fails?
 How can we make this happen? Because for the moment, when we tested the
 situation they stayed in pause state.
 (Test situation:

- Unplug the 2 fibre cables from the hypervisor
- VM's go in pause state
- VM's stayed in pause state until the failure was solved

 )


 They only returned when we restored the fiber connection to the
 Hypervisor...

 Kind Regards,

 Koen


 2014-04-03 16:53 GMT+02:00 Koen Vanoppen vanoppen.k...@gmail.com:

 -- Forwarded message --
 From: Doron Fediuck dfedi...@redhat.com
 Date: Apr 3, 2014 4:51 PM
 Subject: Re: [Users] HA
 To: Koen Vanoppen vanoppen.k...@gmail.com
 Cc: Omer Frenkel ofren...@redhat.com, users@ovirt.org, Federico
 Simoncelli fsimo...@redhat.com, Allon Mureinik amure...@redhat.com



 - Original Message -
  From: Koen Vanoppen vanoppen.k...@gmail.com
  To: Omer Frenkel ofren...@redhat.com, users@ovirt.org
  Sent: Wednesday, April 2, 2014 4:17:36 PM
  Subject: Re: [Users] HA
 
  Yes, indeed. I meant not-operational. Sorry.
  So, if I understand this correctly. When we ever come in a situation
 that we
  loose both storage connections on our hypervisor, we will have to
 manually
  restore the connections first?
 
  And thanx for the tip for speeding up thins :-).
 
  Kind regards,
 
  Koen
 
 
  2014-04-02 15:14 GMT+02:00 Omer Frenkel  ofren...@redhat.com  :
 
 
 
 
 
  - Original Message -
   From: Koen Vanoppen  vanoppen.k...@gmail.com 
   To: users@ovirt.org
   Sent: Wednesday, April 2, 2014 4:07:19 PM
   Subject: [Users] HA
  
   Dear All,
  
   Due our acceptance testing, we discovered something. (Document will
   follow).
   When we disable one fiber path, no problem multipath finds it way no
 pings
   are lost.
   BUT when we disabled both the fiber paths (so one of the storage
 domain is
   gone on this host, but still available on the other host), vms go in
 paused
   mode... He chooses a new SPM (can we speed this up?), put's the host
 in
   non-responsive (can we speed this up, more important) and the VM's
 stay on
   Paused mode... I would expect that they would be migrated (yes, HA is
 
  i guess you mean the host moves to not-operational (in contrast to
  non-responsive)?
  if so, the engine will not migrate vms that are paused to do io error,
  because of data corruption risk.
 
  to speed up you can look at the storage domain monitoring timeout:
  engine-config --get StorageDomainFalureTimeoutInMinutes
 
 
   enabled) to the other host and reboot there... Any solution? We are
 still
   using oVirt 3.3.1 , but we are planning a upgrade to 3.4 after the
 easter
   holiday.
  
   Kind Regards,
  
   Koen
  

 Hi Koen,
 Resuming from paused due to io issues is supported (adding relevant
 folks).
 Regardless, if you did not define power management, you should manually
 approve
 source host was rebooted in order for migration to proceed. Otherwise we
 risk
 split-brain scenario.

 Doron



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] HA

2014-04-04 Thread Itamar Heim

On 04/04/2014 03:21 PM, Koen Vanoppen wrote:

So... It is possible for a fully automatic migration of the VM to
another hypervisor in case Storage connection fails?
How can we make this happen? Because for the moment, when we tested the
situation they stayed in pause state.
(Test situation:

  * Unplug the 2 fibre cables from the hypervisor
  * VM's go in pause state
  * VM's stayed in pause state until the failure was solved

)


The KVM team advised this would be an unsafe migration: IIRC, IO
can be stuck at the kernel level, pending a write to the storage, which would
cause corruption if storage is recovered while the VM is already running on
another machine.





They only returned when we restored the fiber connection to the
Hypervisor...

Kind Regards,

Koen



2014-04-04 13:52 GMT+02:00 Koen Vanoppen vanoppen.k...@gmail.com
mailto:vanoppen.k...@gmail.com:

So... It is possible for a fully automatic migration of the VM to
another hypervisor in case Storage connection fails?
How can we make this happen? Because for the moment, when we tested
the situation they stayed in pause state.
(Test situation:

  * Unplug the 2 fibre cables from the hypervisor
  * VM's go in pause state
  * VM's stayed in pause state until the failure was solved

)


They only returned when we restored the fiber connection to the
Hypervisor...

Kind Regards,

Koen


2014-04-03 16:53 GMT+02:00 Koen Vanoppen vanoppen.k...@gmail.com
mailto:vanoppen.k...@gmail.com:

-- Forwarded message --
From: Doron Fediuck dfedi...@redhat.com
mailto:dfedi...@redhat.com
Date: Apr 3, 2014 4:51 PM
Subject: Re: [Users] HA
To: Koen Vanoppen vanoppen.k...@gmail.com
mailto:vanoppen.k...@gmail.com
Cc: Omer Frenkel ofren...@redhat.com
mailto:ofren...@redhat.com, users@ovirt.org
mailto:users@ovirt.org, Federico Simoncelli
fsimo...@redhat.com mailto:fsimo...@redhat.com, Allon
Mureinik amure...@redhat.com mailto:amure...@redhat.com



- Original Message -
  From: Koen Vanoppen vanoppen.k...@gmail.com
mailto:vanoppen.k...@gmail.com
  To: Omer Frenkel ofren...@redhat.com
mailto:ofren...@redhat.com, users@ovirt.org
mailto:users@ovirt.org
  Sent: Wednesday, April 2, 2014 4:17:36 PM
  Subject: Re: [Users] HA
 
  Yes, indeed. I meant not-operational. Sorry.
  So, if I understand this correctly. When we ever come in a
situation that we
  loose both storage connections on our hypervisor, we will
have to manually
  restore the connections first?
 
  And thanx for the tip for speeding up thins :-).
 
  Kind regards,
 
  Koen
 
 
  2014-04-02 15:14 GMT+02:00 Omer Frenkel  ofren...@redhat.com
mailto:ofren...@redhat.com  :
 
 
 
 
 
  - Original Message -
   From: Koen Vanoppen  vanoppen.k...@gmail.com
mailto:vanoppen.k...@gmail.com 
   To: users@ovirt.org mailto:users@ovirt.org
   Sent: Wednesday, April 2, 2014 4:07:19 PM
   Subject: [Users] HA
  
   Dear All,
  
   Due our acceptance testing, we discovered something.
(Document will
   follow).
   When we disable one fiber path, no problem multipath finds
it way no pings
   are lost.
   BUT when we disabled both the fiber paths (so one of the
storage domain is
   gone on this host, but still available on the other host),
vms go in paused
   mode... He chooses a new SPM (can we speed this up?), put's
the host in
   non-responsive (can we speed this up, more important) and
the VM's stay on
   Paused mode... I would expect that they would be migrated
(yes, HA is
 
  i guess you mean the host moves to not-operational (in
contrast to
  non-responsive)?
  if so, the engine will not migrate vms that are paused to do
io error,
  because of data corruption risk.
 
  to speed up you can look at the storage domain monitoring
timeout:
  engine-config --get StorageDomainFalureTimeoutInMinutes
 
 
   enabled) to the other host and reboot there... Any
solution? We are still
   using oVirt 3.3.1 , but we are planning a upgrade to 3.4
after the easter
   holiday.
  
   Kind Regards,
  
   Koen
  

Hi Koen,
Resuming from paused due to io issues is supported (adding
relevant folks).
Regardless, if you did not define power management, you should
manually approve
source host was rebooted in order 

Re: [Users] scheduling storage migration

2014-04-04 Thread Itamar Heim

On 04/04/2014 03:34 PM, Jorick Astrego wrote:


On Fri, 2014-04-04 at 15:02 +0300, Itamar Heim wrote:

On 04/04/2014 10:51 AM, Jorick Astrego wrote:
 Hi,

 I don't know if it's possible yet but can we schedule the (live) storage
 migration?

 It would be awesome to have for example some VM data on SSD storage that
 migrates to HDD storage when the VM is shutdown. Or have VM's with high
 IO load during specific times migrate to a high IO storage domain during
 these hours.

if the VM is down, you can move it, not (live) storage migration.

 I realize it will generate extra load while migrating but this can be
 planned for. Maybe the guys from glusterfs could enable storage
 migration on their side so the migration can execute on the storage
 server triggered by ovirt, that would be even better performance wise.

in any case, you can script anything with the API...



I was being lazy... So I can use the
http://www.ovirt.org/Features/oVirtSchedulerAPI for this? Or do I have
to hack around with the engine API? I will spend some time diving into it.


You can use the scheduler API, but I don't remember it having an event
for OnVmStop.
You can also use it for a periodic balancing call, but for that you
can also just use cron.
In either case you will need to do the relevant API calls for moving the
disks around; I'd go with the cron approach.
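A very rough sketch of that (untested; the engine URL, credentials and IDs are
placeholders, and the exact resource path/XML for the move action should be
checked against the REST API docs for your version):

#!/bin/sh
# /usr/local/bin/move-disk-to-hdd.sh -- run from cron, e.g.:
#   0 2 * * * root /usr/local/bin/move-disk-to-hdd.sh
# moves one disk of a (shut down) VM to a slower storage domain
curl -s -k -u 'admin@internal:password' \
     -H 'Content-Type: application/xml' \
     -X POST 'https://engine.example.com/api/vms/<vm-id>/disks/<disk-id>/move' \
     -d '<action><storage_domain id="<hdd-domain-id>"/></action>'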


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Conf access rights

2014-04-04 Thread Itamar Heim

On 04/03/2014 06:19 PM, René Koch wrote:

Hi Kevin,

On 04/03/2014 04:11 PM, Kevin Tibi wrote:

Hi,

I am contacting you because I cannot do what I want. I have a
hosted engine. I have a data center with a cluster that has two nodes. I
have an IPA server for authentication. I want a user to be able to create VMs. I
would like him to manage only his own VMs and not see the others. On top of
that, I would like to set a quota per user.


You can assign quotas per user. Quota is disabled by default, so you
have to enable it for your datacenter.

I also suggest assigning permissions for your users to the self-provisioning
portal; then they can only see/create/start/stop their own VMs. In the webadmin
portal users can always see all VMs.


which would be done by assigning the user from IPA a Power User 
UserRole at the DC level, then enabling quota on the DC, then creating a 
quota, and giving that user a permission to consume it.


did you add the IPA domain to the engine via the engine-manage-domains 
utility?





Regards,
René




Is it possible today?

Thx,
Kevin.


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[Users] how do you manually add ovirtmgmt network to a node/host

2014-04-04 Thread Jeremiah Jahn
Trying to install an oVirt node on an already existing KVM host. I used
the New wizard from the engine. Things seemed to go OK-ish until it
got to the point where it wanted to install networking. No real
errors, but it has a "One of the Logical Networks defined for this Cluster
is Unreachable by the Host" error sitting on it. I tried dragging
that network onto one of the interfaces of my host, which already has
an IP address (that's how I ssh to it), and it then took down the
interface and tried to DHCP an IP for it, which failed after a while
and resulted in udev stuck in an infinite loop taking down and
bringing up said interface.

Not really sure what's going on or what it's trying to accomplish. I
set the ovirtmgmt network to have the same VLAN ID as the ethernet
device I dragged it onto.
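For what it's worth, a workaround that is sometimes suggested for hosts that
already carry their IP on the interface is to pre-create the ovirtmgmt bridge
manually with the static address and enslave the NIC to it before (re)adding
the host, so the engine doesn't have to re-address anything. Roughly, with
EL6-style network scripts (device names and addresses below are placeholders;
if the management network is tagged, the bridge sits on the VLAN sub-interface,
e.g. eth0.123, instead of eth0):

# /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
ONBOOT=yes
BRIDGE=ovirtmgmt

# /etc/sysconfig/network-scripts/ifcfg-ovirtmgmt
DEVICE=ovirtmgmt
TYPE=Bridge
ONBOOT=yes
BOOTPROTO=static
IPADDR=192.168.1.10
NETMASK=255.255.255.0
GATEWAY=192.168.1.1

# then restart networking and retry the host installation
service network restart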


thanks for any help.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] HA

2014-04-04 Thread Sander Grendelman
Do you have power management configured?
Was the failed host fenced/rebooted?


On Fri, Apr 4, 2014 at 2:21 PM, Koen Vanoppen vanoppen.k...@gmail.comwrote:

 So... It is possible for a fully automatic migration of the VM to another
 hypervisor in case Storage connection fails?
 How can we make this happen? Because for the moment, when we tested the
 situation they stayed in pause state.
 (Test situation:

- Unplug the 2 fibre cables from the hypervisor
- VM's go in pause state
- VM's stayed in pause state until the failure was solved

 )


 They only returned when we restored the fiber connection to the
 Hypervisor...

 Kind Regards,

 Koen



 2014-04-04 13:52 GMT+02:00 Koen Vanoppen vanoppen.k...@gmail.com:

 So... It is possible for a fully automatic migration of the VM to another
 hypervisor in case Storage connection fails?
 How can we make this happen? Because for the moment, when we tested the
 situation they stayed in pause state.
 (Test situation:

- Unplug the 2 fibre cables from the hypervisor
- VM's go in pause state
- VM's stayed in pause state until the failure was solved

 )


 They only returned when we restored the fiber connection to the
 Hypervisor...

 Kind Regards,

 Koen


 2014-04-03 16:53 GMT+02:00 Koen Vanoppen vanoppen.k...@gmail.com:

 -- Forwarded message --
 From: Doron Fediuck dfedi...@redhat.com
 Date: Apr 3, 2014 4:51 PM
 Subject: Re: [Users] HA
 To: Koen Vanoppen vanoppen.k...@gmail.com
 Cc: Omer Frenkel ofren...@redhat.com, users@ovirt.org, Federico
 Simoncelli fsimo...@redhat.com, Allon Mureinik amure...@redhat.com
 



 - Original Message -
  From: Koen Vanoppen vanoppen.k...@gmail.com
  To: Omer Frenkel ofren...@redhat.com, users@ovirt.org
  Sent: Wednesday, April 2, 2014 4:17:36 PM
  Subject: Re: [Users] HA
 
  Yes, indeed. I meant not-operational. Sorry.
  So, if I understand this correctly. When we ever come in a situation
 that we
  loose both storage connections on our hypervisor, we will have to
 manually
  restore the connections first?
 
  And thanx for the tip for speeding up thins :-).
 
  Kind regards,
 
  Koen
 
 
  2014-04-02 15:14 GMT+02:00 Omer Frenkel  ofren...@redhat.com  :
 
 
 
 
 
  - Original Message -
   From: Koen Vanoppen  vanoppen.k...@gmail.com 
   To: users@ovirt.org
   Sent: Wednesday, April 2, 2014 4:07:19 PM
   Subject: [Users] HA
  
   Dear All,
  
   Due our acceptance testing, we discovered something. (Document will
   follow).
   When we disable one fiber path, no problem multipath finds it way no
 pings
   are lost.
   BUT when we disabled both the fiber paths (so one of the storage
 domain is
   gone on this host, but still available on the other host), vms go in
 paused
   mode... He chooses a new SPM (can we speed this up?), put's the host
 in
   non-responsive (can we speed this up, more important) and the VM's
 stay on
   Paused mode... I would expect that they would be migrated (yes, HA is
 
  i guess you mean the host moves to not-operational (in contrast to
  non-responsive)?
  if so, the engine will not migrate vms that are paused to do io error,
  because of data corruption risk.
 
  to speed up you can look at the storage domain monitoring timeout:
  engine-config --get StorageDomainFalureTimeoutInMinutes
 
 
   enabled) to the other host and reboot there... Any solution? We are
 still
   using oVirt 3.3.1 , but we are planning a upgrade to 3.4 after the
 easter
   holiday.
  
   Kind Regards,
  
   Koen
  

 Hi Koen,
 Resuming from paused due to io issues is supported (adding relevant
 folks).
 Regardless, if you did not define power management, you should manually
 approve
 source host was rebooted in order for migration to proceed. Otherwise we
 risk
 split-brain scenario.

 Doron




 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Fail snapshot

2014-04-04 Thread Michal Skrivanek

On 4 Apr 2014, at 12:45, Kevin Tibi wrote:

 Hi,
 
 I have a pb when i try to snapshot a VM.

Are you running the right qemu/libvirt from the virt-preview repo?
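A quick way to see what is actually installed on the hosts (assuming EL6
package names; live snapshot support there typically needs a qemu-kvm-rhev
build rather than the stock qemu-kvm):

rpm -qa | egrep 'qemu-kvm|libvirt' | sort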

 
 Ovirt engine self hosted 3.4. Two node (host01 and host02).
 
 my engine.log :
 
 2014-04-04 12:30:03,013 INFO  
 [org.ovirt.engine.core.bll.CreateAllSnapshotsFromVmCommand] 
 (org.ovirt.thread.pool-6-thread-24) Ending command successfully: 
 org.ovirt.engine.core.bll.CreateAllSnapshotsFromVmCommand
 2014-04-04 12:30:03,028 INFO  
 [org.ovirt.engine.core.vdsbroker.vdsbroker.SnapshotVDSCommand] 
 (org.ovirt.thread.pool-6-thread-24) START, SnapshotVDSCommand(HostName = 
 host01, HostId = fcb9a5cf-2064-42a5-99fe-dc56ea39ed81, 
 vmId=cb038ccf-6c6f-475c-872f-ea812ff795a1), log id: 36463977
 2014-04-04 12:30:03,075 ERROR 
 [org.ovirt.engine.core.vdsbroker.vdsbroker.SnapshotVDSCommand] 
 (org.ovirt.thread.pool-6-thread-24) Failed in SnapshotVDS method
 2014-04-04 12:30:03,076 INFO  
 [org.ovirt.engine.core.vdsbroker.vdsbroker.SnapshotVDSCommand] 
 (org.ovirt.thread.pool-6-thread-24) Command 
 org.ovirt.engine.core.vdsbroker.vdsbroker.SnapshotVDSCommand return value
  StatusOnlyReturnForXmlRpc [mStatus=StatusForXmlRpc [mCode=48, 
 mMessage=Snapshot failed]]
 2014-04-04 12:30:03,077 INFO  
 [org.ovirt.engine.core.vdsbroker.vdsbroker.SnapshotVDSCommand] 
 (org.ovirt.thread.pool-6-thread-24) HostName = host01
 2014-04-04 12:30:03,078 ERROR 
 [org.ovirt.engine.core.vdsbroker.vdsbroker.SnapshotVDSCommand] 
 (org.ovirt.thread.pool-6-thread-24) Command SnapshotVDSCommand(HostName = 
 host01, HostId = fcb9a5cf-2064-42a5-99fe-dc56ea39ed81, 
 vmId=cb038ccf-6c6f-475c-872f-ea812ff795a1) execution failed. Exception: 
 VDSErrorException: VDSGenericException: VDSErrorException: Failed to 
 SnapshotVDS, error = Snapshot failed, code = 48
 2014-04-04 12:30:03,080 INFO  
 [org.ovirt.engine.core.vdsbroker.vdsbroker.SnapshotVDSCommand] 
 (org.ovirt.thread.pool-6-thread-24) FINISH, SnapshotVDSCommand, log id: 
 36463977
 2014-04-04 12:30:03,083 WARN  
 [org.ovirt.engine.core.bll.CreateAllSnapshotsFromVmCommand] 
 (org.ovirt.thread.pool-6-thread-24) Wasnt able to live snapshot due to error: 
 VdcBLLException: VdcBLLException: 
 org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException: 
 VDSGenericException: VDSErrorException: Failed to SnapshotVDS, error = 
 Snapshot failed, code = 48 (Failed with error SNAPSHOT_FAILED and code 48). 
 VM will still be configured to the new created snapshot
 2014-04-04 12:30:03,097 INFO  
 [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
 (org.ovirt.thread.pool-6-thread-24) Correlation ID: 5650b99f, Job ID: 
 c1b2d861-2a52-49f1-9eaa-1b63aa8b4fba, Call Stack: 
 org.ovirt.engine.core.common.errors.VdcBLLException: VdcBLLException: 
 org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException: 
 VDSGenericException: VDSErrorException: Failed to SnapshotVDS, error = 
 Snapshot failed, code = 48 (Failed with error SNAPSHOT_FAILED and code 48)
 
 
 My /var/log/messages
 
 Apr  4 12:30:04 host01 vdsm vm.Vm ERROR 
 vmId=`cb038ccf-6c6f-475c-872f-ea812ff795a1`::The base volume doesn't exist: 
 {'device': 'disk', 'domainID': '5ae613a4-44e4-42cb-89fc-7b5d34c1f30f', 
 'volumeID': '3b6cbb5d-beed-428d-ac66-9db3dd002e2f', 'imageID': 
 '646df162-5c6d-44b1-bc47-b63c3fdab0e2'}
 
 My /var/log/libvirt/libvirt.log
 
 2014-04-04 10:40:13.886+: 8234: debug : qemuMonitorIOWrite:462 : 
 QEMU_MONITOR_IO_WRITE: mon=0x7f77ec0ccce0 
 buf={execute:query-blockstats,id:libvirt-20842}
  len=53 ret=53 errno=11
 2014-04-04 10:40:13.888+: 8234: debug : qemuMonitorIOProcess:354 : 
 QEMU_MONITOR_IO_PROCESS: mon=0x7f77ec0ccce0 buf={return: [{device: 
 drive-ide0-1-0, parent: {stats: {flush_total_time_ns: 0, 
 wr_highest_offset: 0, wr_total_time_ns: 0, wr_bytes: 0, 
 rd_total_time_ns: 0, flush_operations: 0, wr_operations: 0, rd_bytes: 
 0, rd_operations: 0}}, stats: {flush_total_time_ns: 0, 
 wr_highest_offset: 0, wr_total_time_ns: 0, wr_bytes: 0, 
 rd_total_time_ns: 11929902, flush_operations: 0, wr_operations: 0, 
 rd_bytes: 135520, rd_operations: 46}}, {device: drive-virtio-disk0, 
 parent: {stats: {flush_total_time_ns: 0, wr_highest_offset: 
 22184332800, wr_total_time_ns: 0, wr_bytes: 0, rd_total_time_ns: 0, 
 flush_operations: 0, wr_operations: 0, rd_bytes: 0, rd_operations: 
 0}}, stats: {flush_total_time_ns: 34786515034, wr_highest_offset: 
 22184332800, wr_total_time_ns: 5131205369094, wr_bytes: 5122065408, 
 rd_total_time_ns: 12987633373, flush_operations: 285398, wr_operations: 401232, 
rd_bytes: 392342016, rd_operations: 15069}}], id: libvirt-20842}
  len=1021
 2014-04-04 10:40:13.888+: 8263: debug : qemuMonitorGetBlockStatsInfo:1478 
 : mon=0x7f77ec0ccce0 dev=ide0-1-0
 2014-04-04 10:40:13.889+: 8263: debug : qemuMonitorSend:904 : 
 QEMU_MONITOR_SEND_MSG: mon=0x7f77ec0ccce0 
 msg={execute:query-blockstats,id:libvirt-20843}
 
 /var/log/vdsm/vdsm.log
 Thread-4732::DEBUG::2014-04-04 
 

Re: [Users] Fail snapshot

2014-04-04 Thread Dafna Ron

Is this a live snapshot (while the VM is running)?
Can you please make sure your vdsm log is in debug and attach the full log?

Thanks,
Dafna


On 04/04/2014 02:23 PM, Michal Skrivanek wrote:

On 4 Apr 2014, at 12:45, Kevin Tibi wrote:


Hi,

I have a pb when i try to snapshot a VM.

are you running the right qemu/libvirt from virt-preview repo?


Ovirt engine self hosted 3.4. Two node (host01 and host02).

my engine.log :

2014-04-04 12:30:03,013 INFO  
[org.ovirt.engine.core.bll.CreateAllSnapshotsFromVmCommand] 
(org.ovirt.thread.pool-6-thread-24) Ending command successfully: 
org.ovirt.engine.core.bll.CreateAllSnapshotsFromVmCommand
2014-04-04 12:30:03,028 INFO  
[org.ovirt.engine.core.vdsbroker.vdsbroker.SnapshotVDSCommand] 
(org.ovirt.thread.pool-6-thread-24) START, SnapshotVDSCommand(HostName = 
host01, HostId = fcb9a5cf-2064-42a5-99fe-dc56ea39ed81, 
vmId=cb038ccf-6c6f-475c-872f-ea812ff795a1), log id: 36463977
2014-04-04 12:30:03,075 ERROR 
[org.ovirt.engine.core.vdsbroker.vdsbroker.SnapshotVDSCommand] 
(org.ovirt.thread.pool-6-thread-24) Failed in SnapshotVDS method
2014-04-04 12:30:03,076 INFO  
[org.ovirt.engine.core.vdsbroker.vdsbroker.SnapshotVDSCommand] 
(org.ovirt.thread.pool-6-thread-24) Command 
org.ovirt.engine.core.vdsbroker.vdsbroker.SnapshotVDSCommand return value
  StatusOnlyReturnForXmlRpc [mStatus=StatusForXmlRpc [mCode=48, 
mMessage=Snapshot failed]]
2014-04-04 12:30:03,077 INFO  
[org.ovirt.engine.core.vdsbroker.vdsbroker.SnapshotVDSCommand] 
(org.ovirt.thread.pool-6-thread-24) HostName = host01
2014-04-04 12:30:03,078 ERROR 
[org.ovirt.engine.core.vdsbroker.vdsbroker.SnapshotVDSCommand] 
(org.ovirt.thread.pool-6-thread-24) Command SnapshotVDSCommand(HostName = 
host01, HostId = fcb9a5cf-2064-42a5-99fe-dc56ea39ed81, 
vmId=cb038ccf-6c6f-475c-872f-ea812ff795a1) execution failed. Exception: 
VDSErrorException: VDSGenericException: VDSErrorException: Failed to 
SnapshotVDS, error = Snapshot failed, code = 48
2014-04-04 12:30:03,080 INFO  
[org.ovirt.engine.core.vdsbroker.vdsbroker.SnapshotVDSCommand] 
(org.ovirt.thread.pool-6-thread-24) FINISH, SnapshotVDSCommand, log id: 36463977
2014-04-04 12:30:03,083 WARN  
[org.ovirt.engine.core.bll.CreateAllSnapshotsFromVmCommand] 
(org.ovirt.thread.pool-6-thread-24) Wasnt able to live snapshot due to error: 
VdcBLLException: VdcBLLException: 
org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException: 
VDSGenericException: VDSErrorException: Failed to SnapshotVDS, error = Snapshot 
failed, code = 48 (Failed with error SNAPSHOT_FAILED and code 48). VM will 
still be configured to the new created snapshot
2014-04-04 12:30:03,097 INFO  
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
(org.ovirt.thread.pool-6-thread-24) Correlation ID: 5650b99f, Job ID: 
c1b2d861-2a52-49f1-9eaa-1b63aa8b4fba, Call Stack: 
org.ovirt.engine.core.common.errors.VdcBLLException: VdcBLLException: 
org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException: 
VDSGenericException: VDSErrorException: Failed to SnapshotVDS, error = Snapshot 
failed, code = 48 (Failed with error SNAPSHOT_FAILED and code 48)


My /var/log/messages

Apr  4 12:30:04 host01 vdsm vm.Vm ERROR 
vmId=`cb038ccf-6c6f-475c-872f-ea812ff795a1`::The base volume doesn't exist: 
{'device': 'disk', 'domainID': '5ae613a4-44e4-42cb-89fc-7b5d34c1f30f', 
'volumeID': '3b6cbb5d-beed-428d-ac66-9db3dd002e2f', 'imageID': 
'646df162-5c6d-44b1-bc47-b63c3fdab0e2'}

My /var/log/libvirt/libvirt.log

2014-04-04 10:40:13.886+: 8234: debug : qemuMonitorIOWrite:462 : QEMU_MONITOR_IO_WRITE: mon=0x7f77ec0ccce0 
buf={execute:query-blockstats,id:libvirt-20842}
  len=53 ret=53 errno=11
2014-04-04 10:40:13.888+: 8234: debug : qemuMonitorIOProcess:354 : QEMU_MONITOR_IO_PROCESS: mon=0x7f77ec0ccce0 buf={return: [{device: drive-ide0-1-0, parent: {stats: {flush_total_time_ns: 0, wr_highest_offset: 0, wr_total_time_ns: 0, wr_bytes: 0, rd_total_time_ns: 0, flush_operations: 0, wr_operations: 0, rd_bytes: 0, 
rd_operations: 0}}, stats: {flush_total_time_ns: 0, wr_highest_offset: 0, wr_total_time_ns: 0, wr_bytes: 0, rd_total_time_ns: 11929902, flush_operations: 0, wr_operations: 0, rd_bytes: 135520, rd_operations: 46}}, {device: drive-virtio-disk0, parent: {stats: {flush_total_time_ns: 0, 
wr_highest_offset: 22184332800, wr_total_time_ns: 0, wr_bytes: 0, rd_total_time_ns: 0, flush_operations: 0, wr_operations: 0, rd_bytes: 0, rd_operations: 0}}, stats: {flush_total_time_ns: 34786515034, wr_highest_offset: 22184332800, wr_total_time_ns: 5131205369094, wr_bytes: 5122065408, rd_total_time_ns: 12987633373, flush_operations: 285398, wr_operations: 401232, rd_bytes: 392342016, 
rd_operations: 15069}}], id: libvirt-20842}

  len=1021
2014-04-04 10:40:13.888+: 8263: debug : qemuMonitorGetBlockStatsInfo:1478 : 
mon=0x7f77ec0ccce0 dev=ide0-1-0
2014-04-04 10:40:13.889+: 8263: debug : qemuMonitorSend:904 : QEMU_MONITOR_SEND_MSG: mon=0x7f77ec0ccce0 
msg={execute:query-blockstats,id:libvirt-20843}


Re: [Users] TSC clocksource gets lost after live migration

2014-04-04 Thread Michal Skrivanek
Hi,
this is more for the KVM folks, I suppose… Can you get the qemu process cmdline,
please?
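For example, something like this on the host should show it (assuming the
process is named qemu-kvm, as on EL6):

ps -ww -o args= -C qemu-kvm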

Thanks,
michal

On 3 Apr 2014, at 12:13, Markus Stockhausen wrote:

 Hello,
 
 we have an up to date ovirt 3.4 installation. Inside we are running SLES11 SP3
 VMs (Kernel 3.0.76-0.11). After live migration of these VMs they all of a 
 sudden
 do not react any longer and CPU usage of the VM goes to 100%.
 
 We identified the kvm-clock source to be the culprit and therefore switched to
 another clocksource. We ended up with hpet but are not happy with that, as our
 initial goal was to use the more simply designed TSC clocksource.
 
 The reason behind that is the question I have for you experts.
 
 Our hosts all have the constant_tsc CPU flag available. Just to mention these
 are not identical hosts. We have a mix of Xeon 5500 and 5600 machines. E.G.
 [root@colovn01 ~]# cat /proc/cpuinfo | grep constant_tsc | wc -l
 8
 
 When we start the VM the client sees TSC as available clocksource:
 
 colvm53:~ # cat 
 /sys/devices/system/clocksource/clocksource0/available_clocksource
 kvm-clock tsc hpet acpi_pm
 
 After the first live migration to another host that also has constant_tsc 
 (see above)
 that flag is lost inside the VM.
 
 colvm53:~ # cat 
 /sys/devices/system/clocksource/clocksource0/available_clocksource
 kvm-clock hpet acpi_pm
 
 Any ideas?
 
 Markus
 
 
 InterScan_Disclaimer.txt
___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] HA

2014-04-04 Thread Barak Azulay


- Original Message -
 From: Itamar Heim ih...@redhat.com
 To: Koen Vanoppen vanoppen.k...@gmail.com, Doron Fediuck 
 dfedi...@redhat.com, users@ovirt.org
 Sent: Friday, April 4, 2014 3:27:07 PM
 Subject: Re: [Users] HA
 
 On 04/04/2014 03:21 PM, Koen Vanoppen wrote:
  So... It is possible for a fully automatic migration of the VM to
  another hypervisor in case Storage connection fails?
  How can we make this happen? Because for the moment, when we tested the
  situation they stayed in pause state.
  (Test situation:
 
* Unplug the 2 fibre cables from the hypervisor
* VM's go in pause state
* VM's stayed in pause state until the failure was solved
 
  )
 
 the KVM team advised this would be an unsafe migration. iirc, since IO
 can be stuck at kernel level, pending write to the storage, which would
 cause corruption if storage is recovered while the VM is now running on
 another machine.

Correct.

Migration while the VM was paused due to EIO is deemed unsafe and might lead
to data corruption.

There is a feature that automatically resumes the VM once storage connectivity
is regained.

In addition, you can manually fence the host (if you have a fencing device
configured) and then run the VM somewhere else (or you can define the VM as
highly available and the engine will run it again for you).

Anyway, just to be on the safe side: I saw earlier in the thread a comment about
"host has been rebooted".
Do not use it unless you actually rebooted the host.



 
 
 
  They only returned when we restored the fiber connection to the
  Hypervisor...
 
  Kind Regards,
 
  Koen
 
 
 
  2014-04-04 13:52 GMT+02:00 Koen Vanoppen vanoppen.k...@gmail.com
  mailto:vanoppen.k...@gmail.com:
 
  So... It is possible for a fully automatic migration of the VM to
  another hypervisor in case Storage connection fails?
  How can we make this happen? Because for the moment, when we tested
  the situation they stayed in pause state.
  (Test situation:
 
* Unplug the 2 fibre cables from the hypervisor
* VM's go in pause state
* VM's stayed in pause state until the failure was solved
 
  )
 
 
  They only returned when we restored the fiber connection to the
  Hypervisor...
 
  Kind Regards,
 
  Koen
 
 
  2014-04-03 16:53 GMT+02:00 Koen Vanoppen vanoppen.k...@gmail.com
  mailto:vanoppen.k...@gmail.com:
 
  -- Forwarded message --
  From: Doron Fediuck dfedi...@redhat.com
  mailto:dfedi...@redhat.com
  Date: Apr 3, 2014 4:51 PM
  Subject: Re: [Users] HA
  To: Koen Vanoppen vanoppen.k...@gmail.com
  mailto:vanoppen.k...@gmail.com
  Cc: Omer Frenkel ofren...@redhat.com
  mailto:ofren...@redhat.com, users@ovirt.org
  mailto:users@ovirt.org, Federico Simoncelli
  fsimo...@redhat.com mailto:fsimo...@redhat.com, Allon
  Mureinik amure...@redhat.com mailto:amure...@redhat.com
 
 
 
  - Original Message -
From: Koen Vanoppen vanoppen.k...@gmail.com
  mailto:vanoppen.k...@gmail.com
To: Omer Frenkel ofren...@redhat.com
  mailto:ofren...@redhat.com, users@ovirt.org
  mailto:users@ovirt.org
Sent: Wednesday, April 2, 2014 4:17:36 PM
Subject: Re: [Users] HA
   
Yes, indeed. I meant not-operational. Sorry.
So, if I understand this correctly. When we ever come in a
  situation that we
loose both storage connections on our hypervisor, we will
  have to manually
restore the connections first?
   
And thanx for the tip for speeding up thins :-).
   
Kind regards,
   
Koen
   
   
2014-04-02 15:14 GMT+02:00 Omer Frenkel  ofren...@redhat.com
  mailto:ofren...@redhat.com  :
   
   
   
   
   
- Original Message -
 From: Koen Vanoppen  vanoppen.k...@gmail.com
  mailto:vanoppen.k...@gmail.com 
 To: users@ovirt.org mailto:users@ovirt.org
 Sent: Wednesday, April 2, 2014 4:07:19 PM
 Subject: [Users] HA

 Dear All,

 Due our acceptance testing, we discovered something.
  (Document will
 follow).
 When we disable one fiber path, no problem multipath finds
  it way no pings
 are lost.
 BUT when we disabled both the fiber paths (so one of the
  storage domain is
 gone on this host, but still available on the other host),
  vms go in paused
 mode... He chooses a new SPM (can we speed this up?), put's
  the host in
 non-responsive (can we speed this up, more important) and
  the VM's stay on
 Paused mode... I 

Re: [Users] oVirt 3.4.0 remove node problem

2014-04-04 Thread Itamar Heim

On 04/01/2014 09:38 PM, Laercio Motta wrote:

Hi all,

I upgraded my oVirt yesterday from 3.3.4 to 3.4.0...
To my surprise, the 3.3 node does not run in a 3.4 cluster (cluster version).


Shouldn't be a surprise; the cluster version implies a minimal level of hosts
in the cluster.
When you say 'node', do you mean ovirt-node or a RHEL/CentOS/Fedora
based host?


But how could you upgrade the cluster to 3.4 if the host is 3.3 and up?
The engine should have blocked this, IIRC.



OK... move to maintenance and upgrade :P
But... maintenance was not finishing for 10+ minutes (no live migration
progress).
I restarted ovirt-engine, the live migration ran, and the node went into
maintenance mode (nice!!).
But (yes, again)... removing the node from the oVirt engine is not
possible; this error message shows up in engine.log:
http://pastebin.com/CfTeHepv

PS: sorry for my English... I'm Brazilian :P

[]'s

--
╔══╗
║▒▒▒Laercio da Silva Motta▒▒▒║
║--║
║*Blog: *http://www.laerciomotta.com/ ║
║*Twitter:* http://twitter.com/#!/laerciomasala
http://twitter.com/#%21/laerciomasala ║
║*Skype*: laerciomasala ║
║ Chave PGP: http://bit.ly/kXS6ga ║
╚═v1.0═╝


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Fail snapshot

2014-04-04 Thread Kevin Tibi
Yes, it's a live snapshot. A normal snapshot works.

How do I enable debug in vdsm?

mom.conf :

log: /var/log/vdsm/mom.log

verbosity: info

vdsm.conf :

[root@host02 ~]# cat /etc/vdsm/vdsm.conf
[addresses]
management_port = 54321

[vars]
ssl = true



2014-04-04 15:27 GMT+02:00 Dafna Ron d...@redhat.com:

 is this a live snapshots (wile vm is running)?
 can you please make sure your vdsm log is in debug and attach the full log?

 Thanks,
 Dafna



 On 04/04/2014 02:23 PM, Michal Skrivanek wrote:

 On 4 Apr 2014, at 12:45, Kevin Tibi wrote:

  Hi,

 I have a pb when i try to snapshot a VM.

 are you running the right qemu/libvirt from virt-preview repo?

  Ovirt engine self hosted 3.4. Two node (host01 and host02).

 my engine.log :

 2014-04-04 12:30:03,013 INFO  [org.ovirt.engine.core.bll.
 CreateAllSnapshotsFromVmCommand] (org.ovirt.thread.pool-6-thread-24)
 Ending command successfully: org.ovirt.engine.core.bll.
 CreateAllSnapshotsFromVmCommand
 2014-04-04 12:30:03,028 INFO  [org.ovirt.engine.core.
 vdsbroker.vdsbroker.SnapshotVDSCommand] (org.ovirt.thread.pool-6-thread-24)
 START, SnapshotVDSCommand(HostName = host01, HostId =
 fcb9a5cf-2064-42a5-99fe-dc56ea39ed81, 
 vmId=cb038ccf-6c6f-475c-872f-ea812ff795a1),
 log id: 36463977
 2014-04-04 12:30:03,075 ERROR [org.ovirt.engine.core.
 vdsbroker.vdsbroker.SnapshotVDSCommand] (org.ovirt.thread.pool-6-thread-24)
 Failed in SnapshotVDS method
 2014-04-04 12:30:03,076 INFO  [org.ovirt.engine.core.
 vdsbroker.vdsbroker.SnapshotVDSCommand] (org.ovirt.thread.pool-6-thread-24)
 Command org.ovirt.engine.core.vdsbroker.vdsbroker.SnapshotVDSCommand
 return value
   StatusOnlyReturnForXmlRpc [mStatus=StatusForXmlRpc [mCode=48,
 mMessage=Snapshot failed]]
 2014-04-04 12:30:03,077 INFO  [org.ovirt.engine.core.
 vdsbroker.vdsbroker.SnapshotVDSCommand] (org.ovirt.thread.pool-6-thread-24)
 HostName = host01
 2014-04-04 12:30:03,078 ERROR [org.ovirt.engine.core.
 vdsbroker.vdsbroker.SnapshotVDSCommand] (org.ovirt.thread.pool-6-thread-24)
 Command SnapshotVDSCommand(HostName = host01, HostId =
 fcb9a5cf-2064-42a5-99fe-dc56ea39ed81, 
 vmId=cb038ccf-6c6f-475c-872f-ea812ff795a1)
 execution failed. Exception: VDSErrorException: VDSGenericException:
 VDSErrorException: Failed to SnapshotVDS, error = Snapshot failed, code = 48
 2014-04-04 12:30:03,080 INFO  [org.ovirt.engine.core.
 vdsbroker.vdsbroker.SnapshotVDSCommand] (org.ovirt.thread.pool-6-thread-24)
 FINISH, SnapshotVDSCommand, log id: 36463977
 2014-04-04 12:30:03,083 WARN  [org.ovirt.engine.core.bll.
 CreateAllSnapshotsFromVmCommand] (org.ovirt.thread.pool-6-thread-24)
 Wasnt able to live snapshot due to error: VdcBLLException: VdcBLLException:
 org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException:
 VDSGenericException: VDSErrorException: Failed to SnapshotVDS, error =
 Snapshot failed, code = 48 (Failed with error SNAPSHOT_FAILED and code 48).
 VM will still be configured to the new created snapshot
 2014-04-04 12:30:03,097 INFO  [org.ovirt.engine.core.dal.
 dbbroker.auditloghandling.AuditLogDirector] 
 (org.ovirt.thread.pool-6-thread-24)
 Correlation ID: 5650b99f, Job ID: c1b2d861-2a52-49f1-9eaa-1b63aa8b4fba,
 Call Stack: org.ovirt.engine.core.common.errors.VdcBLLException:
 VdcBLLException: 
 org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException:
 VDSGenericException: VDSErrorException: Failed to SnapshotVDS, error =
 Snapshot failed, code = 48 (Failed with error SNAPSHOT_FAILED and code 48)


 My /var/log/messages

 Apr  4 12:30:04 host01 vdsm vm.Vm ERROR 
 vmId=`cb038ccf-6c6f-475c-872f-ea812ff795a1`::The
 base volume doesn't exist: {'device': 'disk', 'domainID':
 '5ae613a4-44e4-42cb-89fc-7b5d34c1f30f', 'volumeID':
 '3b6cbb5d-beed-428d-ac66-9db3dd002e2f', 'imageID':
 '646df162-5c6d-44b1-bc47-b63c3fdab0e2'}

 My /var/log/libvirt/libvirt.log

 2014-04-04 10:40:13.886+: 8234: debug : qemuMonitorIOWrite:462 :
 QEMU_MONITOR_IO_WRITE: mon=0x7f77ec0ccce0 buf={execute:query-
 blockstats,id:libvirt-20842}
   len=53 ret=53 errno=11
 2014-04-04 10:40:13.888+: 8234: debug : qemuMonitorIOProcess:354 :
 QEMU_MONITOR_IO_PROCESS: mon=0x7f77ec0ccce0 buf={return: [{device:
 drive-ide0-1-0, parent: {stats: {flush_total_time_ns: 0,
 wr_highest_offset: 0, wr_total_time_ns: 0, wr_bytes: 0,
 rd_total_time_ns: 0, flush_operations: 0, wr_operations: 0,
 rd_bytes: 0, rd_operations: 0}}, stats: {flush_total_time_ns: 0,
 wr_highest_offset: 0, wr_total_time_ns: 0, wr_bytes: 0,
 rd_total_time_ns: 11929902, flush_operations: 0, wr_operations: 0,
 rd_bytes: 135520, rd_operations: 46}}, {device: drive-virtio-disk0,
 parent: {stats: {flush_total_time_ns: 0, wr_highest_offset:
 22184332800, wr_total_time_ns: 0, wr_bytes: 0, rd_total_time_ns: 0,
 flush_operations: 0, wr_operations: 0, rd_bytes: 0, rd_operations:
 0}}, stats: {flush_total_time_ns: 34786515034, wr_highest_offset:
 22184332800, wr_total_time_ns: 5131205369094, wr_bytes: 5122065408,
  rd_total_time_ns: 12987633373, flush_operations: 285398, wr_operations: 401232, 

Re: [Users] ovirt 3.3 vnic profile with bonding vlans problem

2014-04-04 Thread Dan Kenigsberg
On Fri, Apr 04, 2014 at 11:25:15AM +, Sven Kieske wrote:
 Well I created BZs for ovirt
 and libvirt for this apparently
 almost completely broken feature:
 
 libvirt:
 https://bugzilla.redhat.com/show_bug.cgi?id=108
 ovirt:
 https://bugzilla.redhat.com/show_bug.cgi?id=1084448
 -- 
 Mit freundlichen Grüßen / Regards
 
 Sven Kieske
 
 Systemadministrator
 Mittwald CM Service GmbH & Co. KG
 Königsberger Straße 6

Nice to meet someone who works on my street (almost) ;-)
Thanks for reporting these bugs!

 32339 Espelkamp
 T: +49-5772-293-100
 F: +49-5772-293-333
 https://www.mittwald.de
 Geschäftsführer: Robert Meyer
 St.Nr.: 331/5721/1033, USt-IdNr.: DE814773217, HRA 6640, AG Bad Oeynhausen
 Komplementärin: Robert Meyer Verwaltungs GmbH, HRB 13260, AG Bad Oeynhausen
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Error installing self hosted engine

2014-04-04 Thread Sandro Bonazzola
On 01/04/2014 16:15, Sandro Bonazzola wrote:
 On 01/04/2014 15:38, ovirt-t...@arcor.de wrote:
 Hello,

 I'm new to this list and I need help installing a self-hosted engine.

 I've installed CentOS 6.5 and oVirt 3.4. The following repositories are 
 enabled:
 yum localinstall http://resources.ovirt.org/releases/ovirt-release.noarch.rpm
 yum localinstall 
 http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
 yum localinstall 
 http://mirrors.dotsrc.org/jpackage/6.0/generic/free/RPMS/jpackage-release-6-3.jpp6.noarch.rpm

 Just wanted to check out the self hosted feature. But I get this error:

 # hosted-engine --deploy
 [ INFO  ] Stage: Initializing
   Continuing will configure this host for serving as hypervisor and 
 create a VM where you have to install oVirt Engine afterwards.
   Are you sure you want to continue? (Yes, No)[Yes]: 
 [ INFO  ] Generating a temporary VNC password.
 [ INFO  ] Stage: Environment setup
   Configuration files: []
   Log file: 
 /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20140401153028.log
   Version: otopi-1.2.0 (otopi-1.2.0-1.el6)
 [ INFO  ] Hardware supports virtualization
 [ INFO  ] Stage: Environment packages setup
 [ INFO  ] Stage: Programs detection
 [ INFO  ] Stage: Environment setup
 [ ERROR ] Failed to execute stage 'Environment setup': Fault 1: type 
 'exceptions.TypeError':cannot marshal None unless allow_none is enabled
 [ INFO  ] Stage: Clean up
 [ INFO  ] Stage: Pre-termination
 [ INFO  ] Stage: Termination

 It is not this error:
 http://lists.ovirt.org/pipermail/users/2014-March/022424.html

 In my logfile are the following errors:
 2014-04-01 15:30:32 DEBUG otopi.plugins.otopi.services.rhel 
 plugin.executeRaw:366 execute: ('/sbin/service', 'vdsmd', 'status'), 
 executable='None', cwd='None', env=None
 2014-04-01 15:30:32 DEBUG otopi.plugins.otopi.services.rhel 
 plugin.executeRaw:383 execute-result: ('/sbin/service', 'vdsmd', 'status'), 
 rc=0
 2014-04-01 15:30:32 DEBUG otopi.plugins.otopi.services.rhel 
 plugin.execute:441 execute-output: ('/sbin/service', 'vdsmd', 'status') 
 stdout:
 VDS daemon server is running

 2014-04-01 15:30:32 DEBUG otopi.plugins.otopi.services.rhel 
 plugin.execute:446 execute-output: ('/sbin/service', 'vdsmd', 'status') 
 stderr:


 2014-04-01 15:30:32 DEBUG otopi.plugins.otopi.services.rhel rhel.status:147 
 service vdsmd status True
 2014-04-01 15:30:32 DEBUG otopi.context context._executeMethod:152 method 
 exception
 Traceback (most recent call last):
   File /usr/lib/python2.6/site-packages/otopi/context.py, line 142, in 
 _executeMethod
 method['method']()
   File 
 /usr/share/ovirt-hosted-engine-setup/scripts/../plugins/ovirt-hosted-engine-setup/system/vdsmenv.py,
  line 157, in _late_setup
 self._connect()
   File 
 /usr/share/ovirt-hosted-engine-setup/scripts/../plugins/ovirt-hosted-engine-setup/system/vdsmenv.py,
  line 78, in _connect
 hwinfo = serv.s.getVdsHardwareInfo()
   File /usr/lib64/python2.6/xmlrpclib.py, line 1199, in __call__
 return self.__send(self.__name, args)
   File /usr/lib64/python2.6/xmlrpclib.py, line 1489, in __request
 verbose=self.__verbose
   File /usr/lib64/python2.6/xmlrpclib.py, line 1253, in request
 return self._parse_response(h.getfile(), sock)
   File /usr/lib64/python2.6/xmlrpclib.py, line 1392, in _parse_response
 return u.close()
   File /usr/lib64/python2.6/xmlrpclib.py, line 838, in close
 raise Fault(**self._stack[0])
 Fault: Fault 1: type 'exceptions.TypeError':cannot marshal None unless 
 allow_none is enabled
 2014-04-01 15:30:32 ERROR otopi.context context._executeMethod:161 Failed to 
 execute stage 'Environment setup': Fault 1: type 
 'exceptions.TypeError':cannot marshal None unless allow_none is enabled
 2014-04-01 15:30:32 DEBUG otopi.context context.dumpEnvironment:468 
 ENVIRONMENT DUMP - BEGIN

Corresponding to the above call to getVdsHardwareInfo vdsm log shows:

Thread-22::DEBUG::2014-04-01 15:30:32,100::BindingXMLRPC::1067::vds::(wrapper) 
client [127.0.0.1]::call getHardwareInfo with () {}
Thread-22::DEBUG::2014-04-01 15:30:32,110::BindingXMLRPC::1074::vds::(wrapper) 
return getHardwareInfo with {'status': {'message': 'Done', 'code': 0},
'info': {'systemProductName': 'ProLiant DL380 G5', 'systemSerialNumber': 
'CZC6451JFR', 'systemFamily': None, 'systemVersion': 'Not Specified',
'systemUUID': '435a4336-3435-435a-4336-3435314a4652', 'systemManufacturer': 
'HP'}}
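
The 'systemFamily': None in that reply is presumably what the XML-RPC layer
chokes on: Python's xmlrpclib refuses to serialize None unless allow_none is
enabled. A minimal illustration, independent of oVirt:

python -c "import xmlrpclib; xmlrpclib.dumps(({'systemFamily': None},))"
# fails with: TypeError: cannot marshal None unless allow_none is enabled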

And corresponding supervdsm log:
MainProcess|Thread-22::DEBUG::2014-04-01 
15:30:32,109::supervdsmServer::96::SuperVdsm.ServerCallback::(wrapper) call 
getHardwareInfo with () {}
MainProcess|Thread-22::DEBUG::2014-04-01 
15:30:32,109::supervdsmServer::103::SuperVdsm.ServerCallback::(wrapper) return 
getHardwareInfo with
{'systemProductName': 'ProLiant DL380 G5', 'systemSerialNumber': 'CZC6451JFR', 
'systemFamily': None, 'systemVersion': 'Not Specified', 'systemUUID':

Re: [Users] Fail snapshot

2014-04-04 Thread Douglas Schilling Landgraf

Hi,

On 04/04/2014 10:04 AM, Kevin Tibi wrote:

Yes it's a live snapshots. Normal snapshot works.


Question:
Are these EL6 hosts? If yes, are you using qemu-kvm from:
jenkins.ovirt.org/view/Packaging/job/qemu-kvm-rhev_create_rpms_el6/  ?


Thanks!



How do I enable debug logging in vdsm?

mom.conf :

log: /var/log/vdsm/mom.log

verbosity: info

vdsm.conf :

[root@host02 ~]# cat /etc/vdsm/vdsm.conf
[addresses]
management_port = 54321

[vars]
ssl = true



2014-04-04 15:27 GMT+02:00 Dafna Ron d...@redhat.com
mailto:d...@redhat.com:

Is this a live snapshot (while the VM is running)?
Can you please make sure your vdsm log is in debug and attach the full log?

Thanks,
Dafna



On 04/04/2014 02:23 PM, Michal Skrivanek wrote:

On 4 Apr 2014, at 12:45, Kevin Tibi wrote:

Hi,

I have a problem when I try to snapshot a VM.

Are you running the right qemu/libvirt from the virt-preview repo?

oVirt Engine self-hosted 3.4. Two nodes (host01 and host02).

my engine.log :

2014-04-04 12:30:03,013 INFO  [org.ovirt.engine.core.bll.CreateAllSnapshotsFromVmCommand] (org.ovirt.thread.pool-6-thread-24) Ending command successfully: org.ovirt.engine.core.bll.CreateAllSnapshotsFromVmCommand
2014-04-04 12:30:03,028 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.SnapshotVDSCommand] (org.ovirt.thread.pool-6-thread-24) START, SnapshotVDSCommand(HostName = host01, HostId = fcb9a5cf-2064-42a5-99fe-dc56ea39ed81, vmId=cb038ccf-6c6f-475c-872f-ea812ff795a1), log id: 36463977
2014-04-04 12:30:03,075 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.SnapshotVDSCommand] (org.ovirt.thread.pool-6-thread-24) Failed in SnapshotVDS method
2014-04-04 12:30:03,076 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.SnapshotVDSCommand] (org.ovirt.thread.pool-6-thread-24) Command org.ovirt.engine.core.vdsbroker.vdsbroker.SnapshotVDSCommand return value
 StatusOnlyReturnForXmlRpc [mStatus=StatusForXmlRpc [mCode=48, mMessage=Snapshot failed]]
2014-04-04 12:30:03,077 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.SnapshotVDSCommand] (org.ovirt.thread.pool-6-thread-24) HostName = host01
2014-04-04 12:30:03,078 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.SnapshotVDSCommand] (org.ovirt.thread.pool-6-thread-24) Command SnapshotVDSCommand(HostName = host01, HostId = fcb9a5cf-2064-42a5-99fe-dc56ea39ed81, vmId=cb038ccf-6c6f-475c-872f-ea812ff795a1) execution failed. Exception: VDSErrorException: VDSGenericException: VDSErrorException: Failed to SnapshotVDS, error = Snapshot failed, code = 48
2014-04-04 12:30:03,080 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.SnapshotVDSCommand] (org.ovirt.thread.pool-6-thread-24) FINISH, SnapshotVDSCommand, log id: 36463977
2014-04-04 12:30:03,083 WARN  [org.ovirt.engine.core.bll.CreateAllSnapshotsFromVmCommand] (org.ovirt.thread.pool-6-thread-24) Wasnt able to live snapshot due to error: VdcBLLException: VdcBLLException: org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException: VDSGenericException: VDSErrorException: Failed to SnapshotVDS, error = Snapshot failed, code = 48 (Failed with error SNAPSHOT_FAILED and code 48). VM will still be configured to the new created snapshot
2014-04-04 12:30:03,097 INFO  [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (org.ovirt.thread.pool-6-thread-24) Correlation ID: 5650b99f, Job ID: c1b2d861-2a52-49f1-9eaa-1b63aa8b4fba, Call Stack: org.ovirt.engine.core.common.errors.VdcBLLException: VdcBLLException: org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException: VDSGenericException: VDSErrorException: Failed to SnapshotVDS, error = Snapshot failed, code = 48 (Failed with error SNAPSHOT_FAILED and code 48)


My /var/log/messages

Apr  4 12:30:04 host01 vdsm vm.Vm ERROR vmId=`cb038ccf-6c6f-475c-872f-ea812ff795a1`::The base volume doesn't exist: {'device': 'disk', 'domainID': '5ae613a4-44e4-42cb-89fc-7b5d34c1f30f', 'volumeID': '3b6cbb5d-beed-428d-ac66-9db3dd002e2f', 'imageID': '646df162-5c6d-44b1-bc47-b63c3fdab0e2'}

My /var/log/libvirt/libvirt.log

2014-04-04 10:40:13.886+0000: 8234: debug : qemuMonitorIOWrite:462 : QEMU_MONITOR_IO_WRITE: mon=0x7f77ec0ccce0 buf={"execute":"query-blockstats","id":"libvirt-20842"} len=53 ret=53 errno=11
2014-04-04 10:40:13.888+0000: 8234: debug : qemuMonitorIOProcess:354 : QEMU_MONITOR_IO_PROCESS:

Re: [Users] Force certain VMs to be on different hosts

2014-04-04 Thread Joop

Gilad Chaplik wrote:
Hi Joop, 


You've created a positive enforcing affinity group - that means the VMs should
stay on the same host, and no migration is allowed while more than one VM is up.
What you're requesting is a useful RFE, some kind of a 'follow me' positive
affinity group.
I'm +1 on that :-) Can you please open a formal request/RFE for it?
https://bugzilla.redhat.com/enter_bug.cgi?product=oVirt

  

BZ opened: https://bugzilla.redhat.com/show_bug.cgi?id=1084518

I have read somewhere on the list that it should be possible to tag such a
request as a FutureFeature, but I couldn't find that option when creating the
entry. Please feel free to move it and educate me!


Regards,

Joop

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Fail snapshot

2014-04-04 Thread Kevin Tibi
It's CentOS 6.5. Do I need to change my repo? I have just the EPEL and oVirt
repos.


2014-04-04 16:23 GMT+02:00 Douglas Schilling Landgraf dougsl...@redhat.com
:

 Hi,


 On 04/04/2014 10:04 AM, Kevin Tibi wrote:

 Yes it's a live snapshots. Normal snapshot works.


 Question:
 Is it a EL6 hosts? If yes, are you using qemu-kvm from:
 jenkins.ovirt.org/view/Packaging/job/qemu-kvm-rhev_create_rpms_el6/  ?


 Thanks!


 How i make debug in vdsm ?

 mom.conf :

 log: /var/log/vdsm/mom.log

 verbosity: info

 vdsm.conf :

 [root@host02 ~]# cat /etc/vdsm/vdsm.conf
 [addresses]
 management_port = 54321

 [vars]
 ssl = true



 2014-04-04 15:27 GMT+02:00 Dafna Ron d...@redhat.com
 mailto:d...@redhat.com:


 is this a live snapshots (wile vm is running)?
 can you please make sure your vdsm log is in debug and attach the
 full log?

 Thanks,
 Dafna



 On 04/04/2014 02:23 PM, Michal Skrivanek wrote:

 On 4 Apr 2014, at 12:45, Kevin Tibi wrote:

 Hi,

 I have a pb when i try to snapshot a VM.

 are you running the right qemu/libvirt from virt-preview repo?

 Ovirt engine self hosted 3.4. Two node (host01 and host02).

 my engine.log :

 2014-04-04 12:30:03,013 INFO  [org.ovirt.engine.core.bll.CreateAllSnapshotsFromVmCommand] (org.ovirt.thread.pool-6-thread-24) Ending command successfully: org.ovirt.engine.core.bll.CreateAllSnapshotsFromVmCommand
 2014-04-04 12:30:03,028 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.SnapshotVDSCommand] (org.ovirt.thread.pool-6-thread-24) START, SnapshotVDSCommand(HostName = host01, HostId = fcb9a5cf-2064-42a5-99fe-dc56ea39ed81, vmId=cb038ccf-6c6f-475c-872f-ea812ff795a1), log id: 36463977
 2014-04-04 12:30:03,075 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.SnapshotVDSCommand] (org.ovirt.thread.pool-6-thread-24) Failed in SnapshotVDS method
 2014-04-04 12:30:03,076 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.SnapshotVDSCommand] (org.ovirt.thread.pool-6-thread-24) Command org.ovirt.engine.core.vdsbroker.vdsbroker.SnapshotVDSCommand return value
  StatusOnlyReturnForXmlRpc [mStatus=StatusForXmlRpc [mCode=48, mMessage=Snapshot failed]]
 2014-04-04 12:30:03,077 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.SnapshotVDSCommand] (org.ovirt.thread.pool-6-thread-24) HostName = host01
 2014-04-04 12:30:03,078 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.SnapshotVDSCommand] (org.ovirt.thread.pool-6-thread-24) Command SnapshotVDSCommand(HostName = host01, HostId = fcb9a5cf-2064-42a5-99fe-dc56ea39ed81, vmId=cb038ccf-6c6f-475c-872f-ea812ff795a1) execution failed. Exception: VDSErrorException: VDSGenericException: VDSErrorException: Failed to SnapshotVDS, error = Snapshot failed, code = 48
 2014-04-04 12:30:03,080 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.SnapshotVDSCommand] (org.ovirt.thread.pool-6-thread-24) FINISH, SnapshotVDSCommand, log id: 36463977
 2014-04-04 12:30:03,083 WARN  [org.ovirt.engine.core.bll.CreateAllSnapshotsFromVmCommand] (org.ovirt.thread.pool-6-thread-24) Wasnt able to live snapshot due to error: VdcBLLException: VdcBLLException: org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException: VDSGenericException: VDSErrorException: Failed to SnapshotVDS, error = Snapshot failed, code = 48 (Failed with error SNAPSHOT_FAILED and code 48). VM will still be configured to the new created snapshot
 2014-04-04 12:30:03,097 INFO  [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (org.ovirt.thread.pool-6-thread-24) Correlation ID: 5650b99f, Job ID: c1b2d861-2a52-49f1-9eaa-1b63aa8b4fba, Call Stack: org.ovirt.engine.core.common.errors.VdcBLLException: VdcBLLException: org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException: VDSGenericException: VDSErrorException: Failed to SnapshotVDS, error = Snapshot failed, code = 48 (Failed with error SNAPSHOT_FAILED and code 48)



 My /var/log/messages

 Apr  4 12:30:04 host01 vdsm vm.Vm ERROR vmId=`cb038ccf-6c6f-475c-872f-ea812ff795a1`::The base volume doesn't exist: {'device': 'disk', 'domainID': '5ae613a4-44e4-42cb-89fc-7b5d34c1f30f', 'volumeID': '3b6cbb5d-beed-428d-ac66-9db3dd002e2f', 'imageID': '646df162-5c6d-44b1-bc47-b63c3fdab0e2'}


 My /var/log/libvirt/libvirt.log

 2014-04-04 10:40:13.886+0000: 8234: debug : qemuMonitorIOWrite:462 : QEMU_MONITOR_IO_WRITE:
 

Re: [Users] Fail snapshot

2014-04-04 Thread Kevin Tibi
Installed Packages
qemu-kvm.x86_64    2:0.12.1.2-2.415.el6_5.6    @updates
Available Packages
qemu-kvm.x86_64    2:0.12.1.2-2.415.el6_5.7    updates
[


2014-04-04 17:06 GMT+02:00 Kevin Tibi kevint...@hotmail.com:

 It's centos 6.5. Have I need to change my repo ? I have just EPEL and
 Ovirt repo.


 2014-04-04 16:23 GMT+02:00 Douglas Schilling Landgraf 
 dougsl...@redhat.com:

 Hi,


 On 04/04/2014 10:04 AM, Kevin Tibi wrote:

 Yes it's a live snapshots. Normal snapshot works.


 Question:
 Is it a EL6 hosts? If yes, are you using qemu-kvm from:
 jenkins.ovirt.org/view/Packaging/job/qemu-kvm-rhev_create_rpms_el6/  ?


 Thanks!


 How i make debug in vdsm ?

 mom.conf :

 log: /var/log/vdsm/mom.log

 verbosity: info

 vdsm.conf :

 [root@host02 ~]# cat /etc/vdsm/vdsm.conf
 [addresses]
 management_port = 54321

 [vars]
 ssl = true



 2014-04-04 15:27 GMT+02:00 Dafna Ron d...@redhat.com
 mailto:d...@redhat.com:


 is this a live snapshots (wile vm is running)?
 can you please make sure your vdsm log is in debug and attach the
 full log?

 Thanks,
 Dafna



 On 04/04/2014 02:23 PM, Michal Skrivanek wrote:

 On 4 Apr 2014, at 12:45, Kevin Tibi wrote:

 Hi,

 I have a pb when i try to snapshot a VM.

 are you running the right qemu/libvirt from virt-preview repo?

 Ovirt engine self hosted 3.4. Two node (host01 and host02).

 my engine.log :

 2014-04-04 12:30:03,013 INFO  [org.ovirt.engine.core.bll.CreateAllSnapshotsFromVmCommand] (org.ovirt.thread.pool-6-thread-24) Ending command successfully: org.ovirt.engine.core.bll.CreateAllSnapshotsFromVmCommand
 2014-04-04 12:30:03,028 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.SnapshotVDSCommand] (org.ovirt.thread.pool-6-thread-24) START, SnapshotVDSCommand(HostName = host01, HostId = fcb9a5cf-2064-42a5-99fe-dc56ea39ed81, vmId=cb038ccf-6c6f-475c-872f-ea812ff795a1), log id: 36463977
 2014-04-04 12:30:03,075 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.SnapshotVDSCommand] (org.ovirt.thread.pool-6-thread-24) Failed in SnapshotVDS method
 2014-04-04 12:30:03,076 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.SnapshotVDSCommand] (org.ovirt.thread.pool-6-thread-24) Command org.ovirt.engine.core.vdsbroker.vdsbroker.SnapshotVDSCommand return value
  StatusOnlyReturnForXmlRpc [mStatus=StatusForXmlRpc [mCode=48, mMessage=Snapshot failed]]
 2014-04-04 12:30:03,077 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.SnapshotVDSCommand] (org.ovirt.thread.pool-6-thread-24) HostName = host01
 2014-04-04 12:30:03,078 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.SnapshotVDSCommand] (org.ovirt.thread.pool-6-thread-24) Command SnapshotVDSCommand(HostName = host01, HostId = fcb9a5cf-2064-42a5-99fe-dc56ea39ed81, vmId=cb038ccf-6c6f-475c-872f-ea812ff795a1) execution failed. Exception: VDSErrorException: VDSGenericException: VDSErrorException: Failed to SnapshotVDS, error = Snapshot failed, code = 48
 2014-04-04 12:30:03,080 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.SnapshotVDSCommand] (org.ovirt.thread.pool-6-thread-24) FINISH, SnapshotVDSCommand, log id: 36463977
 2014-04-04 12:30:03,083 WARN  [org.ovirt.engine.core.bll.CreateAllSnapshotsFromVmCommand] (org.ovirt.thread.pool-6-thread-24) Wasnt able to live snapshot due to error: VdcBLLException: VdcBLLException: org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException: VDSGenericException: VDSErrorException: Failed to SnapshotVDS, error = Snapshot failed, code = 48 (Failed with error SNAPSHOT_FAILED and code 48). VM will still be configured to the new created snapshot
 2014-04-04 12:30:03,097 INFO  [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (org.ovirt.thread.pool-6-thread-24) Correlation ID: 5650b99f, Job ID: c1b2d861-2a52-49f1-9eaa-1b63aa8b4fba, Call Stack: org.ovirt.engine.core.common.errors.VdcBLLException: VdcBLLException: org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException: VDSGenericException: VDSErrorException: Failed to SnapshotVDS, error = Snapshot failed, code = 48 (Failed with error SNAPSHOT_FAILED and code 48)



 My /var/log/messages

 Apr  4 12:30:04 host01 vdsm vm.Vm ERROR vmId=`cb038ccf-6c6f-475c-872f-ea812ff795a1`::The base volume doesn't exist: {'device': 'disk', 'domainID':
 

Re: [Users] Problem with Node in oVirt 3.4.0

2014-04-04 Thread Douglas Schilling Landgraf

On 04/02/2014 10:43 AM, Laercio Motta wrote:

Hi all,
I have a problem approving a node in the manager.
Use: ovirt-node-iso-3.0.4-1.0.201401291204.vdsm34.el6.iso
oVirt Version: 3.4.0 GA
The Problem:
https://cloud.pti.org.br/public.php?service=files&t=41c19c0705df17f793210043c7eb9ec8

On the left, the cluster config; on the right, the Sun ILOM with the oVirt node. It shows this info:
https://cloud.pti.org.br/public.php?service=files&t=c63a75a34cc07f68cae3b0e5e2a9f3d0

Any ideas???



For the record, the below patch from Ravi should help.
http://gerrit.ovirt.org/#/c/26409/

--
Cheers
Douglas
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Fail snapshot

2014-04-04 Thread Itamar Heim

On 04/04/2014 06:11 PM, Kevin Tibi wrote:

Installed Packages
qemu-kvm.x86_64    2:0.12.1.2-2.415.el6_5.6    @updates
Available Packages
qemu-kvm.x86_64    2:0.12.1.2-2.415.el6_5.7    updates


until we resolve this with centos, you need qemu-kvm-rhev.
we are currently providing it here:
http://jenkins.ovirt.org/view/All/job/qemu-kvm-rhev_create_rpms_el6/lastSuccessfulBuild/artifact/rpms/


[


2014-04-04 17:06 GMT+02:00 Kevin Tibi kevint...@hotmail.com
mailto:kevint...@hotmail.com:

It's centos 6.5. Have I need to change my repo ? I have just EPEL
and Ovirt repo.


2014-04-04 16:23 GMT+02:00 Douglas Schilling Landgraf
dougsl...@redhat.com mailto:dougsl...@redhat.com:

Hi,


On 04/04/2014 10:04 AM, Kevin Tibi wrote:

Yes it's a live snapshots. Normal snapshot works.


Question:
Is it a EL6 hosts? If yes, are you using qemu-kvm from:
jenkins.ovirt.org/view/Packaging/job/qemu-kvm-rhev_create_rpms_el6/

http://jenkins.ovirt.org/view/Packaging/job/qemu-kvm-rhev_create_rpms_el6/
  ?


Thanks!


How i make debug in vdsm ?

mom.conf :

log: /var/log/vdsm/mom.log

verbosity: info

vdsm.conf :

[root@host02 ~]# cat /etc/vdsm/vdsm.conf
[addresses]
management_port = 54321

[vars]
ssl = true



2014-04-04 15:27 GMT+02:00 Dafna Ron d...@redhat.com
mailto:d...@redhat.com
mailto:d...@redhat.com mailto:d...@redhat.com:


 is this a live snapshots (wile vm is running)?
 can you please make sure your vdsm log is in debug and
attach the
 full log?

 Thanks,
 Dafna



 On 04/04/2014 02:23 PM, Michal Skrivanek wrote:

 On 4 Apr 2014, at 12:45, Kevin Tibi wrote:

 Hi,

 I have a pb when i try to snapshot a VM.

 are you running the right qemu/libvirt from
virt-preview repo?

 Ovirt engine self hosted 3.4. Two node (host01
and host02).

 my engine.log :

 2014-04-04 12:30:03,013 INFO

[org.ovirt.engine.core.bll.CreateAllSnapshotsFromVmCommand]
(org.ovirt.thread.pool-6-thread-24) Ending command
successfully:
org.ovirt.engine.core.bll.CreateAllSnapshotsFromVmCommand

 2014-04-04 12:30:03,028 INFO


[org.ovirt.engine.core.vdsbroker.vdsbroker.SnapshotVDSCommand]
(org.ovirt.thread.pool-6-thread-24) START,
SnapshotVDSCommand(HostName = host01, HostId =
fcb9a5cf-2064-42a5-99fe-dc56ea39ed81,
vmId=cb038ccf-6c6f-475c-872f-ea812ff795a1), log id: 36463977

 2014-04-04 12:30:03,075 ERROR


[org.ovirt.engine.core.vdsbroker.vdsbroker.SnapshotVDSCommand]
 (org.ovirt.thread.pool-6-thread-24) Failed
in SnapshotVDS

 method
 2014-04-04 12:30:03,076 INFO


[org.ovirt.engine.core.vdsbroker.vdsbroker.SnapshotVDSCommand]
(org.ovirt.thread.pool-6-thread-24) Command
org.ovirt.engine.core.vdsbroker.vdsbroker.SnapshotVDSCommand
return value

StatusOnlyReturnForXmlRpc
[mStatus=StatusForXmlRpc
 [mCode=48, mMessage=Snapshot failed]]
 2014-04-04 12:30:03,077 INFO


[org.ovirt.engine.core.vdsbroker.vdsbroker.SnapshotVDSCommand]
(org.ovirt.thread.pool-6-thread-24) HostName = host01

 2014-04-04 12:30:03,078 ERROR


[org.ovirt.engine.core.vdsbroker.vdsbroker.SnapshotVDSCommand]
 (org.ovirt.thread.pool-6-thread-24) Command

 SnapshotVDSCommand(HostName = host01, HostId =
 fcb9a5cf-2064-42a5-99fe-dc56ea39ed81,
 vmId=cb038ccf-6c6f-475c-872f-ea812ff795a1)
execution

 failed. Exception: VDSErrorException:
VDSGenericException:
 VDSErrorException: Failed to SnapshotVDS, error
= Snapshot
 failed, code = 48
 2014-04-04 12:30:03,080 INFO


[org.ovirt.engine.core.vdsbroker.vdsbroker.SnapshotVDSCommand]
(org.ovirt.thread.pool-6-thread-24) FINISH,
SnapshotVDSCommand, log id: 36463977

 2014-04-04 12:30:03,083 WARN


Re: [Users] oVirt 3.4.0 remove node problem

2014-04-04 Thread Laercio Motta
Np, in this case it is a CentOS-based host...
The problem in this case was a record in the database that was blocking the
host from being removed.
The vm_dynamic table contained a reference to the node (vds_id, see line 6 in
the paste): http://pastebin.com/CfTeHepv
Using a Postgres UPDATE I set to NULL the fields that contain references to
the CentOS-based node (vds_id = 6f90baca-9eb7-48fd-ad44-e7d24ef147a8).
After that it was possible to remove the node =]]


2014-04-04 10:43 GMT-03:00 Itamar Heim ih...@redhat.com:

 On 04/01/2014 09:38 PM, Laercio Motta wrote:

 Hi all,

 Upgrade my oVirt yesterday from 3.3.4 to 3.4.0..
 For my surprise, the node of 3.3 not running in 3.4 (Cluster version)


 shouldn't be a surprise. cluster version implies minimal level of hosts in
 the cluster.
 when you say 'node', do you mean ovirt-node or a rhel/centos/fedora based
 host?

 but how could you upgrade the cluster to 3.4, if the host is 3.3 and up?
 engine should have blocked this iirc?

  Ok... move to maintenance and upgrade :P
 But... Maintenance not is running for 10+ minutes.. (No live migration
 progress)
 I restart the ovirt-engine and live migration running and node is
 maintenance mode.. (nice!!)
 But.. (yes, again)... remove the node from ovirt engine not
 is possible, this message error  in engine.log:
 http://pastebin.com/CfTeHepv

 PS: sorry for my english.. I'am Brazilian :P

 []'s

 --
 ╔══╗
 ║▒▒▒Laercio da Silva Motta▒▒▒║
 ║--║
 ║*Blog: *http://www.laerciomotta.com/ ║
 ║*Twitter:* http://twitter.com/#!/laerciomasala
 http://twitter.com/#%21/laerciomasala ║
 ║*Skype*: laerciomasala ║

 ║ Chave PGP: http://bit.ly/kXS6ga ║
 ╚═v1.0═╝


 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users





-- 
╔══╗
║▒▒▒ Laercio da Silva Motta ▒▒▒║
║--║
║* Blog: *http://www.laerciomotta.com/   ║
║ *Twitter:* http://twitter.com/#!/laerciomasala ║
║ *Skype*: laerciomasala ║
║ Chave PGP: http://bit.ly/kXS6ga  ║
╚═v1.0═╝
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] ovirt-guest-agent on debian?

2014-04-04 Thread Itamar Heim

On 04/01/2014 10:48 AM, René Koch wrote:

On 04/01/2014 09:18 AM, Sven Kieske wrote:

Hi,

well I can use the guest agent for ubuntu 12.04 (precise)
really well on a debian 7 64 bit.


I can confirm this - precise package works fine on Debian.


so can someone please close the gap of building/publishing for Debian?






I didn't encounter any issues so far.

you just need to adjust the apt.sources.list

HTH


Am 31.03.2014 23:05, schrieb Boudewijn Ector:

Hi Guys


I was wondering whether anybody has the guest agent already working on
Debian guests. In my previous install I had a working setup, but currently
I only see Ubuntu packages.

When using these Ubuntu packages I run into trouble quite quickly
due to the highly integrated nature of upstart in Ubuntu.
Has anybody got a suggestion for a Debian package?

Cheers,

Boudewijn




___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] live storage migration - when is this being targeted?

2014-04-04 Thread Itamar Heim

On 03/31/2014 07:11 PM, Federico Alberto Sayd wrote:

On 31/03/14 04:17, Paul Jansen wrote:

From what I can understand ovirt 3.4.0 - or a least the hypervisor
part based on el6 - cannot do live storage migration due to an older
qemu-kvm package.


AFAIK it isn't an old qemu-kvm package but an exclusive RHEV package with the
live storage migration flags activated in the RPM build spec.

I could activate live storage migration in 3.4 using the packages of
this 3rd party repo: http://www.dreyou.org/ovirt/vdsm/Packages/

I manually installed (rpm -i) qemu-img-rhev and qemu-kvm-rhev in each
node. I didn't need to update vdsm.

Yes I know that is not the most elegant and professional way, but I need
live storage migration.

My nodes run  Centos 6.5 and vdsm 4.13.3

I don't know why downstream includes this feature while upstream still
doesn't have it. It sounds like political and commercial reasons from RH.


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users



upstream in this case is CentOS, and it provides what RHEL provides, 
which is without these features.

we are working with CentOS to provide this.
until then, we are building it nightly from CentOS sources here:
http://jenkins.ovirt.org/view/All/job/qemu-kvm-rhev_create_rpms_el6/lastSuccessfulBuild/artifact/rpms/
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Processor Type

2014-04-04 Thread Itamar Heim

On 03/31/2014 06:57 PM, Robert Story wrote:

On Wed, 19 Mar 2014 10:13:28 -0400 (EDT) Omer wrote:
OF> cpu name is cluster level, and usually the lowest common denominator of
OF> the hosts in cluster, to make sure migration works in the cluster.

Is there any documentation on the relationship between cpu types and
compatibility? Right now I have a cluster per cpu type, and it might make
sense to merge some of them if the performance hit were minimal.


that would be a question for qemu-kvm - they provide the cpu models we 
base this on.
you can check which cpu flags get exposed in a guest running the higher 
cpu model, then check if you think your applications would care about 
them...
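
A small sketch of that comparison (the two file names are placeholders for
copies of /proc/cpuinfo saved from the host and from inside the guest):

    # Sketch: diff the CPU flags visible in two saved /proc/cpuinfo dumps.
    def cpu_flags(path):
        with open(path) as f:
            for line in f:
                if line.startswith('flags'):
                    return set(line.split(':', 1)[1].split())
        return set()

    host_flags = cpu_flags('host_cpuinfo.txt')    # placeholder file name
    guest_flags = cpu_flags('guest_cpuinfo.txt')  # placeholder file name
    print('flags on the host but not in the guest: %s'
          % ' '.join(sorted(host_flags - guest_flags)))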


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Error installing self hosted engine

2014-04-04 Thread Dan Kenigsberg
On Fri, Apr 04, 2014 at 04:28:29PM +0200, Sandro Bonazzola wrote:
 Il 01/04/2014 16:15, Sandro Bonazzola ha scritto:
  Il 01/04/2014 15:38, ovirt-t...@arcor.de ha scritto:
  Hello,
 
  I'm new to this list and I need help installing a self hosted engine
 
  I've installed CentOS 6.5 and oVirt 3.4. The following repositories are 
  enabled:
  yum localinstall 
  http://resources.ovirt.org/releases/ovirt-release.noarch.rpm
  yum localinstall 
  http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
  yum localinstall 
  http://mirrors.dotsrc.org/jpackage/6.0/generic/free/RPMS/jpackage-release-6-3.jpp6.noarch.rpm
 
  Just wanted to check out the self hosted feature. But I get this error:
 
  # hosted-engine --deploy
  [ INFO  ] Stage: Initializing
Continuing will configure this host for serving as hypervisor 
  and create a VM where you have to install oVirt Engine afterwards.
Are you sure you want to continue? (Yes, No)[Yes]: 
  [ INFO  ] Generating a temporary VNC password.
  [ INFO  ] Stage: Environment setup
Configuration files: []
Log file: 
  /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20140401153028.log
Version: otopi-1.2.0 (otopi-1.2.0-1.el6)
  [ INFO  ] Hardware supports virtualization
  [ INFO  ] Stage: Environment packages setup
  [ INFO  ] Stage: Programs detection
  [ INFO  ] Stage: Environment setup
  [ ERROR ] Failed to execute stage 'Environment setup': Fault 1: <type 'exceptions.TypeError'>:cannot marshal None unless allow_none is enabled
  [ INFO  ] Stage: Clean up
  [ INFO  ] Stage: Pre-termination
  [ INFO  ] Stage: Termination
 
  It is not this error:
  http://lists.ovirt.org/pipermail/users/2014-March/022424.html
 
  In my logfile are the following errors:
  2014-04-01 15:30:32 DEBUG otopi.plugins.otopi.services.rhel 
  plugin.executeRaw:366 execute: ('/sbin/service', 'vdsmd', 'status'), 
  executable='None', cwd='None', env=None
  2014-04-01 15:30:32 DEBUG otopi.plugins.otopi.services.rhel 
  plugin.executeRaw:383 execute-result: ('/sbin/service', 'vdsmd', 
  'status'), rc=0
  2014-04-01 15:30:32 DEBUG otopi.plugins.otopi.services.rhel 
  plugin.execute:441 execute-output: ('/sbin/service', 'vdsmd', 'status') 
  stdout:
  VDS daemon server is running
 
  2014-04-01 15:30:32 DEBUG otopi.plugins.otopi.services.rhel 
  plugin.execute:446 execute-output: ('/sbin/service', 'vdsmd', 'status') 
  stderr:
 
 
  2014-04-01 15:30:32 DEBUG otopi.plugins.otopi.services.rhel 
  rhel.status:147 service vdsmd status True
  2014-04-01 15:30:32 DEBUG otopi.context context._executeMethod:152 method 
  exception
  Traceback (most recent call last):
File /usr/lib/python2.6/site-packages/otopi/context.py, line 142, in 
  _executeMethod
  method['method']()
File 
  /usr/share/ovirt-hosted-engine-setup/scripts/../plugins/ovirt-hosted-engine-setup/system/vdsmenv.py,
   line 157, in _late_setup
  self._connect()
File 
  /usr/share/ovirt-hosted-engine-setup/scripts/../plugins/ovirt-hosted-engine-setup/system/vdsmenv.py,
   line 78, in _connect
  hwinfo = serv.s.getVdsHardwareInfo()
File /usr/lib64/python2.6/xmlrpclib.py, line 1199, in __call__
  return self.__send(self.__name, args)
File /usr/lib64/python2.6/xmlrpclib.py, line 1489, in __request
  verbose=self.__verbose
File /usr/lib64/python2.6/xmlrpclib.py, line 1253, in request
  return self._parse_response(h.getfile(), sock)
File /usr/lib64/python2.6/xmlrpclib.py, line 1392, in _parse_response
  return u.close()
File /usr/lib64/python2.6/xmlrpclib.py, line 838, in close
  raise Fault(**self._stack[0])
  Fault: Fault 1: <type 'exceptions.TypeError'>:cannot marshal None unless allow_none is enabled
  2014-04-01 15:30:32 ERROR otopi.context context._executeMethod:161 Failed to execute stage 'Environment setup': Fault 1: <type 'exceptions.TypeError'>:cannot marshal None unless allow_none is enabled
  2014-04-01 15:30:32 DEBUG otopi.context context.dumpEnvironment:468 
  ENVIRONMENT DUMP - BEGIN
 
 Corresponding to the above call to getVdsHardwareInfo vdsm log shows:
 
 Thread-22::DEBUG::2014-04-01 
 15:30:32,100::BindingXMLRPC::1067::vds::(wrapper) client [127.0.0.1]::call 
 getHardwareInfo with () {}
 Thread-22::DEBUG::2014-04-01 
 15:30:32,110::BindingXMLRPC::1074::vds::(wrapper) return getHardwareInfo with 
 {'status': {'message': 'Done', 'code': 0},
 'info': {'systemProductName': 'ProLiant DL380 G5', 'systemSerialNumber': 
 'CZC6451JFR',

That's the culprit:

 'systemFamily': None,

Vdsm must never return None since it cannot be marshaled over standard
xmlrpc.

Please open a Vdsm bug on this!

 'systemVersion': 'Not Specified',
 'systemUUID': '435a4336-3435-435a-4336-3435314a4652', 'systemManufacturer': 
 'HP'}}
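
A minimal sketch of that failure mode with Python 2's xmlrpclib (the dict
below is just an illustrative fragment of the hardware info above):

    # Sketch: a None value in an XML-RPC response cannot be marshalled by default.
    import xmlrpclib

    hwinfo = {'systemProductName': 'ProLiant DL380 G5', 'systemFamily': None}

    try:
        xmlrpclib.dumps((hwinfo,))   # default marshaller, allow_none=False
    except TypeError as err:
        print(err)                   # cannot marshal None unless allow_none is enabled

    # Marshalling succeeds once allow_none is enabled, or once the None is
    # replaced by something marshallable (e.g. an empty string).
    xmlrpclib.dumps((hwinfo,), allow_none=True)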
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[Users] oVirt March 2014 Updates

2014-04-04 Thread Itamar Heim

*** Highlights ***

- oVirt 3.4 was released!
  It has been a pleasure seeing the uptake of oVirt this year,
  culminating with the amount of traffic around 3.4 GA.
  Kudos and Thanks to everyone!

  http://lists.ovirt.org/pipermail/announce/2014-March/98.html
  http://www.ovirt.org/OVirt_3.4_Release_Announcement
  http://www.ovirt.org/OVirt_3.4_Release_Notes

  (more on 3.4 below)

- oVirt 3.3.4 Released.

- Great to see some of the oVirt 3.5 discussions picking up. feature
  pages are sent for reviews / review sessions scheduled.

- Docker UI plugin for oVirt by Oved Ourfali
  http://alturl.com/r9925
  http://gerrit.ovirt.org/25814

- engine-devel and arch mailing list unified to de...@ovirt.org
  (and vdsm will try to track it as well)

- Cool gamification progress bar plugin by Vojtech
  http://www.ovirt.org/Gamification

- glance.ovirt.org is up and running Thanks to Oved
  (and available by default in 3.4)
  http://alturl.com/swv2x

- IT Novum case study published
  ... with nearly 1,100 VMware virtual machines successfully migrated
  to oVirt... all on 960 CPU cores and 7680 GB of RAM...
  http://www.ovirt.org/IT_Novum_case_study
  http://www.it-novum.com/en/ovirt-en.html

- Alon Bar-Lev updated Experimental Gentoo overlay is available
  https://github.com/alonbl/ovirt-overlay
  https://wiki.gentoo.org/wiki/OVirt

*** oVirt 3.4 Updates ***

- Up and Running with oVirt 3.4 by Jason Brooks
  (Hosted Engine, glance.ovirt.org, etc)
  http://alturl.com/mjvrq

- oVirt 3.4 brings the hosted engine capability and early support for
  PPC64 (Russian)
  http://tinyurl.com/ovp5zg5

- Free management software oVirt 3.4 with extended memory management
  (German)
  http://heise.de/-2158871

- Version 3.4 of oVirt management is virtualized (German)
  http://alturl.com/qi5kp

- Download page refreshed, thanks to Brian Proffitt and others
  http://www.ovirt.org/Download

*** Future Conferences ***

- oVirt will be represented at the Summit Developer Lounge at Red Hat
  Summit by bkp, and ovedo will be speaking at DevNation

- bkp will have a session at LinuxFest NorthWest on April 26-27

- In addition to two sessions at FISL (Brazil), we get to do an oVirt
  community update session. There will also be oVirt discussions at a
  Red Hat dojo in Sao Paulo before FISL
  http://alturl.com/9w67g

- Rene Koch is organizing two oVirt workshops in Austria: Grazer
  Linuxtage in Graz April 4-5 and Linuxwochen Vienna, May 8-10.
  http://lists.ovirt.org/pipermail/users/2014-March/022797.html

*** Past Conferences ***

- Adam Litke pitched oVirt at the Linux Collaboration Summit
  http://alturl.com/3p6ro
  http://alturl.com/93452

*** Other ***

- Kimchi 1.2 released
  http://miud.in/1F5K

- New Foreman plugin with support for snapshots on oVirt
  http://is.gd/1UkLLS

- Deploying oVirt 3.3 by Arnaud (French)
  http://www.it-connect.fr/ovirt-partie-1/

- iordanov published Opaque (Android SPICE client) v1.2.2 Beta, which can now
  record audio using the microphone
  https://plus.google.com/111635076890324817164/posts/Bu1FaaoxGw8

- Extending oVirt/Red Hat Enterprise Virtualization with Vdsm hooks
  http://developerblog.redhat.com/2014/02/25/extending-rhev-vdsm-hooks/

- HP sending detailed feature pages for NUMA support, slated for 3.5.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] TSC clocksource gets lost after live migration

2014-04-04 Thread Darrell Budic
I see this on some guests as well, possibly related to moving between hosts
with the same CPU family but different absolute CPU speeds?

  -Darrell

On Apr 4, 2014, at 8:33 AM, Michal Skrivanek michal.skriva...@redhat.com 
wrote:

 Hi,
 this is more for the KVM folks I suppose…can you get the qemu process cmdline 
 please?
 
 Thanks,
 michal
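
One quick way to capture that, as a sketch (assuming an EL6-style /proc and a
qemu process whose executable name contains 'qemu'):

    # Sketch: print the command line of every running qemu process.
    import os

    for pid in filter(str.isdigit, os.listdir('/proc')):
        try:
            with open('/proc/%s/cmdline' % pid) as f:
                cmdline = f.read().replace('\0', ' ').strip()
        except IOError:
            continue  # process exited or is not readable
        if 'qemu' in cmdline.split(' ', 1)[0]:
            print('%s: %s' % (pid, cmdline))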
 
 On 3 Apr 2014, at 12:13, Markus Stockhausen wrote:
 
 Hello,
 
 we have an up to date ovirt 3.4 installation. Inside we are running SLES11 
 SP3
 VMs (Kernel 3.0.76-0.11). After live migration of these VMs they all of a 
 sudden
 do not react any longer and CPU usage of the VM goes to 100%.
 
 We identified the kvm-clock source as the culprit and therefore switched to another
 clocksource. We ended up with hpet but are not happy with that, as our initial goal
 was to use the more simply designed TSC clocksource. 
 
 The reason behind that is the question I have for you experts.
 
 Our hosts all have the constant_tsc CPU flag available. Just to mention these
 are not identical hosts. We have a mix of Xeon 5500 and 5600 machines. E.G.
 [root@colovn01 ~]# cat /proc/cpuinfo | grep constant_tsc | wc -l
 8
 
 When we start the VM the client sees TSC as available clocksource:
 
 colvm53:~ # cat 
 /sys/devices/system/clocksource/clocksource0/available_clocksource
 kvm-clock tsc hpet acpi_pm
 
 After the first live migration to another host that also has constant_tsc 
 (see above)
 that flag is lost inside the VM.
 
 colvm53:~ # cat 
 /sys/devices/system/clocksource/clocksource0/available_clocksource
 kvm-clock hpet acpi_pm
 
 Any ideas?
 
 Markus
 
 
 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users
 
 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[Users] inconsistent sign-on, inacurate cpu % -- new to ovirt, new installation, new vm, first vm

2014-04-04 Thread Jeff Clay
I've noticed something inconsistent. When viewing the console using
virt-viewer on Windows, I open the .vv file and sometimes the display
connection prompts me for a password; no user name, only a password.
Regardless of what password I enter, it isn't accepted. Sometimes I don't
get prompted for a password at all. Keep in mind, this is the only VM on
this machine, so it's not like I'm seeing different issues on different
machines. Also, if it's relevant, I have SSO disabled for this VM.

Another issue: this VM has Windows 7 32-bit installed, configured for 2 GB
RAM and 1 socket with 2 cores. When doing Windows updates, the guest's
resource monitor shows the processor constantly peaking at 100% on both
graphs (both cores), yet the CPU utilization in the oVirt web UI doesn't go
above 50%, almost as if 50% = 100%.

Any suggestions?

Thanks in advance. I'm sure I'm going to be bugging everyone with a lot of
questions as I dig further into this.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users