Re: [Users] [libvirt] Starting VM Error

2013-09-22 Thread Gianluca Cecchi
On Thu, Sep 19, 2013 at 7:50 AM, Gianluca Cecchi wrote:

 Just to note: what worked for me online, without needing to reboot
 my all-in-one F18 system, was the following.

 Take note of the packages to downgrade, just for reference:
 rpm -qa | grep libvirt | grep 0.10.2.7-1

 mkdir libvirt_downgrade
 cd libvirt_downgrade

 LIST=$(rpm -qa | grep libvirt | grep 0.10.2.7-1 | sed 's/0.10.2.7-1/0.10.2.6-1/')
 SITE=http://kojipkgs.fedoraproject.org/packages/libvirt/0.10.2.6/1.fc18/x86_64/
 (note that it is indeed a / in 0.10.2.6/1 and not a -)

 for pack in $LIST
 do
 wget ${SITE}${pack}.rpm
 done

 sudo rpm -Uvh --force *rpm


 connect to webadmin and start one of my WinXP VMs -- ok

 Gianluca
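For clarity, the sed in the LIST assignment above only rewrites each installed package name to the older version string, so that the wget loop fetches matching filenames from koji. A minimal illustration (the package name is just an example):

```shell
# The substitution maps an installed 0.10.2.7-1 package name onto the
# corresponding 0.10.2.6-1 name served by koji.
pkg="libvirt-daemon-kvm-0.10.2.7-1.fc18.x86_64"
echo "$pkg" | sed 's/0.10.2.7-1/0.10.2.6-1/'
# → libvirt-daemon-kvm-0.10.2.6-1.fc18.x86_64
```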


I've put a note on bugzilla too
https://bugzilla.redhat.com/show_bug.cgi?id=1006394

I previously reverted to 0.10.2.6-1 and oVirt worked again.

Now it works with 0.10.2.8-1 in updates-testing repo:
[g.cecchi@tekkaman ~]$ sudo yum --enablerepo=updates-testing update libvirt
Updated:
  libvirt.x86_64 0:0.10.2.8-1.fc18

Dependency Updated:
  libvirt-client.x86_64 0:0.10.2.8-1.fc18
  libvirt-daemon.x86_64 0:0.10.2.8-1.fc18
  libvirt-daemon-config-network.x86_64 0:0.10.2.8-1.fc18
  libvirt-daemon-config-nwfilter.x86_64 0:0.10.2.8-1.fc18
  libvirt-daemon-driver-interface.x86_64 0:0.10.2.8-1.fc18
  libvirt-daemon-driver-libxl.x86_64 0:0.10.2.8-1.fc18
  libvirt-daemon-driver-lxc.x86_64 0:0.10.2.8-1.fc18
  libvirt-daemon-driver-network.x86_64 0:0.10.2.8-1.fc18
  libvirt-daemon-driver-nodedev.x86_64 0:0.10.2.8-1.fc18
  libvirt-daemon-driver-nwfilter.x86_64 0:0.10.2.8-1.fc18
  libvirt-daemon-driver-qemu.x86_64 0:0.10.2.8-1.fc18
  libvirt-daemon-driver-secret.x86_64 0:0.10.2.8-1.fc18
  libvirt-daemon-driver-storage.x86_64 0:0.10.2.8-1.fc18
  libvirt-daemon-driver-uml.x86_64 0:0.10.2.8-1.fc18
  libvirt-daemon-driver-xen.x86_64 0:0.10.2.8-1.fc18
  libvirt-daemon-kvm.x86_64 0:0.10.2.8-1.fc18
  libvirt-lock-sanlock.x86_64 0:0.10.2.8-1.fc18
  libvirt-python.x86_64 0:0.10.2.8-1.fc18
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] cant add storage connection via api

2013-09-22 Thread Michael Pasternak

Hi Yuriy,

please see the correct way of adding a new storage connection at [1], under
the <link href="/api/storageconnections" rel="add"> section.

Also, please let us know if you find any inconsistency between the wiki
and the RSDL.

Regarding the UnmarshalException, I'm not sure this feature made it into
the version you're using.

Ofer, in which release is the mentioned feature available?

thanks.

[1] https://ovirt.spb.stone.local/api?rsdl
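Since the RSDL at [1] is plain XML, a grep can locate the section Michael mentions; the fragment below is a tiny illustrative sample, not the real RSDL:

```shell
# Illustrative RSDL fragment (the real document at https://<engine>/api?rsdl
# is much larger); grep picks out the "add" link for storageconnections.
rsdl='<link href="/api/storageconnections" rel="get">
<link href="/api/storageconnections" rel="add">'
printf '%s\n' "$rsdl" | grep 'rel="add"'
# → <link href="/api/storageconnections" rel="add">
```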

On 09/18/2013 03:21 PM, Yuriy Demchenko wrote:
 Hi,
 
 I've recently upgraded my test lab to ovirt-3.3 (el6) and am trying to add an 
 additional target for an iSCSI domain.
 As described here - http://www.ovirt.org/Features/Manage_Storage_Connections 
 - I'm trying first to add the new connection via the REST API, but the 
 operation fails with error HTTP Status 400 - javax.xml.bind.UnmarshalException: 
 unexpected element (uri:"", local:"storage_connection").
 I'm not very familiar with the REST API and may be doing something wrong, so 
 please help me figure it out.
 
 here's what I put and the reply from the server:
 curl -k -v -u admin@internal:pass -H "Content-type: application/xml" -d 
 '<storage_connection>
 <type>iscsi</type>
 <address>192.168.221.5</address>
 <port>3260</port>
 <target>iqn.2013-09.local.stone.spb:target3.disk</target>
 </storage_connection>' 'https://ovirt.spb.stone.local/api/storageconnections'
 * About to connect() to ovirt.spb.stone.local port 443 (#0)
 *   Trying 192.168.220.13...
 * connected
 * Connected to ovirt.spb.stone.local (192.168.220.13) port 443 (#0)
 * Initializing NSS with certpath: sql:/etc/pki/nssdb
 * warning: ignoring value of ssl.verifyhost
 * skipping SSL peer certificate verification
 * SSL connection using TLS_DHE_RSA_WITH_AES_256_CBC_SHA
 * Server certificate:
 * subject: CN=ovirt.spb.stone.local,O=spb.stone.local,C=US
 * start date: Aug 28 09:28:45 2013 GMT
 * expire date: Aug 03 09:28:47 2018 GMT
 * common name: ovirt.spb.stone.local
 * issuer: CN=CA-ovirt.spb.stone.local.95565,O=spb.stone.local,C=US
 * Server auth using Basic with user 'admin@internal'
  POST /api/storageconnections HTTP/1.1
  Authorization: Basic YWRtaW5AaW50ZXJuYWw6bXAyMjFjMg==
  User-Agent: curl/7.24.0 (x86_64-redhat-linux-gnu) libcurl/7.24.0 
  NSS/3.14.3.0 zlib/1.2.5 libidn/1.24 libssh2/1.4.1
  Host: ovirt.spb.stone.local
  Accept: */*
  Content-type: application/xml
  Content-Length: 170
 
 * upload completely sent off: 170 out of 170 bytes
  HTTP/1.1 400 Bad Request
  Date: Wed, 18 Sep 2013 12:05:51 GMT
  Content-Type: text/html;charset=utf-8
  Vary: Accept-Encoding
  Connection: close
  Transfer-Encoding: chunked
 
 (JBoss Web/7.0.13.Final error report; inline CSS elided)
 HTTP Status 400 - javax.xml.bind.UnmarshalException: unexpected element
 (uri:"", local:"storage_connection"). Expected elements are
 <{}action>,<{}agent>,<{}agents>,<{}api>,<{}application>,<{}applications>,<{}authentication_methods>,<{}body>,<{}bonding>,<{}boot_devices>,<{}boot_protocols>,<{}brick>,<{}brick_details>,<{}brick_memoryinfo>,<{}brick_states>,<{}bricks>,<{}capabilities>,<{}cdrom>,<{}cdroms>,<{}certificate>,<{}cluster>,<{}clusters>,<{}console>,<{}content_types>,<{}cpu>,<{}cpu_modes>,<{}cpu_tune>,<{}cpus>,<{}creation>,<{}creation_states>,<{}custom_properties>,<{}data_center>,<{}data_center_states>,<{}data_centers>,<{}detailedLink>,<{}detailedLinks>,<{}disk>,<{}disk_formats>,<{}disk_interfaces>,<{}disk_states>,<{}disks>,<{}display>,<{}display_types>,<{}domain>,<{}domains>,<{}error_handling>,<{}event>,<{}events>, [list truncated in the archive]
 

Re: [Users] Fwd: Disk statistics

2013-09-22 Thread Michael Pasternak
Hi Joop,

 
 Forwarded message: [Users] Disk statistics.eml
 
 Subject: [Users] Disk statistics
 From: Joop jvdw...@xs4all.nl
 Date: 09/20/2013 05:04 PM
 To: users@ovirt.org
 
 
 Hi All,
 
 Because of a question on IRC I looked at /api/disks/long-disk-id/statistics, 
 but all values (datum) are ZERO. It's a disk of a VM that was running dd 
 if=/dev/vda of=/dev/null, so I would expect something to show up (I did 
 various refreshes of the mentioned stats URL). I also have ovirt-guest-agent 
 running on that VM, in case it matters, and the disk is on an NFS data domain.
 
 What am I missing?

How were you trying to see the statistics? Via the SDK, or the API directly?
Can you please try the same in the VM context [1]?

[1] /api/vms/{vm:id}/disks/{disk:id}/statistics

thanks.
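As a sketch of what the VM-scoped URL in [1] expands to: the IDs below are hypothetical placeholders, and engine.example.com stands in for a real engine host.

```shell
# Hypothetical IDs -- substitute the real VM and disk IDs from /api/vms.
vm_id="deadbeef-0000-0000-0000-000000000001"
disk_id="deadbeef-0000-0000-0000-000000000002"
url="https://engine.example.com/api/vms/${vm_id}/disks/${disk_id}/statistics"
echo "$url"
# One would then fetch it with, e.g.:
#   curl -k -u admin@internal:password "$url"
```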

 
 Regards,
 
 Joop
 


-- 

Michael Pasternak
RedHat, ENG-Virtualization R&D


Re: [Users] Bug 984737 - UX recommendation. Feedback invited

2013-09-22 Thread Markus Stockhausen
 From: users-boun...@ovirt.org [users-boun...@ovirt.org]
 Sent: Friday, September 20, 2013 22:18
 To: users
 Cc: Eldan Hildesheim
 Subject: [Users] Bug 984737 - UX recommendation. Feedback invited
 
 Hi,
 
 Here is a solution recommendation for Bug 984737 - usability: webadmin 
 difficulty in assigning client ip, no gateway possible. Please provide any 
 feedback. Here is a brief  flow and description -
 ...
 Rationale: The basic view is kept uncluttered so that the user can focus 
 on the relationships. Upon mouseover, the details show up as they do 
 today, but the action (gear) icon is located prominently next to the name 
 of the entity and it remains in view for the entire time the mouse is over 
 the entity block or the details panel, to maximize discoverability. A more 
 specific mouseover event is designed to present the menu of actions 
 available in a more sizable widget on demand.
 
 Thoughts?
 
 Thanks
 Malini

Wow, nice to see that such a thing is open for discussion on the mailing
list. I stumbled across the same problem a week ago and asked
myself whether I'm too stupid to operate this dialog. Two days and a good
Sunday morning coffee later, I finally formed an opinion about it.

Working with a UI always starts with learning the basic concepts. If
you go with oVirt, your scope will be:

- Left-click objects to open a new window with details
- Right-click objects to get a context menu
- Use quick buttons to do something with a selected object

What you do not learn:

- drag & drop objects (e.g. to move a VM)
- hover over an object to get a drop-down menu

That said, I both like and hate that the network setup is different. It is very
simple to use and understand, but it does not fit into the general concept.
Moving icon locations around will make it easier, but it will not mitigate
the stylistic inconsistencies.

Sorry for opening this up to a broader discussion, but I had to write down
my sudden inspiration.

Best regards.

Markus


This e-mail may contain confidential and/or privileged information. If you
are not the intended recipient (or have received this e-mail in error)
please notify the sender immediately and destroy this e-mail. Any
unauthorized copying, disclosure or distribution of the material in this
e-mail is strictly forbidden.

e-mails sent over the internet may have been written under a wrong name or
been manipulated. That is why this message sent as an e-mail is not a
legally binding declaration of intention.

Collogia
Unternehmensberatung AG
Ubierring 11
D-50678 Köln

executive board:
Kadir Akin
Dr. Michael Höhnerbach

President of the supervisory board:
Hans Kristian Langva

Registry office: district court Cologne
Register number: HRB 52 497




Re: [Users] Bug 984737 - UX recommendation. Feedback invited

2013-09-22 Thread Lior Vernia
Hello,

I don't think any of the tooltips is needed, see comments inline. Same
arguments go for bonds on the left side of the dialog, which so far seem
to have been overlooked.

On 20/09/13 23:18, Malini Rao wrote:
 Hi, 
 
 Here is a solution recommendation for Bug 984737 - usability: webadmin 
 difficulty in assigning client ip, no gateway possible. Please provide any 
 feedback. Here is a brief flow and description -
 
 Host Network- Normal.png represents the Host network as it displays regularly 
 - i.e, when the user is not interacting with it in any way. User may roll 
 over any of the blocks in the network diagram 
 
 Host Network- Non-specific roll over v2.png - When the user mouses over any 
 given block/entity in the network diagram in a non-specific area (i.e. within 
 the box but not on the gear icon), details about that entity are available. 
 Also, a gear icon displays next to the name of the entity to indicate that 
 some actions are available. 

In the original thread it was all but settled that the information
displayed in this tooltip is useless. I think one tooltip is enough over
a single GUI item. That being said, since I think the other tooltip
should be dropped as well (see comment below), if any tooltip is kept it
should be this one, perhaps with other fields (e.g. IP address).

 
 Host Network- Actions menu.png - When the user mouses over the gear icon, the 
 action menu displays and any other rollover display is replaced with this 
 menu. 

Why an actions menu? The only action we needed here was edit. There
already exists an action menu upon right-click. The original problem was
the visibility/comprehensibility of the pencil icon. If the assumption
is that the gear icon attracts enough attention to hover over, then it
definitely attracts enough attention to be clicked and might as well
just serve as an edit button with a different icon.

 
 Rationale: The basic view is kept uncluttered so that the user can focus on 
 the relationships. Upon mouseover, the details show up as they do today, but 
 the action (gear) icon is located prominently next to the name of the entity 
 and it remains in view for the entire time the mouse is over the entity block 
 or the details panel, to maximize discoverability. A more specific mouseover 
 event is designed to present the menu of actions available in a more sizable 
 widget on demand.
 
 Thoughts?
 
 Thanks
 Malini
 

Lior.


Re: [Users] Unable to attach to storage domain (Ovirt 3.2)

2013-09-22 Thread Federico Simoncelli
Hi Dan, it looks like one of the domains is missing:

6cf7e7e9-3ae5-4645-a29c-fb17ecb38a50

Is there any target missing? (disconnected or somehow faulty or
unreachable)

-- 
Federico

- Original Message -
 From: Dan Ferris dfer...@prometheusresearch.com
 To: users@ovirt.org
 Sent: Friday, September 20, 2013 4:01:06 AM
 Subject: [Users] Unable to attach to storage domain (Ovirt 3.2)
 
 Hi,
 
 This is my first post to the list.  I am happy to say that we have been
 using Ovirt for 6 months with a few bumps, but it's mostly been ok.
 
 Until tonight that is...
 
 I had to do a maintenance that required rebooting both of our Hypervisor
 nodes.  Both of them run Fedora Core 18 and have been happy for months.
   After rebooting them tonight, they will not attach to the storage.  If
 it matters, the storage is a server running LIO with a Fibre Channel target.
 
 Vdsm log:
 
 Thread-22::DEBUG::2013-09-19
 21:57:09,392::misc::84::Storage.Misc.excCmd::(lambda) '/usr/bin/dd
 iflag=direct if=/dev/b358e46b-635b-4c0e-8e73-0a494602e21d/metadata
 bs=4096 count=1' (cwd None)
 Thread-22::DEBUG::2013-09-19
 21:57:09,400::misc::84::Storage.Misc.excCmd::(lambda) SUCCESS: err =
 '1+0 records in\n1+0 records out\n4096 bytes (4.1 kB) copied,
 0.000547161 s, 7.5 MB/s\n'; rc = 0
 Thread-23::DEBUG::2013-09-19
 21:57:16,587::lvm::368::OperationMutex::(_reloadvgs) Operation 'lvm
 reload operation' got the operation mutex
 Thread-23::DEBUG::2013-09-19
 21:57:16,587::misc::84::Storage.Misc.excCmd::(lambda) u'/usr/bin/sudo
 -n /sbin/lvm vgs --config  devices { preferred_names =
 [\\^/dev/mapper/\\] ignore_suspended_devices=1 write_cache_state=0
 disable_after_error_count=3 filter = [
 \\a%360014055193f840cb3743f9befef7aa3%\\, \\r%.*%\\ ] }  global {
 locking_type=1  prioritise_write_locks=1  wait_for_locks=1 }  backup {
 retain_min = 50  retain_days = 0 }  --noheadings --units b --nosuffix
 --separator | -o
 uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free
 6cf7e7e9-3ae5-4645-a29c-fb17ecb38a50' (cwd None)
 Thread-23::DEBUG::2013-09-19
 21:57:16,643::misc::84::Storage.Misc.excCmd::(lambda) FAILED: err =
 '  Volume group 6cf7e7e9-3ae5-4645-a29c-fb17ecb38a50 not found\n';
 rc = 5
 Thread-23::WARNING::2013-09-19
 21:57:16,649::lvm::373::Storage.LVM::(_reloadvgs) lvm vgs failed: 5 []
 ['  Volume group 6cf7e7e9-3ae5-4645-a29c-fb17ecb38a50 not found']
 Thread-23::DEBUG::2013-09-19
 21:57:16,649::lvm::397::OperationMutex::(_reloadvgs) Operation 'lvm
 reload operation' released the operation mutex
 Thread-23::ERROR::2013-09-19
 21:57:16,650::domainMonitor::208::Storage.DomainMonitorThread::(_monitorDomain)
 Error while collecting domain 6cf7e7e9-3ae5-4645-a29c-fb17ecb38a50
 monitoring information
 Traceback (most recent call last):
File /usr/share/vdsm/storage/domainMonitor.py, line 182, in
 _monitorDomain
  self.domain = sdCache.produce(self.sdUUID)
File /usr/share/vdsm/storage/sdc.py, line 97, in produce
  domain.getRealDomain()
File /usr/share/vdsm/storage/sdc.py, line 52, in getRealDomain
  return self._cache._realProduce(self._sdUUID)
File /usr/share/vdsm/storage/sdc.py, line 121, in _realProduce
  domain = self._findDomain(sdUUID)
File /usr/share/vdsm/storage/sdc.py, line 152, in _findDomain
  raise se.StorageDomainDoesNotExist(sdUUID)
 StorageDomainDoesNotExist: Storage domain does not exist:
 (u'6cf7e7e9-3ae5-4645-a29c-fb17ecb38a50',)
 
 vgs output (note that I don't know what the device
 Wy3Ymi-J7bJ-hVxg-sg3L-F5Gv-MQmz-Utwv7z is):
 
 [root@node01 vdsm]# vgs
Couldn't find device with uuid Wy3Ymi-J7bJ-hVxg-sg3L-F5Gv-MQmz-Utwv7z.
VG   #PV #LV #SN Attr   VSize   VFree
b358e46b-635b-4c0e-8e73-0a494602e21d   1  39   0 wz--n-   8.19t  5.88t
build  2   2   0 wz-pn- 299.75g 16.00m
fedora 1   3   0 wz--n- 557.88g 0
 
 lvs output:
 
 [root@node01 vdsm]# lvs
Couldn't find device with uuid Wy3Ymi-J7bJ-hVxg-sg3L-F5Gv-MQmz-Utwv7z.
LV   VG
   Attr  LSizePool Origin Data%  Move Log Copy%  Convert
0b8cca47-313f-48da-84f2-154810790d5a
 b358e46b-635b-4c0e-8e73-0a494602e21d -wi-a   40.00g
0f6f7572-8797-4d84-831b-87dbc4e1aa48
 b358e46b-635b-4c0e-8e73-0a494602e21d -wi-a  100.00g
19a1473f-c375-411f-9a02-c6054b9a28d2
 b358e46b-635b-4c0e-8e73-0a494602e21d -wi-a   50.00g
221144dc-51dc-46ae-9399-c0b8e030f38a
 b358e46b-635b-4c0e-8e73-0a494602e21d -wi-a   40.00g
2386932f-5f68-46e1-99a4-e96c944ac21b
 b358e46b-635b-4c0e-8e73-0a494602e21d -wi-a   40.00g
3e027010-931b-43d6-9c9f-eeeabbdcd47a
 b358e46b-635b-4c0e-8e73-0a494602e21d -wi-a2.00g
4257ccc2-94d5-4d71-b21a-c188acbf7ca1
 b358e46b-635b-4c0e-8e73-0a494602e21d -wi-a  200.00g
4979b2a4-04aa-46a1-be0d-f10be0a1f587
 b358e46b-635b-4c0e-8e73-0a494602e21d -wi-a  100.00g

Re: [Users] Unable to attach to storage domain (Ovirt 3.2)

2013-09-22 Thread Ayal Baron
If I understand correctly you have a storage domain which is built of multiple 
(at least 2) LUNs.
One of these LUNs seems to be missing (Wy3Ymi-J7bJ-hVxg-sg3L-F5Gv-MQmz-Utwv7z 
is an LVM PV UUID).
It looks like you are either not fully connected to the storage server (missing 
a connection), or the LUN mapping in LIO has changed, or the CHAP password has 
changed, or something similar.

LVM is able to report the LVs since the PV which contains the metadata is still 
accessible (which is also why you see the VG and why LVM knows that the 
Wy3Ymi... device is missing).

Can you compress and attach *all* of the vdsm.log* files?
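As a side note, LVM flags a VG with a missing PV as "partial"; in vgs output that shows up as a 'p' in the fourth position of the Attr column. With the sample lines from Dan's vgs output earlier in the thread, a small awk filter picks out the affected VG (here it happens to be the unrelated "build" VG that is partial; the oVirt domain's VG is missing outright):

```shell
# vgs Attr column: a 'p' in position 4 means the VG is partial (a PV is
# missing). Sample lines taken from the vgs output earlier in this thread.
vgs_out='b358e46b-635b-4c0e-8e73-0a494602e21d   1  39   0 wz--n-   8.19t  5.88t
build  2   2   0 wz-pn- 299.75g 16.00m
fedora 1   3   0 wz--n- 557.88g 0'
printf '%s\n' "$vgs_out" | awk '$5 ~ /p/ { print $1 }'
# → build
```

Which VG actually owns the missing PV can then be confirmed with pvs -o pv_name,pv_uuid,vg_name.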

- Original Message -
 Hi Dan, it looks like one of the domains is missing:
 
 6cf7e7e9-3ae5-4645-a29c-fb17ecb38a50
 
 Is there any target missing? (disconnected or somehow faulty or
 unreachable)
 
 --
 Federico
 

Re: [Users] Unable to attach to storage domain (Ovirt 3.2)

2013-09-22 Thread Dan Ferris
We actually got it working.  Both of us were tired from working late, so 
it took us a while to realize that the missing storage domain was actually 
one of the NFS exports.  After removing our NFS ISO domain and NFS 
export domain, everything came up.


Dan

On 9/22/13 6:08 AM, Ayal Baron wrote:

If I understand correctly you have a storage domain which is built of multiple 
(at least 2) LUNs.
One of these LUNs seems to be missing (Wy3Ymi-J7bJ-hVxg-sg3L-F5Gv-MQmz-Utwv7z 
is an LVM PV UUID).
It looks like you are either not fully connected to the storage server (missing 
a connection) or the LUN mapping in LIO has been changed or that the chap 
password has changed or something.

LVM is able to report the LVs since the PV which contains the metadata is still 
accessible (which is also why you see the VG and why LVM knows that the 
Wy3Ymi... device is missing).

Can you compress and attach *all* of the vdsm.log* files?


Re: [Users] Unable to attach to storage domain (Ovirt 3.2)

2013-09-22 Thread Ayal Baron


- Original Message -
 We actually got it working.  Both of us were tired from working late, so
 we didn't find that the missing storage domain was actually one of the
 NFS exports for awhile.  After removing our NFS ISO domain and NFS
 export domain everything came up.
 

Thanks for the update.
Coincidentally, we have a patch upstream that should ignore the ISO and export 
domains in such situations (http://gerrit.ovirt.org/#/c/17986/) and would 
remove the need for you to deactivate them.

 Dan
 
 On 9/22/13 6:08 AM, Ayal Baron wrote:
  If I understand correctly you have a storage domain which is built of
  multiple (at least 2) LUNs.
  One of these LUNs seems to be missing
  (Wy3Ymi-J7bJ-hVxg-sg3L-F5Gv-MQmz-Utwv7z is an LVM PV UUID).
  It looks like you are either not fully connected to the storage server
  (missing a connection) or the LUN mapping in LIO has been changed or that
  the chap password has changed or something.
 
  LVM is able to report the LVs since the PV which contains the metadata is
  still accessible (which is also why you see the VG and why LVM knows that
  the Wy3Ymi... device is missing).
 
  Can you compress and attach *all* of the vdsm.log* files?
 
  - Original Message -
  Hi Dan, it looks like one of the domains is missing:
 
  6cf7e7e9-3ae5-4645-a29c-fb17ecb38a50
 
  Is there any target missing? (disconnected or somehow faulty or
  unreachable)
 
  --
  Federico
 
  - Original Message -
  From: Dan Ferris dfer...@prometheusresearch.com
  To: users@ovirt.org
  Sent: Friday, September 20, 2013 4:01:06 AM
  Subject: [Users] Unable to attach to storage domain (Ovirt 3.2)
 
  Hi,
 
  This is my first post to the list.  I am happy to say that we have been
  using Ovirt for 6 months with a few bumps, but it's mostly been ok.
 
  Until tonight that is...
 
  I had to do a maintenance that required rebooting both of our Hypervisor
  nodes.  Both of them run Fedora Core 18 and have been happy for months.
 After rebooting them tonight, they will not attach to the storage.  If
  it matters, the storage is a server running LIO with a Fibre Channel
  target.
 
  Vdsm log:
 
  Thread-22::DEBUG::2013-09-19
  21:57:09,392::misc::84::Storage.Misc.excCmd::(lambda) '/usr/bin/dd
  iflag=direct if=/dev/b358e46b-635b-4c0e-8e73-0a494602e21d/metadata
  bs=4096 count=1' (cwd None)
  Thread-22::DEBUG::2013-09-19
  21:57:09,400::misc::84::Storage.Misc.excCmd::(lambda) SUCCESS: err =
  '1+0 records in\n1+0 records out\n4096 bytes (4.1 kB) copied,
  0.000547161 s, 7.5 MB/s\n'; rc = 0
  Thread-23::DEBUG::2013-09-19
  21:57:16,587::lvm::368::OperationMutex::(_reloadvgs) Operation 'lvm
  reload operation' got the operation mutex
  Thread-23::DEBUG::2013-09-19
  21:57:16,587::misc::84::Storage.Misc.excCmd::(lambda) u'/usr/bin/sudo
  -n /sbin/lvm vgs --config  devices { preferred_names =
  [\\^/dev/mapper/\\] ignore_suspended_devices=1 write_cache_state=0
  disable_after_error_count=3 filter = [
  \\a%360014055193f840cb3743f9befef7aa3%\\, \\r%.*%\\ ] }  global {
  locking_type=1  prioritise_write_locks=1  wait_for_locks=1 }  backup {
  retain_min = 50  retain_days = 0 }  --noheadings --units b --nosuffix
  --separator | -o
  uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free
  6cf7e7e9-3ae5-4645-a29c-fb17ecb38a50' (cwd None)
  Thread-23::DEBUG::2013-09-19
  21:57:16,643::misc::84::Storage.Misc.excCmd::(lambda) FAILED: err =
  '  Volume group 6cf7e7e9-3ae5-4645-a29c-fb17ecb38a50 not found\n';
  rc = 5
  Thread-23::WARNING::2013-09-19
  21:57:16,649::lvm::373::Storage.LVM::(_reloadvgs) lvm vgs failed: 5 []
  ['  Volume group 6cf7e7e9-3ae5-4645-a29c-fb17ecb38a50 not found']
  Thread-23::DEBUG::2013-09-19
  21:57:16,649::lvm::397::OperationMutex::(_reloadvgs) Operation 'lvm
  reload operation' released the operation mutex
  Thread-23::ERROR::2013-09-19
  21:57:16,650::domainMonitor::208::Storage.DomainMonitorThread::(_monitorDomain)
  Error while collecting domain 6cf7e7e9-3ae5-4645-a29c-fb17ecb38a50
  monitoring information
  Traceback (most recent call last):
    File "/usr/share/vdsm/storage/domainMonitor.py", line 182, in _monitorDomain
      self.domain = sdCache.produce(self.sdUUID)
    File "/usr/share/vdsm/storage/sdc.py", line 97, in produce
      domain.getRealDomain()
    File "/usr/share/vdsm/storage/sdc.py", line 52, in getRealDomain
      return self._cache._realProduce(self._sdUUID)
    File "/usr/share/vdsm/storage/sdc.py", line 121, in _realProduce
      domain = self._findDomain(sdUUID)
    File "/usr/share/vdsm/storage/sdc.py", line 152, in _findDomain
      raise se.StorageDomainDoesNotExist(sdUUID)
  StorageDomainDoesNotExist: Storage domain does not exist:
  (u'6cf7e7e9-3ae5-4645-a29c-fb17ecb38a50',)
 
  vgs output (note that I don't know what device
  Wy3Ymi-J7bJ-hVxg-sg3L-F5Gv-MQmz-Utwv7z is):
 
  [root@node01 vdsm]# vgs
  Couldn't find device with uuid
  Wy3Ymi-J7bJ-hVxg-sg3L-F5Gv-MQmz-Utwv7z.
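
  The failure pattern above — another domain's metadata LV still readable,
  but `vgs` returning rc = 5 for VG 6cf7e7e9-... — usually means LVM cannot
  see the physical volume behind that VG. Since vdsm restricts `vgs` to the
  single multipath device named in its filter, a hedged diagnostic sketch
  (standard multipath/LVM commands run by hand on the hypervisor, not vdsm's
  own invocation; the WWID is copied from the filter in the log):

  ```shell
  # Diagnostic sketch for "Volume group ... not found" after a reboot.
  # vdsm only lets LVM see the multipath device named in its filter, so
  # check that device first, then retry vgs with an open filter to rule
  # the filter itself in or out.
  WWID=360014055193f840cb3743f9befef7aa3   # FC LUN from vdsm's LVM filter
  VG=6cf7e7e9-3ae5-4645-a29c-fb17ecb38a50  # the storage domain's VG name

  # 1. Is the multipath map for the LUN present? (guarded so the sketch
  #    is harmless on machines without the tools installed)
  { command -v multipath >/dev/null 2>&1 && multipath -ll "$WWID"; } || true
  if [ -e "/dev/mapper/$WWID" ]; then
      echo "multipath device present"
  else
      echo "multipath device MISSING - check FC zoning / LIO target first"
  fi

  # 2. Once the filter is opened, does LVM see a PV carrying the VG at all?
  { command -v vgs >/dev/null 2>&1 && \
      vgs --config 'devices { filter = [ "a|.*|" ] }' "$VG"; } || true
  ```

  If the multipath device is missing, the problem is below LVM (target,
  zoning, multipathd); if the open-filter `vgs` finds the VG, the vdsm
  filter or device naming is what changed across the reboot.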

[Users] problem with engine 3.3 on f19 and java path after update

2013-09-22 Thread Gianluca Cecchi
Hello,
I installed ovirt-engine on F19 (no hosts yet) and the ovirt-engine
service started OK.
Today, after updating java, it fails with this message:

Sep 22 13:55:13 ovirt 2013-09-22 13:55:13,637 ovirt-engine: ERROR run:485
Error: Directory '/usr/lib/jvm/jre-1.7.0-openjdk-1.7.0.60-2.4.2.0.fc19.x86_64'
is required but missing
Sep 22 13:55:13 ovirt systemd[1]: ovirt-engine.service: main process
exited, code=exited, status=1/FAILURE
Sep 22 13:55:13 ovirt systemd[1]: Unit ovirt-engine.service entered
failed state.
Sep 22 13:55:13 ovirt avahi-daemon[418]: Registering new address record
for fe80::250:56ff:fe9f:1016 on eth0.*.

When installed:
Sep 16 16:58:22 Installed: 1:java-1.7.0-openjdk-1.7.0.60-2.4.2.0.fc19.x86_64
Sep 16 16:59:00 Installed:
1:java-1.7.0-openjdk-devel-1.7.0.60-2.4.2.0.fc19.x86_64

Today:
Sep 22 13:52:57 Updated: 1:java-1.7.0-openjdk-1.7.0.60-2.4.2.4.fc19.x86_64
Sep 22 13:53:04 Updated: 1:java-1.7.0-openjdk-devel-1.7.0.60-2.4.2.4.fc19.x86_64

During today's update I noticed something regarding java and alternatives,
as if the default path was not defined...

This is my alternatives situation:

[root@ovirt ~]# alternatives --list
jre_1.7.0_openjdk       auto    /usr/lib/jvm/jre-1.7.0-openjdk-1.7.0.60-2.4.2.4.fc19.x86_64
java_sdk_openjdk        manual  /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.60-2.4.2.4.fc19.x86_64
libnssckbi.so.x86_64    auto    /usr/lib64/pkcs11/p11-kit-trust.so
whois                   auto    /usr/bin/jwhois
javac                   manual  /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.60-2.4.2.4.fc19.x86_64/bin/javac
jsp                     auto    /usr/share/java/tomcat-jsp-2.2-api.jar
mta                     auto    /usr/sbin/sendmail.sendmail
java_sdk_1.7.0_openjdk  auto    /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.60-2.4.2.4.fc19.x86_64
jre_openjdk             manual  /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.60-2.4.2.4.fc19.x86_64/jre
jre_gcj                 auto    /usr/lib/jvm/jre-1.5.0-gcj
cifs-idmap-plugin       auto    /usr/lib64/cifs-utils/idmapwb.so
jre_1.5.0               auto    /usr/lib/jvm/jre-1.5.0-gcj
jaxp_transform_impl     auto    /usr/share/java/xalan-j2.jar
jaxp_parser_impl        auto    /usr/share/java/xerces-j2.jar
java                    manual  /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.60-2.4.2.4.fc19.x86_64/jre/bin/java
elspec                  auto    /usr/share/java/tomcat-el-2.2-api.jar
servlet                 auto    /usr/share/java/tomcat-servlet-3.0-api.jar
jre_1.7.0               manual  /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.60-2.4.2.4.fc19.x86_64/jre
java_sdk_1.7.0          manual  /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.60-2.4.2.4.fc19.x86_64

How could this be fixed? And is there any recommendation regarding java
config that we should put into the wiki?

Thanks,
Gianluca
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] problem with engine 3.3 on f19 and java path after update

2013-09-22 Thread Alon Bar-Lev


- Original Message -
 From: Gianluca Cecchi gianluca.cec...@gmail.com
 To: users users@ovirt.org
 Sent: Sunday, September 22, 2013 5:16:48 PM
 Subject: [Users] problem with engine 3.3 on f19 and java path after update
 
 [...]
 How could this be fixed? And is there any recommendation regarding java
 config that we should put into the wiki?

True[1]

[1] https://bugzilla.redhat.com/show_bug.cgi?id=1009863
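
Until that bug is fixed, a workaround sketch: resolve the currently
installed OpenJDK JRE directory instead of the stale versioned path the
engine still expects. The `resolve_jre` helper below is hypothetical (not
part of ovirt-engine or the openjdk packaging); it simply picks the
highest-versioned JRE directory under a given root:

```shell
# Workaround sketch (hypothetical helper, not ovirt-engine's own code):
# the versioned JVM directory name changes on every java update, so
# resolve the newest installed OpenJDK JRE directory instead of
# hard-coding one.
resolve_jre() {
    # $1 = JVM root, e.g. /usr/lib/jvm
    ls -d "$1"/jre-1.7.0-openjdk-* 2>/dev/null | sort -V | tail -n 1
}

# On the engine host you might then point the stale path the engine
# still expects at the current JRE, e.g. (assumption - adapt the old
# directory name to the one in your error message):
#   ln -sfn "$(resolve_jre /usr/lib/jvm)" \
#       /usr/lib/jvm/jre-1.7.0-openjdk-1.7.0.60-2.4.2.0.fc19.x86_64
```

The symlink survives only until the next java update renames the target
again, so it is a stopgap, not a substitute for the engine learning to
locate the JVM dynamically.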

 
 Thanks,
 Gianluca


Re: [Users] All VMs disappeared

2013-09-22 Thread Frank Wall

Hi Yair,

On 15.09.2013 21:19, Yair Zaslavsky wrote:

 I verified that commit c54353b11fb9ac2cb59cbc3e04b58e9114cbecba solved
 this issue as well.


I can confirm that the issue is solved. Thank you!

ovirt-engine-3.3.0-4.fc19.noarch
ovirt-engine-backend-3.3.0-4.fc19.noarch
ovirt-engine-cli-3.3.0.4-1.20130829.git5ae0f2c.fc19.noarch
ovirt-engine-dbscripts-3.3.0-4.fc19.noarch
ovirt-engine-lib-3.3.0-4.fc19.noarch
ovirt-engine-restapi-3.3.0-4.fc19.noarch
ovirt-engine-sdk-python-3.3.0.6-1.fc19.noarch
ovirt-engine-setup-3.3.0-4.fc19.noarch
ovirt-engine-setup-plugin-allinone-3.3.0-4.fc19.noarch
ovirt-engine-tools-3.3.0-4.fc19.noarch
ovirt-engine-userportal-3.3.0-4.fc19.noarch
ovirt-engine-webadmin-portal-3.3.0-4.fc19.noarch
ovirt-host-deploy-1.1.1-1.fc19.noarch
ovirt-host-deploy-java-1.1.1-1.fc19.noarch
ovirt-host-deploy-offline-1.1.1-1.fc19.noarch
ovirt-image-uploader-3.3.0-1.fc19.noarch
ovirt-iso-uploader-3.3.0-1.fc19.noarch
ovirt-log-collector-3.3.0-1.fc19.noarch


Thanks
- Frank