Hi,
we have the following problem:
we create / start / stop
whole VMs / data centers / storage etc.
(basically: everything oVirt can handle
via the REST API).
But if you want to know e.g. the status
of a VM (or anything else) you need to constantly
poll the API.
This is not what we want to do, as it
does
On 12/17/2013 03:08 AM, Sven Kieske wrote:
- Original Message -
From: Gianluca Cecchi gianluca.cec...@gmail.com
To: Tomas Jelinek tjeli...@redhat.com
Cc: Marc-André Lureau mlur...@redhat.com,
spice-de...@lists.freedesktop.org, users users@ovirt.org
Sent: Tuesday, December 17, 2013 12:04:08 AM
Subject: Re: [Spice-devel]
Note that you can usually get all the information you want using one API
call, which should still scale.
For instance, /ovirt-engine/api/vms will give you a list of all VMs and
their statuses, so you can just run an XPath and get the status of all of
them.
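The one-call approach can be sketched from the command line. The engine URL, credentials, and VM names below are made up for the demo, and it assumes `xmllint` (from libxml2) is installed; only the commented-out `curl` line would differ in a real setup:

```shell
# One GET fetches the whole VM collection; one XPath pulls the statuses
# out -- no per-VM polling. Real fetch (placeholders for URL/credentials):
#   curl -s -k -u 'admin@internal:password' \
#        'https://engine.example.com/ovirt-engine/api/vms' -o vms.xml
# Demo: a sample response shaped like the API's <vms> collection.
cat > vms.xml <<'EOF'
<vms>
  <vm id="1"><name>web01</name><status><state>up</state></status></vm>
  <vm id="2"><name>db01</name><status><state>down</state></status></vm>
</vms>
EOF
# Names of all VMs currently up, via a single XPath query:
xmllint --xpath '//vm[status/state="up"]/name/text()' vms.xml
echo
rm -f vms.xml
```

The same XPath works unchanged on the full `/ovirt-engine/api/vms` response, however many VMs it contains.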
On Tue, Dec 17, 2013 at 10:18 AM, Itamar
- Original Message -
@Marc-Andre: do you happen to know which Windows/Linux versions added
this support to spice? I did not find it anywhere...
checking the NEWS files:
spice-gtk v0.15
virt-viewer v0.5.5
spice-xpi v2.8.90
Thanks, enriched the wiki with
Thank you both. I'll see that on Friday and let you know the results.
Regards,
On 17/12/13 05:32, Itamar Heim wrote:
On 12/17/2013 01:39 AM, Sergey Gotliv wrote:
Juan,
Ping me when you are doing that; Friday is not a working day, but I
usually check my mail anyway.
I tested option #2
On 12/11/2013 03:57 AM, Pascal Jakobi wrote:
Context :
vdsm-4.13.0-11.fc19.x86_64
sanlock-2.8-1.fc19.x86_64
Creating storage domain (NFS) fails with error in engine.log such as
Error code AcquireHostIdFailure and error message VDSGenericException:
VDSErrorException: Failed to CreateStorage
And
You can use the DWH API to check these things.
The status is sampled and stored for most entities every 1 minute by default
(and can be set to less than that).
Yaniv
- Original Message -
From: Sven Kieske s.kie...@mittwald.de
To: users@ovirt.org, engine-de...@ovirt.org
Sent:
On Mon, Dec 16, 2013 at 06:01:51PM -0500, Antoni Segura Puimedon wrote:
- Original Message -
From: Moti Asayag masa...@redhat.com
To: Antoni Segura Puimedon asegu...@redhat.com
Cc: users@ovirt.org, Juan Pablo Lorier jplor...@gmail.com
Sent: Monday, December 16, 2013 8:43:24 PM
Hi Sander,
This is a known issue caused by the generateDS python bindings we use:
they are extremely slow at python-xml marshalling and unable to recognize
cyclic references in the objects.
I'm planning to upgrade from 2.9a to 2.12 in 3.4; if that doesn't help, we may
consider other options.
On
On Mon, Dec 16, 2013 at 10:23:36AM -0500, Bob Doolittle wrote:
On 12/16/13 08:44, Dan Kenigsberg wrote:
On Mon, Dec 16, 2013 at 09:58:15AM +0200, Itamar Heim wrote:
On 12/13/2013 09:15 PM, Bob Doolittle wrote:
Hi,
With VMware ESX, when you edit the CD device you have of course the
option
On Mon, Dec 16, 2013 at 10:24:18PM +0100, tristan...@libero.it wrote:
hello,
does anyone know the expected fix for this bug?
https://bugzilla.redhat.com/show_bug.cgi?id=1017289
the one that permits using gluster w/o the FUSE overhead?
There are a couple of hackish patches linked to Bug 1022961 - Running a
On Tue, 2013-12-17 at 02:39 -0500, Itamar Heim wrote:
On 12/16/2013 05:47 AM, René Koch wrote:
Hi,
-Original message-
From:Sander Grendelman san...@grendelman.com
Sent: Monday 16th December 2013 11:00
To: Joop jvdw...@xs4all.nl
Cc: users@ovirt.org
Subject: Re: [Users]
On Tue, Dec 17, 2013 at 11:26 AM, Tomas Jelinek wrote:
hey,
On Tue, Dec 17, 2013 at 04:39:38PM +0100, Gianluca Cecchi wrote:
Hope we can get it also on Fedora 19; if the bug referenced for 6.x
and resolved in 6.5 is this one I found:
https://bugzilla.redhat.com/show_bug.cgi?id=994613
it should only be a matter of porting the upstream patch to Fedora 19,
On Tue, Dec 17, 2013 at 5:37 PM, Christophe Fergeau wrote:
Hello,
I am trying to set up oVirt with this configuration:
1 oVirt 3.3 engine (upgraded from 3.2)
2 oVirt 3.3 nodes
CentOS 6.5
GlusterFS
I tried to follow the gluster documentation:
http://www.gluster.org/2013/09/ovirt-3-3-glusterized/, using POSIXFS since
I use CentOS and not Fedora.
The problem
On Tue, Dec 17, 2013 at 05:53:06PM +0100, Gianluca Cecchi wrote:
No, the fact is that if I leave the default auto option, it actually
tries the spice-xpi plugin, which then fails, instead of remote-viewer.
So I have to manually specify native for every VM.
Is this expected behavior if
Hi Michal,
Thanks for fixing the product information on the bug-report.
I see that single-click is used to select a VM to show details on the right
at the moment, so I understand your reluctance to change the default
behavior. That said, I think having users look for the console icon in the
On 12/17/2013 02:00 AM, tristan...@libero.it wrote:
yes, my idea is to start with 1 node (storage+compute) and then expand with
more servers to add storage and compute.
what do you think?
Definitely doable. I have not come across many instances of this in the
community and would recommend
On Tue, Dec 17, 2013 at 11:25:40AM -0500, Bob Doolittle wrote:
On 12/17/2013 11:22 AM, Bob Doolittle wrote:
I inserted some media and tried again, and got the same output
error. I then tried unmounting the media from the host itself and
retried the command, but got the same result.
In the log
Hi Fabiand,
I have an iSCSI partition on my stateless node. But the problem is I can't
format the partition. The same partition can be formatted and mounted on a
regular machine without problems.
[root@localhost ~]# mkfs.ext3 /dev/sda
mke2fs 1.42.7 (21-Jan-2013)
/dev/sda is entire device, not just one partition!
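The mke2fs output above is its standard whole-device warning; mke2fs follows it with a "Proceed anyway? (y,n)" prompt, which looks like a hang or failure when run non-interactively. A minimal sketch of the two usual ways around it, demonstrated on a file-backed image so no real disk is touched (on the node, the actual disk from the report, /dev/sda, would take the image's place):

```shell
set -e
# Demo on a 64 MiB file so no real device is touched; on the node the
# target would be the actual disk instead of disk.img.
truncate -s 64M disk.img
# Option 1: -F force-answers mke2fs's "is entire device" prompt.
# Option 2 (cleaner): partition the disk first and format the partition, e.g.
#   parted -s /dev/sda mklabel msdos mkpart primary ext3 1MiB 100%
#   mkfs.ext3 /dev/sda1
mkfs.ext3 -q -F disk.img
# Verify the superblock was actually written:
tune2fs -l disk.img | grep -q 'Filesystem volume name' && echo ok
rm -f disk.img
```

Whether the stateless node also persists the result across reboots is a separate question; this only shows why the mkfs invocation itself appears to fail.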
Hi,
My node install followed the automatic install instructions in
Now that we’ve got a self-hosted engine, has anyone given any thought to
allowing one engine to remotely control another?
The scenario I’m imagining has one cluster at a DC with 4 nodes, and another
cluster at a different DC with 3 nodes. Connectivity is normally pretty good,
but it’s been
Should read “I was still running 3.2.3” there…
On Dec 17, 2013, at 10:21 PM, Darrell Budic darrell.bu...@zenfire.com wrote:
Would it be possible to start adding, to the various Features pages, the
oVirt version in which a feature became available?
This one as an example:
On 12/17/2013 9:18 AM, René Koch (ovido) wrote:
I used this image for installing Solaris 11: sol-11_1-text-x86.iso,
which is the Solaris 11.1 text based installer...
Works fine for me on my oVirt 3.3 setup - installed Solaris 11.1 right
now - (did also work fine in oVirt 3.2 and on a RHEL 6
Any recommended file system to use to hold VM disk images? EXT4? BTRFS (with
COW disabled, I assume)? ZFS? XFS?
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
Hello!
In order to increase disk space I want to add a new disk drive to an oVirt
node. After adding it, should I proceed as normal - pvcreate, vgcreate,
lvcreate and so on - or will this configuration not persist?
Thx
From: users-boun...@ovirt.org [users-boun...@ovirt.org] on behalf of
Blaster [blas...@556nato.com]
Sent: Wednesday, 18 December 2013 06:24
To: users@ovirt.org
Subject: [Users] Recommended file system?
Any recommended file system to use to hold VM disk images? EXT4?