I am the reporter of the bz below, and the author of the fix [1]. However, I
don't know JBoss at all - I just searched around, found the option
'deployment-timeout' somewhere (not sure where; a quick search can probably find it)
and verified that the fix works. It seems like neither any of the
On Wed, May 14, 2014 at 1:45 AM, Itamar Heim ih...@redhat.com wrote:
On 05/13/2014 05:22 AM, Sven Kieske wrote:
On 13.05.2014 11:12, Dan Kenigsberg wrote:
If you are planning to run only a couple of VMs on a single laptop,
going to basics and using qemu/libvirt directly, or gnome-boxes,
On 13/05/2014 20:29, Dan Kenigsberg wrote:
On Tue, May 13, 2014 at 12:07:09PM +0200, Sandro Bonazzola wrote:
On 12/05/2014 23:53, Bob Doolittle wrote:
On 05/12/2014 02:49 PM, Bob Doolittle wrote:
Hi,
I'm trying to set up a fresh system on F19, using oVirt 3.4.
Hi Bob, can you
- Original Message -
From: R P Herrold herr...@owlriver.com
To: Sven Kieske s.kie...@mittwald.de
Cc: users@ovirt.org
Sent: Tuesday, May 13, 2014 7:31:14 PM
Subject: [ovirt-users] getting 404 after fresh install of oVirt 3.4 on CentOS
6.5 (+ solution)
On Tue, 13 May 2014, Sven
Hi,
We're going to start composing oVirt 3.5.0 Alpha on 2014-05-16 08:00 UTC from
master branches.
The bug tracker [1] shows the following proposed blockers to be reviewed:
Bug ID Whiteboard Status Summary
1001100 integration NEW Add log gathering for a new ovirt
On Tue, May 13, 2014 at 6:44 PM, Einav Cohen eco...@redhat.com wrote:
we are typically not updating translations for versions that were already
GA'd (3.4), therefore I recommend concentrating at this point only on the
'master' version (which is currently tracking the ovirt-engine-3.5
On 05/14/2014 04:06 AM, Doug Bishop wrote:
Here ya go :
[root@ovirt-vmtest ~]# /usr/libexec/qemu-kvm -M ?
Supported machines are:
pc RHEL 6.4.0 PC (alias of rhel6.4.0)
rhel6.4.0 RHEL 6.4.0 PC (default)
rhel6.3.0 RHEL 6.3.0 PC
rhel6.2.0 RHEL 6.2.0 PC
rhel6.1.0 RHEL 6.1.0 PC
rhel6.0.0
Quoting Sahina Bose sab...@redhat.com:
On 05/13/2014 07:27 PM, Vadims Korsaks wrote:
Quoting Humble Devassy Chirammal
humble.deva...@gmail.com :
|
| Quoting Vijay Bellur vbel...@redhat.com:
| On 05/11/2014 02:04 AM, Vadims Korsaks
wrote:
| HI!
|
| Created 2
On 05/14/2014 02:36 PM, Vadims Korsaks wrote:
Quoting Sahina Bose sab...@redhat.com:
On 05/13/2014 07:27 PM, Vadims Korsaks wrote:
Quoting Humble Devassy Chirammal
humble.deva...@gmail.com :
|
| Quoting Vijay Bellur vbel...@redhat.com:
| On 05/11/2014 02:04 AM, Vadims
Thanks a lot!! Now it's much better - from the VM I can
get dd with ~60 MB/s.
This is still ~2x lower than from the host, but 3x
better than it was before :)
cool :)
BTW, could not find the GUI 'Optimize for virt
store' option in oVirt 3.5.
Thanks Sahina for the inputs here.
--Humble
On Tue, May
Quoting Sahina Bose sab...@redhat.com:
On 05/14/2014 02:36 PM, Vadims Korsaks wrote:
Quoting Sahina Bose sab...@redhat.com:
On 05/13/2014 07:27 PM, Vadims Korsaks wrote:
Quoting Humble Devassy Chirammal
humble.deva...@gmail.com :
Quoting Vijay Bellur vbel...@redhat.com:
On
Thanks Sandro
On 2014-05-12 16:43, Sandro Bonazzola wrote:
On 12/05/2014 14:11, Jim Rippon wrote:
Hi all, I'm running a
production stack on 3.3.3 with Three datacentres (one in my DMZ with two
hosts with NFS, one in my DMZ on the engine host with local storage and
one internal with
On Wed, May 14, 2014 at 08:43:40AM +0200, John Smith wrote:
On Wed, May 14, 2014 at 1:45 AM, Itamar Heim ih...@redhat.com wrote:
On 05/13/2014 05:22 AM, Sven Kieske wrote:
On 13.05.2014 11:12, Dan Kenigsberg wrote:
If you are planning to run only a couple of VMs on a single laptop,
On Wed, May 14, 2014 at 08:49:06AM +0200, Sandro Bonazzola wrote:
On 13/05/2014 20:29, Dan Kenigsberg wrote:
On Tue, May 13, 2014 at 12:07:09PM +0200, Sandro Bonazzola wrote:
On 12/05/2014 23:53, Bob Doolittle wrote:
On 05/12/2014 02:49 PM, Bob Doolittle wrote:
Hi,
I'm
On 08/05/2014 18:06, Paul Heinlein wrote:
On Thu, 8 May 2014, Gabi C wrote:
Which is the proper way of keeping nodes up to date?
I tried from the WebUI, after putting the node on maintenance.
Host is in maintenance mode, you can Activate it by pressing the Activate
button.
On Tue, May 13, 2014 at 07:38:00PM -0400, Itamar Heim wrote:
On 05/13/2014 12:38 PM, supo...@logicworks.pt wrote:
Hi,
I'm trying to install oVirt All-in-One, following these steps:
Install FC19
yum localinstall
http://resources.ovirt.org/releases/ovirt-release.noarch.rpm
Please use
On Tue, May 13, 2014 at 06:06:00PM -0700, Doug Bishop wrote:
Here ya go :
[root@ovirt-vmtest ~]# /usr/libexec/qemu-kvm -M ?
Supported machines are:
pc RHEL 6.4.0 PC (alias of rhel6.4.0)
rhel6.4.0 RHEL 6.4.0 PC (default)
rhel6.3.0 RHEL 6.3.0 PC
rhel6.2.0 RHEL 6.2.0 PC
rhel6.1.0
Hi,
I have a very weird issue with the latest version of oVirt 3.4.1; I upgraded
because I already had this issue.
It appeared when I wanted to re-create a new export domain on my first hypervisor,
larger than the first one.
After this point, the datacenter went into a non-responsive state, so I've
On 05/14/2014 02:55 PM, Vadims Korsaks wrote:
Quoting Sahina Bose sab...@redhat.com:
On 05/14/2014 02:36 PM, Vadims Korsaks wrote:
Quoting Sahina Bose sab...@redhat.com:
On 05/13/2014 07:27 PM, Vadims Korsaks wrote:
Quoting Humble Devassy Chirammal
humble.deva...@gmail.com :
Citējot Vijay
I could really use some help on this one. My efforts to debug VDSM via
instrumenting the python code are not working - the compiled code must be
cached somehow.
Something is wrong with the way the multipathd service is being restarted.
It doesn't look to me like systemctl is even being called
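On the "compiled code must be cached" point: CPython caches bytecode in .pyc files and keeps using a cache it considers fresh, which can make edits to instrumented .py sources appear to do nothing. A standalone sketch (not VDSM-specific) demonstrating the cache and its removal; on the host itself the equivalent would be something like `find /usr/share/vdsm -name '*.pyc' -delete` followed by a vdsmd restart:

```python
import compileall
import pathlib
import tempfile

# Demonstration: Python writes compiled bytecode (.pyc) alongside sources
# and reuses it; deleting the .pyc files forces recompilation, so edits
# to the .py sources take effect on the next import.
with tempfile.TemporaryDirectory() as d:
    src = pathlib.Path(d) / "example.py"
    src.write_text("VALUE = 1\n")
    compileall.compile_dir(d, quiet=1)           # writes the bytecode cache
    cached = list(pathlib.Path(d).rglob("*.pyc"))
    print(len(cached) >= 1)                      # True: cache exists
    for pyc in cached:
        pyc.unlink()                             # clear the stale cache
    print(list(pathlib.Path(d).rglob("*.pyc")))  # []: recompiled next import
```

Note that Python 2.7 (as used by VDSM here) puts the .pyc next to each .py, while Python 3 uses a `__pycache__` directory; the clearing step is the same idea in both.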
On 05/14/2014 06:11 AM, Dan Kenigsberg wrote:
On Tue, May 13, 2014 at 06:06:00PM -0700, Doug Bishop wrote:
Here ya go :
[root@ovirt-vmtest ~]# /usr/libexec/qemu-kvm -M ?
Supported machines are:
pc RHEL 6.4.0 PC (alias of rhel6.4.0)
rhel6.4.0 RHEL 6.4.0 PC (default)
rhel6.3.0 RHEL
On 05/14/2014 05:45 AM, Dan Kenigsberg wrote:
On Wed, May 14, 2014 at 08:43:40AM +0200, John Smith wrote:
On Wed, May 14, 2014 at 1:45 AM, Itamar Heim ih...@redhat.com wrote:
On 05/13/2014 05:22 AM, Sven Kieske wrote:
On 13.05.2014 11:12, Dan Kenigsberg wrote:
If you are planning to run
Hi,
We're going to start composing oVirt 3.4.2 RC on 2014-05-27 08:00 UTC from 3.4
branches.
The bug tracker [1] shows no blocking bugs for the release
There are still 75 bugs [2] targeted to 3.4.2.
Excluding node and documentation bugs we still have 47 bugs [3] targeted to
3.4.2.
Hello everybody!!
Indeed it was a bug in version 3.3.0 that I was using; today I updated
the oVirt engine to version 3.4.1 and the problem disappeared.
Thank you all for the help.
2014-05-13 20:31 GMT-03:00 Itamar Heim ih...@redhat.com:
On 05/13/2014 11:07 AM, Fagner Patricio wrote:
Hi Neil,
Can you please attach the logs of the engine and VDSM?
What is in the event log? Was there any operation being done on
the disk before?
regards,
Maor
On 05/14/2014 03:35 PM, Neil wrote:
Hi guys,
I'm trying to remove a VM and reclaim the space that the VM was using.
This particular
Hi Neil,
The LUNs are showing as selectable since they are not being used by
the engine, although other setups might use them, which we can't be aware of.
We can't be sure whether the LUNs are actively used or not, because there
could also be a VG which was not cleaned up properly (For example of an
Allon, could it be related to
https://bugzilla.redhat.com/show_bug.cgi?id=1083476 ?
- Original Message -
From: VONDRA Alain avon...@unicef.fr
To: users@ovirt.org
Sent: Wednesday, May 14, 2014 1:29:13 PM
Subject: [ovirt-users] Datacenter in no more responsive
Hi,
I have a very weird
Hi Maor,
Thanks for the details.
These LUNs were assigned to the engine in the past; oVirt is the
only system using the LUNs.
Also they have never been destroyed or even belonged to an old setup,
this is the original setup.
All of the LUNs are part of a single RAID array in the SAN which
Hi again,
Just to complete my first mail: I've tried a new install from
scratch 4 or 5 times using engine-cleanup and engine-setup, but nothing works...
Thank you
Alain VONDRA
Chargé d'exploitation des Systèmes d'Information
Direction Administrative et Financière
+33 1 44 39 77 76
UNICEF
2014-05-09 15:55 GMT+02:00 Nicolas Ecarnot nico...@ecarnot.net:
Hi,
On our second oVirt setup in 3.4.0-1.el6 (that was running fine), I did a
yum upgrade on the engine (...sigh...).
Then rebooted the engine.
This machine is hosting the NFS export domain.
Though the VM are still running, the
- Original Message -
From: Ricardo Esteves ricardo.m.este...@gmail.com
To: Federico Simoncelli fsimo...@redhat.com
Cc: users@ovirt.org
Sent: Wednesday, May 14, 2014 1:45:53 AM
Subject: RE: [ovirt-users] oVirt 3.2 - iSCSI offload (broadcom - bnx2i)
In attachment follows the defaults
On 14/05/2014 15:36, Giorgio Bersano wrote:
Following the URL above and the BZ opened by the user
(https://bugzilla.redhat.com/show_bug.cgi?id=1072900), I see this has been
corrected in 3.4.1 - which gives a perfectly connected NFS export domain, but
an empty one?
Hi,
sorry for jumping late on an
This bit me as well (for Fedora, however).
There are a whole bunch of ovirt-release variants lying around in
various parts of the repository that should be cleaned up as they are
merely attractive nuisances
(http://en.wikipedia.org/wiki/Attractive_nuisance_doctrine).
Particularly since the
OK, I installed version 3.4.1-1.fc19, and now the host is up, but I have no access to
any storage, including ISO.
On the Data Centers tab, under local_datacenter, there is no storage.
There was no configuration in the /etc/exports file. I included 2 lines:
[root@ovirt ~]# cat /etc/exports
/home/iso
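A common gotcha with this kind of setup (a hedged sketch; the paths, client wildcard, and option choices are illustrative, not taken from the thread): oVirt's vdsm user runs as uid 36 / gid 36, so NFS exports for storage and ISO domains are typically exported so that access maps to that user:

```
# Illustrative /etc/exports entries (adjust paths and clients to your setup);
# vdsm runs as uid 36 (user vdsm) / gid 36 (group kvm):
/home/iso     *(rw,sync,no_subtree_check,all_squash,anonuid=36,anongid=36)
/home/data    *(rw,sync,no_subtree_check,all_squash,anonuid=36,anongid=36)
```

After editing, re-export with `exportfs -ra` and make sure the exported directories themselves are owned by 36:36.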
Didi,
Some background is that I first ran into this when I added dwh+reports
to my 3.4 setup, and I found your bz then and did the fix (with
changes below) and it worked for me. The fixed ovirt-engine.xml.in got
overwritten when I upgraded to 3.4.1, and I reapplied it.
But I did change
The only thing I've been able to find on this is
http://lists.freedesktop.org/archives/spice-devel/2014-February/016063.html.
I was wondering if there have been any developments since then and if not,
could somebody
On 05/14/2014 10:47 AM, Bob Doolittle wrote:
This bit me as well (for Fedora, however).
There are a whole bunch of ovirt-release variants lying around in
various parts of the repository that should be cleaned up as they are
merely attractive nuisances
On 05/14/2014 02:28 PM, Itamar Heim wrote:
On 05/14/2014 10:47 AM, Bob Doolittle wrote:
It would be helpful if somebody would:
1. Clean up the hazardous versions of ovirt-release scattered throughout
2. Fix the Quick Start Guide links to point someplace useful for
ovirt-release
can you please
Hi all,
sorry for the late reply.
I noticed that I missed the deviceId property on my additional-nic line below,
but I can confirm that the engine vm (installed with my previously modified
template in /usr/share/ovirt-hosted-engine-setup/templates/vm.conf.in as
outlined below) is still up and
- Original Message -
From: Federico Simoncelli fsimo...@redhat.com
To: Ricardo Esteves ricardo.m.este...@gmail.com
Cc: users@ovirt.org
Sent: Wednesday, May 14, 2014 3:47:58 PM
Subject: Re: [ovirt-users] oVirt 3.2 - iSCSI offload (broadcom - bnx2i)
- Original Message -
Thanks John.
When hosted-engine aborts, it uninstalls everything.
So there is no webadmin available.
I've tried modifying the VDSM python code (e.g.
/usr/share/vdsm/storage/multipath.py and
/usr/lib64/python2.7/site-packages/vdsm/tool/service.py) to see/work
around what's going wrong, but
On Wed, 2014-05-14 at 20:06 -0400, Bob Doolittle wrote:
Thanks John.
When hosted-engine aborts, it uninstalls everything.
So there is no webadmin available.
I've tried modifying the VDSM python code (e.g.
/usr/share/vdsm/storage/multipath.py and