I have a couple of old DL380 G5s and I am putting them into their own
cluster for testing various things out.
The install of 3.1 from dreyou goes fine onto them, but when they try to
activate I get the following:
Host xxx.xxx.net.uk moved to Non-Operational state as host does not meet the
2013/1/25 Tom Brown t...@ng23.net:
Host xxx.xxx.net.uk moved to Non-Operational
In oVirt 3.1, GlusterFS support was added. It is an easy way to replicate your
virtual machine storage without too much hassle.
There are two main howtos:
* http://www.middleswarth.net/content/installing-ovirt-31-and-glusterfs-using-either-nfs-or-posix-native-file-system-engine
(Robert
I've recently updated my
http://www.ovirt.org/User:Adrian15/oVirt_engine_migration oVirt engine
migration howto with the
http://www.ovirt.org/User:Adrian15/oVirt_engine_migration#Update_VdcBootStrapUrl
Update VdcBootStrapUrl section.
My next step is to move this section into the
Hey,
I wanted to report that trying to dd from the storage side always makes the
VM's OS see two identically small HDDs. The only workaround I've found that
works is to create a new, bigger drive, boot the VM from a live CD, and dd
from there. When rebooted after completion, the VM's OS then
On 01/25/2013 12:49 PM, Adrian Gibanel wrote:
My next
On Fri 25 Jan 2013 05:23:24 PM CST, Royce Lv wrote:
I patched the Python source managers.py to retry recv() after EINTR;
supervdsm works well and the issue is gone.
This is even though the Python docs declare that only the main thread can set a
new signal handler, and the main thread will be the only one to
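The retry described above can be sketched as a small wrapper. This is an illustrative reconstruction, not the actual managers.py patch; the helper name recv_retry is made up:

```python
import errno
import socket

def recv_retry(conn, size):
    """Call conn.recv(size), retrying when a signal interrupts the call.

    Hypothetical sketch of the managers.py fix described above: on EINTR
    the recv() is simply restarted instead of propagating the error up
    to supervdsm.
    """
    while True:
        try:
            return conn.recv(size)
        except (socket.error, OSError) as e:
            if getattr(e, "errno", None) == errno.EINTR:
                continue  # interrupted by a signal: retry the recv()
            raise  # any other error is real and is re-raised
```

(Since Python 3.5, PEP 475 makes the interpreter retry system calls interrupted by signals itself, so a wrapper like this mainly matters on the Python 2 interpreters vdsm ran on at the time.)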
- Original message -
From: Oved Ourfalli ov...@redhat.com
Hey all,
We had an oVirt workshop this week, which included a few sessions
about the new oVirt UI Plugin framework, including a Hackathon and a
BOF session.
In the end, was any video recorded of this workshop?
If you find
- Original message -
From: Juan Hernandez jhern...@redhat.com
Update VdcBootStrapUrl section.
But I don't like that currently you have to issue a database update
like this:
psql -c "update vdc_options set option_value =
'http://new.manager.com:80/Components/vds/' where
On 01/25/2013 01:15 PM, Adrian Gibanel wrote:
*From: *Juan Hernandez jhern...@redhat.com
Update VdcBootStrapUrl section.
But I don't like that currently you have to issue a database
update like
Hi Mike
thanks for your reply.
I'm using oVirt Node Hypervisor release 2.5.5 (0.1.fc17).
On the latest Fedora 17 kernel the problem is still present.
I have other issues on this node image, so I will test on fc18 with vdsm from
the repo.
Kevin
2013/1/23 Mike Burns mbu...@redhat.com
On Wed,
On Thu, Jan 24, 2013 at 9:12 AM, Vadim Rozenfeld wrote:
On Wednesday, January 23, 2013 06:17:16 PM Gianluca Cecchi wrote:
Hello,
I have a Win XP guest configured with one IDE disk.
I would like to switch to virtio. Is it supported/usable for Win XP as a
disk type on oVirt?
What else are using
Hello,
I have a Windows XP VM on f18 oVirt all-in-one, with rpms from the nightly
repo (3.2.0-1.20130123.git2ad65d0).
Disk and NIC are VirtIO.
When I run it normally (SPICE) I almost immediately get the icon to
open the SPICE connection, and the status of the VM becomes Powering Up.
And in the SPICE window I can see
On 24.01.2013 18:05, Patrick Hurrelmann wrote:
Hi list,
after rebooting one host (single-host DC with local storage), the local
storage domain can't be attached again. The host was set to maintenance
mode and all running VMs were shut down prior to the reboot.
Vdsm keeps logging the following
On Fri, Jan 25, 2013 at 6:10 PM, Gianluca Cecchi wrote:
When I select Run Once, it remains in the executing
phase for about 10 minutes; see this image for a timing comparison:
https://docs.google.com/file/d/0BwoPbcrMv8mvb3FIeHExVHFibms/edit
Sorry, my report was incomplete.
It happens only if I attach a
2013/1/25 Gianluca Cecchi gianluca.cec...@gmail.com:
On 24/01/2013 17:41, Oved Ourfalli wrote:
Hey all,
We had an oVirt workshop this week, which included a few sessions about the new
oVirt UI Plugin framework, including a Hackathon and a BOF session.
I've gathered some feedback we got from the different participants about the
framework, and
Hi,
I reproduced this issue, and I believe it's a python bug.
1. How to reproduce:
With the test case attached: put it under /usr/share/vdsm/tests/,
run # ./run_tests.sh superVdsmTests.py,
and the issue will be reproduced.
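The analysis rests on Python's rule that only the main thread may install a signal handler: in CPython, calling signal.signal() from any other thread raises ValueError. A minimal check of that rule, standard library only and purely illustrative (not vdsm code):

```python
import signal
import threading

def set_handler_in_thread():
    """Try to install a signal handler from a non-main thread.

    Returns the exception raised, or None if it unexpectedly succeeded.
    CPython restricts signal.signal() to the main thread, so on a
    worker thread this should produce a ValueError.
    """
    result = {}

    def worker():
        try:
            signal.signal(signal.SIGINT, signal.SIG_IGN)
            result["error"] = None
        except ValueError as exc:
            result["error"] = exc

    t = threading.Thread(target=worker)
    t.start()
    t.join()
    return result["error"]
```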
2. Log analysis:
We notice a strange pattern in this log:
On Thu, Jan 24, 2013 at 10:44:48AM -0500, Yeela Kaplan wrote:
Hi,
I've tested the new patch on a Fedora 18 vdsm host (created an iSCSI storage
domain, attached it, and activated it) and it works well.
Even though multipath.conf no longer uses getuid_callout to recognize the
device's wwid,
it still knows
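For context, the getuid_callout-to-udev change looks roughly like this in multipath.conf. This is an illustrative fragment only, not the configuration that was actually tested:

```
defaults {
    # Older configurations obtained the wwid with an external helper:
    #   getuid_callout "/lib/udev/scsi_id --whitelisted --device=/dev/%n"
    # Newer multipath ignores getuid_callout and reads the wwid from a
    # udev attribute instead:
    uid_attribute ID_SERIAL
}
```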
20 matches