On Wed, Sep 25, 2013 at 8:02 AM, Itamar Heim wrote:
>> Suggestion:
>> If page
>> http://www.ovirt.org/Features/GlusterFS_Storage_Domain
>> is the reference, perhaps it would be better to explicitly specify
>> that one has to start the created volume before going to add a storage
>> domain based o
On Wed, Sep 25, 2013 at 8:11 AM, Vijay Bellur wrote:
>
>
> Have the following configuration changes been done?
>
> 1) gluster volume set server.allow-insecure on
>
> 2) Edit /etc/glusterfs/glusterd.vol on all gluster nodes to contain this
> line:
> option rpc-auth-allow-insecure on
>
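The two quoted steps can be sketched as shell commands; "gv01" is a placeholder volume name, and since the real file lives at /etc/glusterfs/glusterd.vol, the edit below is demonstrated on a scratch copy of a minimal glusterd.vol:

```shell
# On a real node, the volume-level switch would be (placeholder volume "gv01"):
#   gluster volume set gv01 server.allow-insecure on
# The daemon-level option goes into /etc/glusterfs/glusterd.vol on every
# gluster node; here we edit a scratch copy instead.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
volume management
    type mgmt/glusterd
    option working-directory /var/lib/glusterd
end-volume
EOF

# Insert the option before end-volume, only if it is not already present.
grep -q 'rpc-auth-allow-insecure' "$cfg" || \
    sed -i '/^end-volume/i \    option rpc-auth-allow-insecure on' "$cfg"

cat "$cfg"
```

After changing the real glusterd.vol, glusterd needs a restart on each node for the option to take effect.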
On 09/25/2013 11:36 AM, Gianluca Cecchi wrote:
qemu-system-x86_64: -drive
file=gluster://ovnode01/gv01/20042e7b-0929-48ca-ad40-2a2aa22f0689/images/d004045e-620b-4d90-8a7f-6c6d26393a08/dff09892-bc60-4de5-85c0-2a1fa215a161,if=none,id=drive-virtio-disk0,format=raw,serial=d004045e-620b-4d90-8a7f-6c6d
So it seems the problem is
file=gluster://ovnode01/gv01/20042e7b-0929-48ca-ad40-2a2aa22f0689/images/d004045e-620b-4d90-8a7f-6c6d26393a08/dff09892-bc60-4de5-85c0-2a1fa215a161,if=none,id=drive-virtio-disk0,format=raw,serial=d004045e-620b-4d90-8a7f-6c6d26393a08,cache=none,werror=stop,rerror=stop,aio=
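A quick way to sanity-check such a path is to split the gluster:// URL into its parts, since qemu expects gluster://&lt;server&gt;/&lt;volume&gt;/&lt;path-to-image&gt;. A bash sketch, with the server, volume, and image IDs copied verbatim from the log line above:

```shell
# Parse a qemu gluster:// drive URL with plain parameter expansion.
url="gluster://ovnode01/gv01/20042e7b-0929-48ca-ad40-2a2aa22f0689/images/d004045e-620b-4d90-8a7f-6c6d26393a08/dff09892-bc60-4de5-85c0-2a1fa215a161"

rest=${url#gluster://}        # strip the scheme
server=${rest%%/*}            # first path component: the gluster server
rest=${rest#*/}
volume=${rest%%/*}            # second component: the volume name
image=${rest#*/}              # everything after the volume: image path

echo "server=$server volume=$volume"
echo "image=$image"
```

If server and volume come out as expected (ovnode01 and gv01 here), the remaining image path is what must exist inside the volume.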
The oVirt hosts are VMs inside an ESX 5.1 infrastructure.
I think everything is OK in terms of nested virtualization, though.
The CPU of the ESX host is an E7-4870 and the cluster is defined as "Intel Nehalem Family".
SELinux is in permissive mode.
[root@ovnode01 libvirt]# vdsClient -s localhost getVdsCapabilities
HBAInventory =
On 09/25/2013 02:10 AM, Gianluca Cecchi wrote:
Hello,
I'm testing GlusterFS on 3.3 with fedora 19 systems.
One engine (ovirt) + 2 nodes (ovnode01 and ovnode02)
Successfully created a gluster volume composed of two bricks (one per
vdsm node), distributed-replicated.
Suggestion:
If page
http://w
On 09/25/2013 04:40 AM, Gianluca Cecchi wrote:
Hello,
I'm testing GlusterFS on 3.3 with fedora 19 systems.
One engine (ovirt) + 2 nodes (ovnode01 and ovnode02)
Successfully created a gluster volume composed of two bricks (one per
vdsm node), distributed-replicated.
Suggestion:
If page
http://w
Hello,
I'm testing GlusterFS on 3.3 with fedora 19 systems.
One engine (ovirt) + 2 nodes (ovnode01 and ovnode02)
Successfully created a gluster volume composed of two bricks (one per
vdsm node), distributed-replicated.
Suggestion:
If page
http://www.ovirt.org/Features/GlusterFS_Storage_Domain
is
> > Date: Tue, 24 Sep 2013 13:28:26 +0100
> > From: dan...@redhat.com
> > To: cybertimber2...@hotmail.com
> > CC: jbro...@redhat.com; masa...@redhat.com; alo...@redhat.com;
> > users@ovirt.org
> > Subject: Re: [Users] Unable to finish AIO 3.3.0 - VDSM
> >
>
>
> >
> > Here, Vdsm is trying to co
On Tue, Sep 24, 2013 at 02:41:58PM -0300, emi...@gmail.com wrote:
> Thanks for your answer Dan!
>
> Yesterday I was talking with a user on IRC who gave me the hint to
> upgrade libvirt to 1.1.2, after he had successfully tried live
> migration in his setup.
>
> I've upgraded the
I think I found it, but I don't know how to remove it:
/sbin/lvm vgs --config " devices { preferred_names = [\"^/dev/mapper/\"]
ignore_suspended_devices=1 write_cache_state=0
disable_after_error_count=3 filter = [
\"a%36000eb396eb9c0540033|3600508b1001c80dabd7195030a341559%\",
\"r%.*
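For background, LVM filter patterns are tried in order and the first match wins: a%…% accepts a device path, r%…% rejects it (% is just the chosen delimiter). A bash sketch of that matching logic, using the two WWIDs from the accept pattern above and assuming the truncated r%.* is a catch-all reject:

```shell
# First-match-wins filter logic, mirroring
#   filter = [ "a%WWID1|WWID2%", "r%.*%" ]
lvm_filter_decision() {
    local dev=$1
    # accept pattern: the two multipath WWIDs from the vgs --config above
    if [[ $dev =~ 36000eb396eb9c0540033|3600508b1001c80dabd7195030a341559 ]]; then
        echo accept
    else
        echo reject   # assumed catch-all "r%.*%" rejects everything else
    fi
}

lvm_filter_decision /dev/mapper/36000eb396eb9c0540033
lvm_filter_decision /dev/sda
```

So a device only shows up in vgs output if it hits an accept pattern before the reject; if a stale WWID is still listed in the a%…% pattern, that would explain why LVM keeps reporting it.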
On 09/24/2013 10:18 PM, H. Haven Liu wrote:
Apparently I needed to log out and log back in? Because after that, the "edit"
button is no longer grayed out!
einav - thoughts?
On Sep 24, 2013, at 12:16 PM, "H. Haven Liu" wrote:
I reinstalled the hosts, and changed DC to 3.3.
Both DC and Clu
Apparently I needed to log out and log back in? Because after that, the "edit"
button is no longer grayed out!
On Sep 24, 2013, at 12:16 PM, "H. Haven Liu" wrote:
> I reinstalled the hosts, and changed DC to 3.3.
>
> Both DC and Cluster are reporting "Compatibility Version" of 3.3
>
> On Sep 2
I reinstalled the hosts, and changed DC to 3.3.
Both DC and Cluster are reporting "Compatibility Version" of 3.3
On Sep 24, 2013, at 11:26 AM, Itamar Heim wrote:
> On 09/24/2013 07:15 PM, H. Haven Liu wrote:
>> Hello,
>>
>> I upgraded our installation of oVirt from 3.2 to 3.3, and one of the
On 09/24/2013 07:15 PM, H. Haven Liu wrote:
Hello,
I upgraded our installation of oVirt from 3.2 to 3.3, and one of the features I was looking forward to was the ability to resize
a VM disk. However, it appears that the feature is still not available to me. I selected the "Virtual Machines" tab,
On 09/24/2013 06:06 PM, Jason Brooks wrote:
On Tue, 2013-09-24 at 16:15 +0200, Riccardo Brunetti wrote:
Dear ovirt users.
I'm trying to setup an oVirt 3.3 installation using an already existing
OpenStack glance service as an external provider.
When I define the external provider, I put:
Opensta
This storage domain doesn't exist anymore. There is an entry in postgres
with:
"Domain VMExport was forcibly removed by admin@internal"
It was a NFS Export domain.
Is there any chance it is causing problems with iSCSI data domain
operations? Now I can create disks and VMs, but I can't remove th
> Date: Tue, 24 Sep 2013 13:28:26 +0100
> From: dan...@redhat.com
> To: cybertimber2...@hotmail.com
> CC: jbro...@redhat.com; masa...@redhat.com; alo...@redhat.com; users@ovirt.org
> Subject: Re: [Users] Unable to finish AIO 3.3.0 - VDSM
>
> On Mon, Sep 23, 2013 at 06:10:09PM -0400, Nicholas
Hello,
I upgraded our installation of oVirt from 3.2 to 3.3, and one of the features I
was looking forward to was the ability to resize a VM disk. However, it appears
that the feature is still not available to me. I selected the "Virtual
Machines" tab, selected a VM, selected the "Disks" sub-tab,
Vdsm cannot find your storage.
Check your storage and the network connection to it.
On 09/24/2013 03:31 PM, Eduardo Ramos wrote:
Hi all!
I'm getting a strange error on my SPM:
Message from syslogd@darwin at Sep 24 11:19:58 ...
<11>vdsm Storage.DomainMonitorThread ERROR Error while collecting
- Original Message -
> From: "Dan Kenigsberg"
> To: "Dead Horse"
> Cc: "" , vdsm-de...@fedorahosted.org,
> fsimo...@redhat.com, aba...@redhat.com
> Sent: Tuesday, September 24, 2013 11:44:48 AM
> Subject: Re: [Users] vdsm live migration errors in latest master
>
> On Mon, Sep 23, 2013 a
On 09/24/2013 05:06 PM, Jason Brooks wrote:
> On Tue, 2013-09-24 at 16:15 +0200, Riccardo Brunetti wrote:
>> Dear ovirt users.
>> I'm trying to setup an oVirt 3.3 installation using an already existing
>> OpenStack glance service as an external provider.
>> When I define the external provider, I pu
On 09/24/2013 04:56 PM, Gianluca Cecchi wrote:
> On Tue, Sep 24, 2013 at 4:15 PM, Riccardo Brunetti wrote:
>> Dear ovirt users.
>> I'm trying to setup an oVirt 3.3 installation using an already existing
>> OpenStack glance service as an external provider.
>> When I define the external provider, I
On Tue, 2013-09-24 at 16:15 +0200, Riccardo Brunetti wrote:
> Dear ovirt users.
> I'm trying to setup an oVirt 3.3 installation using an already existing
> OpenStack glance service as an external provider.
> When I define the external provider, I put:
>
> Openstack Image as "Type"
> the glance ser
On Tue, Sep 24, 2013 at 4:15 PM, Riccardo Brunetti wrote:
> Dear ovirt users.
> I'm trying to setup an oVirt 3.3 installation using an already existing
> OpenStack glance service as an external provider.
> When I define the external provider, I put:
>
> Openstack Image as "Type"
> the glance servi
On 09/24/13 19:57, René Koch (ovido) wrote:
> On Tue, 2013-09-24 at 08:44 +0800, lofyer wrote:
>> On 2013/9/24 6:03, Itamar Heim wrote:
>>> On 09/23/2013 06:18 PM, lofyer wrote:
Besides assigning a watchdog device to it, are there any other ways to
make the VM autostart even if a user sh
Hi all!
I'm getting a strange error on my SPM:
Message from syslogd@darwin at Sep 24 11:19:58 ...
<11>vdsm Storage.DomainMonitorThread ERROR Error while collecting
domain 0226b818-59a6-41bc-8590-91f520aa7859 monitoring
information#012Traceback (most recent call last):#012 File
"/usr/shar
Dear ovirt users.
I'm trying to setup an oVirt 3.3 installation using an already existing
OpenStack glance service as an external provider.
When I define the external provider, I put:
Openstack Image as "Type"
the glance service endpoint as "URL" (i.e. http://xx.xx.xx.xx:9292). I
used the openstack
Minutes:
http://ovirt.org/meetings/ovirt/2013/ovirt.2013-09-24-13.01.html
Minutes (text):
http://ovirt.org/meetings/ovirt/2013/ovirt.2013-09-24-13.01.txt
Log:
http://ovirt.org/meetings/ovirt/2013/ovirt.2013-09-24-13.01.log.html
#ovirt: oVirt Weekly Meeting
On Tue, 2013-09-24 at 08:44 +0800, lofyer wrote:
> On 2013/9/24 6:03, Itamar Heim wrote:
> > On 09/23/2013 06:18 PM, lofyer wrote:
> >> Besides assigning a watchdog device to it, are there any other ways to
> >> make the VM autostart even if a user shuts it down manually?
> >> _
On Mon, Sep 23, 2013 at 12:15:04PM -0300, emi...@gmail.com wrote:
> Hi,
>
> I'm running ovirt-engine 3.3 on a server with Fedora 19, and two hosts with
> Fedora 19 running vdsm and gluster. I'm using the repositories as it
> says here: http://www.ovirt.org/OVirt_3.3_TestDay, enabling the
> [o
On Mon, Sep 23, 2013 at 04:05:34PM -0500, Dead Horse wrote:
> Seeing failed live migrations and these errors in the vdsm logs with latest
> VDSM/Engine master.
> Hosts are EL6.4
Thanks for posting this report.
The log is from the source of migration, right?
Could you trace the history of the host
Actually this is a bug [1].
[1] https://bugzilla.redhat.com/show_bug.cgi?id=1005562
On 09/24/2013 09:53 AM, lof yer wrote:
> That's fine, I thought there was an option in the API that makes it start as a
> normal one.
>
> 2013/9/24 Itamar Heim <ih...@redhat.com>
>
> On 09/24/2013 06:34 A