On Thu, Oct 3, 2013 at 12:21 AM, Gianluca Cecchi wrote:
On Wed, Oct 2, 2013 at 9:16 PM, Itamar Heim wrote:
On 10/02/2013 12:57 AM, Gianluca Cecchi wrote:
Today I was able to work again on this matter and it seems related to spice.
Every time I start the VM (that is defined with spice) it goes in paused state and after a few minutes the node [...]
and this doesn't happen if the VM is defined with vnc?
On 25.09.2013 09:11, Vijay Bellur wrote:
On 09/25/2013 11:51 AM, Gianluca Cecchi wrote:
On Wed, Sep 25, 2013 at 8:11 AM, Vijay Bellur wrote:
Have the following configuration changes been done?
1) gluster volume set volname server.allow-insecure on
2) Edit /etc/glusterfs/glusterd.vol on all gluster nodes to contain this line:
option rpc-auth-allow-insecure on
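Change 2) has to be applied on every gluster node. A minimal sketch of scripting that edit — run here against a sample file so it stands alone; on a real node the target is /etc/glusterfs/glusterd.vol and glusterd must be restarted afterwards:

```shell
# Sample glusterd.vol (stand-in for /etc/glusterfs/glusterd.vol so the
# sketch runs anywhere; the real file carries more options).
cat > glusterd.vol <<'EOF'
volume management
    type mgmt/glusterd
    option working-directory /var/lib/glusterd
end-volume
EOF

# Insert the allow-insecure option just before the closing "end-volume".
sed -i 's/^end-volume$/    option rpc-auth-allow-insecure on\nend-volume/' glusterd.vol

grep 'rpc-auth-allow-insecure' glusterd.vol
# -> "    option rpc-auth-allow-insecure on"
```

On a live node the edit would be followed by a glusterd restart (e.g. `systemctl restart glusterd` on Fedora 19) so the change takes effect.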
On Thu, Sep 26, 2013 at 11:07 AM, David Riedl wrote:
There is a bug report on your issue:
https://bugzilla.redhat.com/show_bug.cgi?id=988299
Scroll to the end (https://bugzilla.redhat.com/show_bug.cgi?id=988299#c46).
There is a modified glusterVolume.py there. I have the same issue as well, and I'm trying to [...]
I was able to restart the engine and the two hosts.
All restarted again.
Now the effect of running the VM is that it remains in paused state:
- start VM (about 21:54 today)
- it starts and goes into paused mode (arrow icon near the VM)
From image [...]
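When a qemu/KVM guest drops to paused like this, libvirt records the reason, so it can be queried on the host. A hedged diagnostic sketch (the VM name "testvm" is hypothetical; assumes libvirt's virsh on the node):

```shell
# Ask libvirt why the domain is paused; an I/O error reason would fit the
# storage backend (gluster) rejecting access.
virsh domstate --reason testvm

# The per-guest qemu log usually carries the underlying error.
tail -n 20 /var/log/libvirt/qemu/testvm.log
```

These commands only make sense on the hypervisor host while the guest is in the paused state.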
On 09/25/2013 02:10 AM, Gianluca Cecchi wrote:
Hello,
I'm testing GlusterFS on 3.3 with Fedora 19 systems.
One engine (ovirt) + 2 nodes (ovnode01 and ovnode02).
Successfully created a gluster volume composed of two bricks (one for each vdsm node), distributed replicated.
So it seems the problem is [...]
On 09/25/2013 11:36 AM, Gianluca Cecchi wrote:
qemu-system-x86_64: -drive [...]
On Wed, Sep 25, 2013 at 8:11 AM, Vijay Bellur wrote:
Have the following configuration changes been done?
1) gluster volume set volname server.allow-insecure on
2) Edit /etc/glusterfs/glusterd.vol on all gluster nodes to contain this line:
option rpc-auth-allow-insecure on
On Wed, Sep 25, 2013 at 8:02 AM, Itamar Heim wrote:
Suggestion: if the page
http://www.ovirt.org/Features/GlusterFS_Storage_Domain
is the reference, perhaps it would be better to explicitly specify that one has to start the created volume before going to add a storage domain based on the [...]
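The suggested ordering amounts to finishing the gluster-side steps before touching the engine UI. A sketch, reusing the volume name "volname" from the thread (brick paths and replica layout are illustrative only):

```shell
# Create and *start* the volume before adding the oVirt storage domain;
# a created-but-not-started volume cannot be mounted by the hosts.
gluster volume create volname replica 2 \
    ovnode01:/gluster/brick1 ovnode02:/gluster/brick1
gluster volume start volname

# Status should read "Started" before creating the storage domain.
gluster volume info volname | grep '^Status'
```

These commands require a running glusterd on the nodes, so they are shown as an operational sketch rather than something runnable standalone.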