On 09/25/2013 02:10 AM, Gianluca Cecchi wrote:
Hello,
I'm testing GlusterFS on oVirt 3.3 with Fedora 19 systems.
One engine (ovirt) + 2 nodes (ovnode01 and ovnode02).
Successfully created a gluster volume composed of two bricks (one on
each vdsm node), distributed replicated.
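A minimal sketch of creating a two-brick replicated volume like the one described above, assuming a hypothetical brick path /export/brick1 on each node and a placeholder volume name myvol:

```shell
# Run once from either gluster node; "myvol" and the brick paths
# are placeholders, not names taken from this thread.
gluster volume create myvol replica 2 \
    ovnode01:/export/brick1 ovnode02:/export/brick1

# Check the volume was created as expected.
gluster volume info myvol
```

With only two bricks and replica 2 the volume is effectively a pure replica; it becomes distributed-replicated once more brick pairs are added.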
So it seems the problem is
On 09/25/2013 11:36 AM, Gianluca Cecchi wrote:
qemu-system-x86_64: -drive
On Wed, Sep 25, 2013 at 8:11 AM, Vijay Bellur wrote:
Have the following configuration changes been done?
1) gluster volume set volname server.allow-insecure on
2) Edit /etc/glusterfs/glusterd.vol on all gluster nodes to contain this
line:
option rpc-auth-allow-insecure on
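A minimal sketch of applying those two changes, assuming the volume is named volname (a placeholder) and a Fedora 19 systemd layout:

```shell
# 1) On one gluster node: allow clients connecting from unprivileged
#    ports to access the volume.
gluster volume set volname server.allow-insecure on

# 2) On EVERY gluster node: add the option to the management daemon's
#    config, inside the "volume management" block, if it isn't there yet.
grep -q 'rpc-auth-allow-insecure' /etc/glusterfs/glusterd.vol || \
    sed -i '/end-volume/i \    option rpc-auth-allow-insecure on' \
        /etc/glusterfs/glusterd.vol

# glusterd must be restarted for the glusterd.vol change to take effect.
systemctl restart glusterd
```

The sed invocation is only one way to insert the line; editing /etc/glusterfs/glusterd.vol by hand works just as well.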
On Wed, Sep 25, 2013 at 8:02 AM, Itamar Heim wrote:
Suggestion:
If page
http://www.ovirt.org/Features/GlusterFS_Storage_Domain
is the reference, perhaps it would be better to explicitly specify
that one has to start the created volume before going to add a storage
domain based on the
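The missing step the suggestion refers to can be sketched as (volname is a placeholder):

```shell
# Start the volume before creating the storage domain in the engine.
gluster volume start volname

# Confirm it reports "Status: Started" before proceeding in the UI.
gluster volume info volname | grep Status
```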
On 09/25/2013 11:51 AM, Gianluca Cecchi wrote:
On Wed, Sep 25, 2013 at 8:11 AM, Vijay Bellur wrote:
Have the following configuration changes been done?
1) gluster volume set volname server.allow-insecure on
2) Edit /etc/glusterfs/glusterd.vol on all gluster nodes to contain this
line:
if the domain no longer exists then the problem is not in the host at all.
your problem is that you have db entries, and when you try to remove them
a command is sent to vdsm (since the engine has no idea you removed the
domain), and the removal fails because vdsm returns an error.
No, it doesn't succeed either.
I've also noticed that in the Gluster Swift section everything appears
to me as Not Installed, except memcached, which appears as Down;
could this have something to do with all of this?
The private chat was my bad I think, I only replied to you in some
On 09/24/2013 08:24 PM, Itamar Heim wrote:
On 09/24/2013 06:06 PM, Jason Brooks wrote:
On Tue, 2013-09-24 at 16:15 +0200, Riccardo Brunetti wrote:
Dear ovirt users.
I'm trying to set up an oVirt 3.3 installation using an already existing
OpenStack Glance service as an external provider.
When
Hello,
Was wondering if anyone can shed some light on this subject, as I could not find
any documentation specific to this. I am currently running an oVirt 3.2 cluster
with 5 hosts with local storage. I have a dedicated box running oVirt Engine. I
would like to have the ability to run an
There are plenty of options:
1. Run the engine as a VM under a local libvirt, and cluster the libvirt VM
as a protected service using RHCS (well tested and documented)
2. Use the self-hosted engine (recently announced, upcoming tech)
3. Use any other clustering technology you like best, an engine
Those are good options; however, since I am not running centralized storage, all
my hosts are in their own data centers with local storage only. That's
why I have two standalone machines to run ovirt on. I'm guessing this is not
recommended? If not, is there a way to allow local storage and
Yeah, that was along the lines of what I was thinking. Just wanted to make sure
there wasn't an official way to do it as well.
Thanks!
Doug Bishop
Sr. Systems Engineer
Cell: 678-848-6658
dbis...@controlscan.com
Controlscan, Inc.
11475 Great Oaks Way, Suite 300
Alpharetta, GA 30022
The official way is to have a SAN or NFS in place, so you can allocate a
LUN for the engine. Multiple hosts/DCs with local storage are quite the
corner case IMO.
On Wed, Sep 25, 2013 at 2:31 PM, Doug Bishop dbis...@controlscan.com wrote:
Yeah, that was along the lines of what I was thinking.
Yes. However, we are using it for dev/QA environments in house. We plan to
roll RHEV into production soon and will be using backend storage.
Doug Bishop
Sr. Systems Engineer
Cell: 678-848-6658
dbis...@controlscan.com
Controlscan, Inc.
11475 Great Oaks Way, Suite 300
Alpharetta, GA 30022
Sounds like a good, solid plan to me
On Wed, Sep 25, 2013 at 2:34 PM, Doug Bishop dbis...@controlscan.com wrote:
Yes. However, we are using it for dev/QA environments in house. We plan
to roll RHEV into production soon and will be using backend storage.
Doug Bishop
Sr. Systems Engineer
Hi,
I'm not able to import a VM from ESXi:
[root@ovirt-mgmt ~]# LIBGUESTFS_TRACE=1 LIBGUESTFS_DEBUG=1 virt-v2v \
    -ic esx://10.11.12.123/?no_verify=1 \
    -o rhev -os 10.11.12.222:/var/lib/exports/export \
    --network ovirtmgt MultiDektec
MultiDektec_MultiDektec: 100%
On Tue, Sep 24, 2013 at 12:04:14PM -0400, Federico Simoncelli wrote:
- Original Message -
From: Dan Kenigsberg dan...@redhat.com
To: Dead Horse deadhorseconsult...@gmail.com
Cc: users@ovirt.org users@ovirt.org, vdsm-de...@fedorahosted.org,
fsimo...@redhat.com, aba...@redhat.com
I noticed that too; I wasn't sure if it was a bug or just how I had set up
my NFS share.
There were three steps I did to remove the disk images. I'm sure there's a
much easier solution:
I found the easiest way (graphically) was to go to your
https://ovirtengine/api/disks and do a search for the
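The same lookup can be scripted against the engine's REST API. A sketch, assuming placeholder credentials (admin@internal:password) and a hypothetical disk alias mydisk:

```shell
# List disks matching an alias; -k skips TLS verification for a
# self-signed engine certificate (credentials and alias are placeholders).
curl -k -u 'admin@internal:password' \
    'https://ovirtengine/api/disks?search=alias%3Dmydisk'

# Take the disk id from the XML response, then delete that disk:
curl -k -u 'admin@internal:password' -X DELETE \
    'https://ovirtengine/api/disks/<disk-id>'
```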