I am having some issues wrapping my head around this, but what I am trying
to set up is an A/B testing environment with a 3-node cluster. Each node has
2 NICs: one for ovirtmgmt and one for the VLAN-tagged A/B network. I guess
what I am trying to understand is whether oVirt is tagging the VLANs I set
up and is
Using Assaf's information I was able to get my A/B network working. I put a
quick write-up about it here:
http://izen.ghostpeppersrus.com/setting-up-networks/
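For what it's worth, the host-side effect of oVirt's VLAN tagging can be checked by hand. This is only a sketch: the interface name and VLAN ID below are assumptions, not from the thread.

```shell
# Hedged sketch: a VLAN-tagged logical network shows up as a VLAN sub-device
# on the host NIC (e.g. eth1.100 for VLAN ID 100 on eth1).
ip -d link show | grep -B1 'vlan'   # list existing VLAN devices and their IDs
# Creating one manually for comparison (requires root; names are assumed):
# ip link add link eth1 name eth1.100 type vlan id 100
```

If the tagged sub-device exists on the host NIC carrying the network, the tagging is being done on the host side rather than on the switch.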
On Feb 6, 2014 3:50 AM, Assaf Muller amul...@redhat.com wrote:
- Original Message -
From: Pat Pierson ihasn2...@gmail.com
Looks like one of your peers is not connected anymore; depending on your
gluster setup this could be harmless, as long as you replace it soon.
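A minimal way to check the peer state from any node in the pool (a sketch, not taken from the thread):

```shell
# Hedged sketch: confirm which peers the trusted pool still sees as connected.
gluster peer status   # look for "State: Peer in Cluster (Disconnected)"
gluster pool list     # same information, one line per peer
```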
On Thu, Feb 27, 2014 at 1:54 AM, yfw...@daicy.net yfw...@daicy.net wrote:
hi,
it is my glusterfs log,
[2014-02-27 10:44:35.565367] I
I am running oVirt 3.3.2 and gluster 3.4 and recently had a pretty
catastrophic failure of my small 3-node cluster. Long story short, I lost
the disk of a VM and decided to delete it (start over with it), but it now
magically re-appears as external-vmname. When I attempt to delete it again,
it deletes, but
Looks like the qemu process is still running on your host.
Can you see which host the VM is running on? Can you try to log in to the
host and check if there are any qemu processes running there?
regards,
Maor
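The check Maor describes might look like this on the host; the VM name is a placeholder:

```shell
# Hedged sketch: look for a leftover qemu process for the VM on the host.
# Replace "vmname" with the actual VM name; the bracket trick keeps grep
# from matching its own command line.
ps axww | grep -i '[q]emu.*vmname'
```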
On 06/17/2014 01:32 PM, Pat Pierson wrote:
I am running ovirt 3.3.2 and gluster 3.4
I am running oVirt 3.4.3 on a Fedora 19 manager and have one node, also
running Fedora 19. I am attempting to get cloud-init to work on a CentOS
VM but I am running into issues. I can see in the log where it mounts
/dev/sr1 to /tmp/tmp_random_location and where it reads the meta-data.json
and
Shahar,
Thank you for your response. Version is cloud-init-0.7.4-2.el6.noarch
On Sun, Sep 14, 2014 at 3:12 AM, Shahar Havivi shah...@redhat.com wrote:
On 11.09.14 14:06, Pat Pierson wrote:
I am running ovirt 3.4.3 on a Fedora 19 manager and have 1 node running
Fedora 19 as well. I am
...@redhat.com wrote:
On 16.09.14 08:37, Pat Pierson wrote:
Shahar,
Thank you for your response. Version is
cloud-init-0.7.4-2.el6.noarch
On Sun, Sep 14, 2014 at 3:12 AM, Shahar Havivi shah...@redhat.com
wrote:
On 11.09.14 14:06, Pat Pierson wrote:
I am running ovirt 3.4.3
Agreed, thanks for the input.
On Tue, Sep 16, 2014 at 11:02 AM, Shahar Havivi shah...@redhat.com wrote:
On 16.09.14 10:41, Pat Pierson wrote:
I had a feeling you were going to ask that; I just finished installing
Fedora 19 from an ISO image and tested. The Fedora VM's cloud-init is 0.7.2.
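To debug the guest side by hand, the cloud-init payload oVirt attaches can be inspected directly. The device name follows the log excerpt earlier in the thread; the mount point and the rest are an assumed sketch:

```shell
# Hedged sketch: inspect the cloud-init payload oVirt attaches as a CD.
# /dev/sr1 comes from the log excerpt; the mount point is arbitrary.
rpm -q cloud-init                  # confirm the guest's cloud-init version
mkdir -p /mnt/configdrive
mount -o ro /dev/sr1 /mnt/configdrive
find /mnt/configdrive -type f      # should include the meta-data/user-data files
umount /mnt/configdrive
```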
I am attempting to use Snort as an IDS on my network. Currently I have all
traffic on my router uplink port mirrored to a port that is plugged into an
unused port on an oVirt node. I have created a network that only has
access to that port and assigned that network to my Snort VM. I am able to
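Inside the Snort VM, a minimal invocation on the mirrored interface might look like this; the interface name and config path are assumptions, not from the thread:

```shell
# Hedged sketch: run Snort as an IDS on the interface receiving mirrored
# traffic. Mirror ports need promiscuous mode to see foreign frames.
ip link set dev eth1 up promisc on
snort -i eth1 -c /etc/snort/snort.conf -A console
```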
AM, Itamar Heim ih...@redhat.com wrote:
On 09/29/2014 04:24 PM, Antoni Segura Puimedon wrote:
- Original Message -
From: Pat Pierson ihasn2...@gmail.com
To: users@ovirt.org
Sent: Monday, September 29, 2014 3:07:53 PM
Subject: [ovirt-users] oVirt and Snort
I am attempting to use
Is there a way to pass oVirt user login details (the user name) to the VM
in the form of an environment variable? Would that be something cloud-init
or ovirt-guest-agent handles?
--
Patrick Pierson
I am in the process of upgrading my cluster while at the same time moving
to gluster. My engine is version 3.3.1 and I have an NFS cluster running in
3.1 compatibility mode that I am moving to a 3.3 GlusterFS cluster. Host3
runs the engine and is on the NFS cluster, while host2 is running a
host re-install before moving the host to the gluster cluster; otherwise
vdsm-gluster would not have been installed.
Thanks,
Kanagaraj
On 12/12/2013 05:43 AM, Pat Pierson wrote:
I am in the process of upgrading my cluster while at the same time moving
to gluster. My engine is version 3.3.1 and I have a NFS
Kanagaraj
On 12/12/2013 05:28 PM, Pat Pierson wrote:
Kanagaraj,
Thank you for the response. vdsm-gluster-4.13.0-11.el6.noarch is
installed on the host that is currently in the gluster cluster, as well as
on the host I am trying to move to the gluster cluster. It is not however
You can set a static FQDN in /etc/hosts if you don't have a DNS server;
however, if you do this, set the same FQDNs on all hosts, for each host:
192.168.0.1 node1.test.com node1
192.168.0.2 node2.test.com node2
192.168.0.3 node3.test.com node3
Use that FQDN for your engine/nodes on
all machines.
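A way to sanity-check the format without touching the real /etc/hosts (scratch file only; the names are the ones from the example above):

```shell
# Hedged sketch: the example entries, written to a scratch file so the
# format can be verified before editing the real /etc/hosts on each machine.
cat > /tmp/hosts.test <<'EOF'
192.168.0.1 node1.test.com node1
192.168.0.2 node2.test.com node2
192.168.0.3 node3.test.com node3
EOF
# Each line: IP, FQDN, short alias; the same three lines go on every host.
grep -c 'node[1-3].test.com' /tmp/hosts.test   # prints 3
```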
On Fri, Jan 10, 2014 at 1:20 PM, David Li david...@sbcglobal.net wrote:
Is this the /etc/hosts file on the engine machine or the node machine?
--
*From:* Pat Pierson ihasn2...@gmail.com
*To:* David Li david...@sbcglobal.net
*Cc:* users
Nathan,
Did you find a workaround for this? I am running into the same issue.
Is there a way to force VDSM to see gluster? Or a way to manually run the
search so I can see why it fails?
nathan stratton | vp technology | broadsoft, inc | +1-240-404-6580 | www.broadsoft.com
On Fri, Jun
I am having a very strange issue with oVirt 3.5.1 and gluster. I have a
gluster volume with 4 nodes. One node is specifically set as the node
hosting the gluster volume in my oVirt cluster; however, today it died. I
tried working around it by modifying the hostname in the entry to another
node
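Before editing hostnames, the volume's brick layout and health can be checked directly from any surviving node; the volume name below is a placeholder:

```shell
# Hedged sketch: inspect a gluster volume after a node failure.
# "data" is a placeholder volume name.
gluster peer status          # which peers are still connected
gluster volume info data     # the bricks (host:/path) backing the volume
gluster volume status data   # brick and self-heal daemon health
```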