Hello - I am attempting to install ovirt hosted engine (engine running as a
VM in a cluster). I have configured 10 servers at a metal-as-a-service
provider. The server wiring has been configured to our specifications,
however, the MaaS provider requires bond0 to be set up beforehand, and the
two
with absolutely zero issues
The +1 is knowing that if I upgrade oVirt on oVirt Node, I will (hopefully)
have a much lower chance of breaking oVirt during an upgrade
On Tue, Feb 8, 2022 at 9:04 AM Charles Stellen wrote:
> Dear Ovirt Hackers,
>
> sorry: accidentally sent to de...@ovitr.org
>
>
rver responding with correct IP for both
We are stuck there.
We tried:
- connecting to the terminal/VNC of the running VM "HostedEngine" to figure
out the internal network issue, with no success
Any suggestion on how to "connect" into the newly deployed, up-and-running
HostedEngine VM to investigate?
I know the subject is a bit wordy but let me try to explain
I have a 4.1 installation that I am migrating to 4.4. I placed the storage
domain into maintenance and then rsync'ed it to the storage I am using for
4.4. In 4.4 I import the domain and it comes in fine; however, when trying
to import
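The copy step described above can be sketched like this; the temp directories are stand-ins for the real storage-domain paths, which are elided here:

```shell
# Minimal stand-in for the storage-domain copy: temp dirs replace the real
# source and destination paths (assumptions for illustration only).
src=$(mktemp -d); dst=$(mktemp -d)
echo demo > "$src/metadata"            # stand-in for domain metadata
if command -v rsync >/dev/null 2>&1; then
    # -a preserves ownership/permissions, --sparse keeps disk images sparse
    rsync -a --sparse "$src/" "$dst/"
else
    cp -a "$src/." "$dst/"             # fallback where rsync is unavailable
fi
copied=$(cat "$dst/metadata")
rm -rf "$src" "$dst"
```

When moving a real domain, place it in maintenance first (as described above) so the metadata is quiescent during the copy.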
The linked report was from me. The port that was failing for me was
originally 5900. I deleted the rule from firewalld but then ran the
installation again and it errored on 6900. I had to remove both. Gluster
was configured during the HCI method
*From:* Dominique Deschênes
*Sent:* Monday,
Yep! I thought the error was reporting the port as already configured
inside the seed engine rather than on the actual host. I deleted the firewalld
6900 port addition and everything seems to be flowing through
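A hedged sketch of the cleanup described in this thread: removing the leftover 5900/6900 firewalld entries. The zone name "public" is an assumption, and the commands are guarded so the snippet is inert where firewall-cmd is missing or unprivileged.

```shell
# Remove the leftover console ports from the permanent firewalld config,
# then reload. Guarded: a no-op where firewall-cmd is absent/unprivileged.
if command -v firewall-cmd >/dev/null 2>&1; then
    for p in 5900/tcp 6900/tcp; do
        firewall-cmd --permanent --zone=public --remove-port="$p" || true
    done
    firewall-cmd --reload || true
fi
status=ok
```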
On Wed, May 12, 2021 at 1:36 PM Patrick Lomakin
wrote:
> Hello. I know this error.
I also just upgraded to 4.6.6.1 and it is still occurring
On Wed, May 12, 2021 at 12:36 PM Charles Kozler
wrote:
> Hello -
>
> Deployed fresh ovirt node 4.4.6 and the only thing I did to the system was
> configure the NIC with nmtui
>
> During the gluster install the i
Hello -
Deployed fresh ovirt node 4.4.6 and the only thing I did to the system was
configure the NIC with nmtui
During the gluster install the install errored out with
gluster-deployment-1620832547044.log:failed: [n2] (item=5900/tcp) =>
{"ansible_loop_var": "item", "changed": false, "item":
the way.
Gratefully,
Charles
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct:
https://www.ovirt.org/community/about
rade --enablerepo="baseos" --enablerepo="appstream"
--enablerepo="extras" --enablerepo="ha" --enablerepo="plus"
Cleaned and re-ran Ansible. Still receiving the same (below). As always, if
you or anyone else has any ideas for troubl
my storage/Gluster
network (no switch)? /etc/nsswitch.conf is set to "files dns" and pings all
work, but dig does not resolve the storage names (I understand this is to be
expected).
Again, as always, any pointers or wisdom is greatly appreciated. I am out of
ideas.
Thank you!
Charles
I will check ‘/var/log/gluster’. I had commented out the filter in
‘/etc/lvm/lvm.conf’; if I don’t, creation of the volume groups fails
because the drives are excluded by the filter. Should I not be commenting it
out, but modifying it in some way?
Thanks!
Charles
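One alternative to commenting the filter out is writing one that accepts only the brick devices. A sketch, assuming the NVMe device names mentioned elsewhere in these threads:

```shell
# Accept only the Gluster brick devices, reject everything else.
# The device pattern is an assumption; match it to your actual drives.
filter_line='filter = ["a|^/dev/nvme[0-9]n1$|", "r|.*|"]'
echo "$filter_line"                    # line to place in /etc/lvm/lvm.conf
# sanity-check the accept regex against a sample device name
if printf '%s' "/dev/nvme0n1" | grep -Eq '^/dev/nvme[0-9]n1$'; then
    match=yes
else
    match=no
fi
```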
On Tue, Jan 12, 2021 at 12:11 AM
with oVirt Node v4.4
ISO. Ping from each host to the other two works for both mgmt and storage
networks. I am using DHCP for management network, hosts file for direct
connect storage network.
Thanks again for your help,
Charles
On Mon, Jan 11, 2021 at 10:03 PM Ritesh Chikatwar
wrote:
>
>
volume tasks
Volume Name: vmstore
Type: Replicate
Volume ID: 27c8346c-0374-4108-a33a-0024007a9527
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: host1.domain.com:/gluster_bricks/vmstore/vmstore
Brick2: host2.domain.com:/gluster_bricks/vmst
kernel module kvdo not installed\nvdo: ERROR - modprobe: FATAL: Module
kvdo not found in directory /lib/modules/4.18.0-240.1.1.el8_3.x86_64\n"
Any further suggestions are MOST appreciated.
Thank you and respectfully,
Charles
"gluster peer status"
and "gluster peer probe" are successful?
Thanks again, I will update after rebuilding with oVirt Node v4.4.4
Respectfully,
Charles
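The peer checks referred to above can be run from any node before redeploying; the snippet is guarded so it is inert without the gluster CLI:

```shell
# Verify peer connectivity before the HCI deploy; harmless where the
# gluster CLI is not installed.
if command -v gluster >/dev/null 2>&1; then
    gluster peer status || true
    gluster pool list || true
fi
checked=done
```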
nfs.disable: on
performance.client-io-threads: on
###
[root@fmov1n1 conf.d]#
Respectfully,
Charles
"end": "2020-12-18 19:50:53.767864", "item": {"arbiter": 0, "brick":
"/gluster_bricks/vmstore/vmstore", "volname": "vmstore"},
"msg": "non-zero return code", "rc": 107, "start": "2020-12-18 19:50:43.654661",
"changed": true, "cmd": ["gluster", "volume", "heal", "vmstore",
"granular-entry-heal", "enable"], "delta": "0:00:10.113203"
I have been asked if multipath has been disabled for the cluster's nvme drives.
I have not enabled or disabled multipath for the nvme drives. In Gluster
deploy Step 4 - Bricks I have checked "Multipath Configuration: Blacklist
Gluster Devices." I have not performed any custom setup of nvme
using
- pvcreate /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1 /dev/nvme4n1
--test
I have had to comment out the filter in /etc/lvm/lvm.conf or else all drives
are excluded by filter.
Thank you so very much for your response and any additional insight you may
have!
Respectfully,
Charles
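One way to confirm multipath has not claimed the NVMe drives before running pvcreate is a quick inspection; the device name below is an assumption and the commands are guarded:

```shell
# Confirm device-mapper multipath has not claimed the NVMe drives
# before creating PVs on them (device name is an assumption).
if command -v multipath >/dev/null 2>&1; then
    multipath -ll || true              # should list no maps over nvme*
fi
if command -v lsblk >/dev/null 2>&1; then
    lsblk -o NAME,TYPE /dev/nvme0n1 2>/dev/null || true
fi
verified=yes
```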
On Fri, Dec 11, 2020 at 15:49 Charles Kozler <
> char...@fixflyer.com> wrote:
>
>> CentOS was the downstream of RHEL but has now become the upstream
>>
>> I guess oVirt was always downstream as well - yes?
>>
>
> No. oVirt is oVirt. It's dow
CentOS was the downstream of RHEL but has now become the upstream
I guess oVirt was always downstream as well - yes?
If so then yes, I can't see much changing in the ways of oVirt
On Fri, Dec 11, 2020 at 2:59 AM Sandro Bonazzola
wrote:
>
>
> On Thu, Dec 10, 2020 at 21:5
I guess this is probably a question for all current open source projects
that Red Hat runs, but -
Does this mean oVirt will effectively become a rolling release type
situation as well?
How exactly is oVirt going to stay open source and stay in cadence with all
the other updates happening around
>
> > What do you mean by that? Usually (I am not a native English speaker),
> > "red herring" means, for me, "something that made me look at the error
> > not in the place where it actually occurred". In this case:
>
> Yep! The installer would die out after failing at the SSO step. In the log
>
I'd like to share this with the list because it's something that I changed
for convenience in .bashrc, but it had a not-so-obvious rippling impact on
the oVirt self-hosted installer. I can imagine a few others doing this
too, and I'd rather save their future selves hours of Google time.
Every install failed
this fix?
Thank you very much,
Diamond Tours, Inc.
Charles Lam
13100 Westlinks Terrace, Suite 1, Fort Myers, FL 33913-8625
O: 239.437.7117 | F: 239.790.1130 | Cell: 239.227.7474
c...@diamondtours.com<mailto:c...@diamondtours.com>
Thank you very much, Vinícius, for the clarification; I will try it and report back.
Sincerely,
Charles
On Tue, Feb 4, 2020 at 5:32 PM Vinícius Ferrão
wrote:
> Hi Charles, I have never done it on Gluster. But my hosted engine runs on a
> VLAN too, on a separate network on top of a LACP bond with the ovi
Googling "ovirt windows agent" this is the first link that pops up:
http://www.ovirt.org/documentation/internal/guest-agent/guest-agent-for-windows/
This doc seems non-intuitive and overcomplicated.
Specifically, the Red Hat documentation that is 4 links below is as simple as
"install this package
Complete shutdown of 4.3.3 cluster, change cluster from 4.2 to 4.3
compatibility and upgrade of all hosts to current 4.3.3 with patches has
seemingly fixed everything.
Configuration with versions
Next email will have log files.
2 sites
First site: Bayview
4 nodes BL460 gen9 with 4 x 10G nics
Nodes 1-3 have not been changed since the 4.3.2 upgrade. These nodes have the
network sync issue and cannot migrate VMs.
OS Version:
RHEL - 7 - 6.1810.2.el7.centos
OS
Hi everyone,
We have had a pair of oVirt clusters at work, starting with 3.x, which I
replaced last year with a 4.x cluster on new machines. 4.2 worked great, but
when I upgraded to 4.3.2, and now .3, I immediately ran into host networking
issues resulting in hung VMs or being totally unable to migrate them.
Hello -
I am kicking around different ideas of how I could achieve this
I have been bitten, hard, by in-place upgrades before, so I am not really
wanting to do this unless it's a complete last resort... and even then I'm
iffy.
Really, the only configuration I have is 1 VDSM hook and about 20
n, I am not that knowledgeable, but I have not found the directions on
the oVirt website imaging Node to a USB stick to work for me, nor have I
had success (with Node only) with the usually great Rufus.
Sincerely,
Charles
On Mon, May 7, 2018 at 4:37 PM Abdelkarim ZANNI <ab.za...@numea.ma> wr
Joop <jvdw...@xs4all.nl>:
>
>> On 22-3-2018 10:17, Yaniv Kaul wrote:
>>
>>
>>
>> On Wed, Mar 21, 2018 at 10:37 PM, Charles Kozler < <ckozler...@gmail.com>
>> ckozler...@gmail.com> wrote:
>>
>>> Hi All -
>>>
>>> Rece
.nl> wrote:
> On 22-3-2018 10:17, Yaniv Kaul wrote:
>
>
>
> On Wed, Mar 21, 2018 at 10:37 PM, Charles Kozler < <ckozler...@gmail.com>
> ckozler...@gmail.com> wrote:
>
>> Hi All -
>>
>> Recently did this and thought it would be worth documenting.
Hi All -
Recently did this and thought it would be worth documenting. I couldn't find
any solid information on vSRX with KVM outside of flat KVM. This outlines
some of the things I hit along the way and how to fix them. This is my one
small way of giving back to such an incredible open source tool.
Did you set up fencing?
I've also seen this behavior with a stressed CPU and the NMI watchdog in the
BIOS rebooting a server, but that was on FreeBSD. I have not seen it on Linux.
On Nov 25, 2017 2:07 PM, "Jonathan Baecker" wrote:
> Hello community,
>
> yesterday evening one of our nodes
You might be better off rebuilding. It shouldn't be that hard, albeit
slightly time-consuming. Is this a production environment or is it a
lab/test area?
Two possible ways for this, I think...
1.) Not sure if HE will use it but if you right click on the HostedEngine
VM in the web UI and click
hello,
What would be the process to go from current 4.1 to 4.2-pre?
I have been demoing 4.2 on my own lab environment and the WebUI, for my
user base, is entirely more intuitive than the previous
My 4.1 is relatively new so there isn't much config done to it; that said,
I am wondering what it would
Most answers reside in agent.log under /var/log/ovirt-hosted-engine-ha/. Tail
that for a bit and see what pops up. You can also review the vdsm log under
/var/log/vdsm/
I've found that looking at these two logs and just trying a few things
yields results
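That workflow can be sketched as below; the paths are the usual defaults on a hosted-engine host and may differ on your installation, so the loop is guarded:

```shell
# Skim the HA agent and vdsm logs; silently skips files that are absent.
for f in /var/log/ovirt-hosted-engine-ha/agent.log /var/log/vdsm/vdsm.log; do
    if [ -r "$f" ]; then
        tail -n 20 "$f"
    fi
done
scanned=done
```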
On Sat, Oct 7, 2017 at 5:18 PM, Wesley Stewart
I believe you would accomplish this by setting a VM to be highly available
(like the engine). The engine then makes sure this VM is up on at least one
node through lease agreements (IIRC). In either case, I think this is what
you want.
On Wed, Oct 4, 2017 at 10:25 AM, Chris Adams
I did a 3.6 to 4.1 like this. I moved all of my VMs to a new storage domain
(the other was hyperconverged gluster) and then took a full outage, shut
down all of my VMs, detached from 3.6, and imported on 4.1. I had no issues
other than expected mac address changes, but I think you can manually
Thank you for clearing this up for me, everyone. My concern was that something
like the export domain wasn't going to exist and that it was just going to be
deprecated with no alternative. Glad to hear all the news about the SD
On Mon, Oct 2, 2017 at 8:31 AM, Pavel Gashev wrote:
> Maor,
>
>
Hello,
I recently read on this list, from a Red Hat member, that the export domain
is either being deprecated or being considered for deprecation.
To that end, can you share details? Can you share any notes/postings/bz's
that document this? I would imagine something like this would be discussed
in larger
gt;>>>> bha...@synergysystemsindia.com> wrote:
>>>>>
>>>>>> I am not using any DNS server. I have made entries in /etc/hosts for
>>>>>> all Nodes and for engine VM also.
>>>>>>
>>>>>> Regrards
So I tried to set this up. I configured an untagged network and attached it
to all of my hosts. I set up a VM and correctly configured a trunk port, but
no traffic is passed.
Can anyone assist?
On Wed, Sep 20, 2017 at 9:09 AM, Charles Kozler <ckozler...@gmail.com>
wrote:
> Hello,
>
> I
https://www.ovirt.org/community/user-stories/users-and-providers/
BobCares is based out of India, so be aware of that. They seemed responsive,
helpful, eager for business, and generally knowledgeable in initial talks.
I'd recommend you check out CornerStone. They are US-based (I think NC
or SC).
Hello,
I have seen mixed results for this search on this list so I'd like to clear
it up
I have a VM that I need to configure with a trunk port so that all of my
VLANs can be configured on it but not as separate NICs (eth0, eth1, eth2,
etc)
I have seen on this list about a year ago someone said
Yedidyah - yes, updating would be best of course. I've had this HIDS running
for a little over a year and never saw this before, so I was a little
wary.
On Sun, Sep 17, 2017 at 3:00 AM, Yedidyah Bar David <d...@redhat.com> wrote:
> On Fri, Sep 15, 2017 at 5:12 PM, Charles Kozler
Thanks for confirming
On Sun, Sep 17, 2017 at 11:05 AM, Christopher Cox <c...@endlessnow.com>
wrote:
> On 09/17/2017 02:00 AM, Yedidyah Bar David wrote:
>
>> On Fri, Sep 15, 2017 at 5:12 PM, Charles Kozler <ckozler...@gmail.com>
>> wrote:
>>
>>> He
from boot I still get a brief hang on qxl, but it's about 5-10 seconds
and then it goes to login
Where can I follow up on reporting issues with glance images?
On Fri, Sep 15, 2017 at 11:57 PM, Charles Kozler <ckozler...@gmail.com>
wrote:
> I have tried this multiple times
>
> I imported glan
I have tried this multiple times.
I imported the latest CentOS 7 glance image as a template, made a VM from it,
and rebooted it multiple times while testing some apps. Reboot was always 30
seconds or less.
I did a full yum update and then rebooted.
On reboot I am hung up at "[drm] Initialized qxl 0.1.0 20120117
, Charles Kozler <ckozler...@gmail.com>
wrote:
> I received an alert from OSSEC HIDS that a package was installed at 00:59.
> Nobody uses this infrastructure but me
>
> Upon investigation I find this
>
> Sep 14 00:59:18 ovirthost1 sshd[93263]: Accepted publickey for roo
Sorry, I meant "remove the static configuration of 1G full duplex and set
the 10G back to auto-negotiation when ready"
On Thu, Sep 14, 2017 at 6:19 PM, Charles Kozler <ckozler...@gmail.com>
wrote:
> You could, I believe, turn off auto-negotiation and set it 1G full dup
You could, I believe, turn off auto-negotiation and set it to 1G full duplex
on both sides, then add the new links in, remove the old 1G links when ready,
and then re-enable auto-negotiation on the 10G.
But it would ultimately be much, much easier to create a new LAG and then
take an outage on oVirt.
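A sketch of that link-speed juggling with ethtool; the interface name is an assumption, and the commands only run with the tool present and root privileges:

```shell
# Pin 1G/full-duplex while migrating links, then restore auto-negotiation.
nic=eth0                               # assumption: use your real interface
if command -v ethtool >/dev/null 2>&1 && [ "$(id -u)" = 0 ]; then
    ethtool -s "$nic" speed 1000 duplex full autoneg off || true
    # ...move cabling / LAG membership here...
    ethtool -s "$nic" autoneg on || true
fi
planned=ok
```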
I received an alert from OSSEC HIDS that a package was installed at 00:59.
Nobody uses this infrastructure but me
Upon investigation I find this
Sep 14 00:59:18 ovirthost1 sshd[93263]: Accepted publickey for root from
10.0.16.50 port 50197 ssh2: RSA
Hello -
I had set up my server somewhat before installing oVirt. I left em1 alone
for ovirtmgmt, but I had bonded two NICs on a separate card and named them
storage0 and storage1 accordingly.
Everything was set up fine, but when I went to add more networks for em3+em4
in LACP named bond0, the web UI
p 12, 2017 at 7:04 PM, Charles Kozler <ckozler...@gmail.com>
wrote:
> Hey All -
>
> So I haven't tested this yet, but what I do know is that I did set the
> backupvol option when I added the data gluster volume; however, the mount
> options shown by 'mount -l' do not show it as being used
>
da
> like the parity drive for a RAID-4 array). I was also under the impression
> that one replica + Arbitrator is enough to keep the array online and
> functional.
>
> --Jim
>
> On Fri, Sep 1, 2017 at 5:22 AM, Charles Kozler <ckozler...@gmail.com>
> wrote:
>
OVS in
> oVirt.
>
> I noticed on your deployment machines 2 and 3 have the same IP. Might want
> to fix that before deploying
>
> Happy trails
> ~D
>
>
> On Tue, Sep 12, 2017 at 2:00 PM, Tailor, Bharat <
> bha...@synergysystemsindia.com> wrote:
>
>> Hi C
Interestingly enough, I literally just went through this same thing with a
slight variation.
Note on the below: I am not sure if this would be considered best practice
or good for long-term support, but I made do with what I had.
I had 10Gb cards for my storage network but no 10Gb switch,
at 7:38 PM, Charles Kozler <ckozler...@gmail.com> wrote:
> Jim -
>
> One thing I noticed is that, by accident, I used
> 'backupvolfile-server=node2:node3' which is apparently a supported
> setting. It would appear, by reading the man page of mount.glusterfs, the
> synt
, will follow up with that test
On Fri, Sep 1, 2017 at 7:20 PM, Charles Kozler <ckozler...@gmail.com> wrote:
> Jim -
>
> here is my test:
>
> - All VM's on node2: hosted engine and 1 test VM
> - Test VM on gluster storage domain (with mount options set)
> - hosted e
install time).
>
> The "use managed gluster" checkbox is NOT checked, and if I check it and
> save settings, next time I go in it is not checked.
>
> --Jim
>
> On Fri, Sep 1, 2017 at 2:08 PM, Charles Kozler <ckozler...@gmail.com>
> wrote:
>
>> @ Jim -
o I'm still not opposed to making it a
> full replica.
>
> Did I miss something here?
>
> Thanks!
>
> On Fri, Sep 1, 2017 at 11:59 AM, Charles Kozler <ckozler...@gmail.com>
> wrote:
>
>> These can get a little confusing but this explains it best:
>> https://gluster
k-read: off
>> performance.readdir-ahead: on
>> server.allow-insecure: on
>> [root@ovirt1 ~]#
>>
>>
>> all 3 of my brick nodes ARE also members of the virtualization cluster
>> (including ovirt3). How can I convert it into a full replica instead of
s enough to keep the array online and
> functional.
>
> --Jim
>
> On Fri, Sep 1, 2017 at 5:22 AM, Charles Kozler <ckozler...@gmail.com>
> wrote:
>
>> @ Jim - you have only two data volumes and lost quorum. Arbitrator only
>> stores metadata, no actual files. So ye
a set / brick
detection.
I will test and let you know
Thanks!
On Fri, Sep 1, 2017 at 8:52 AM, Kasturi Narra <kna...@redhat.com> wrote:
> Hi Charles,
>
> One question, while configuring a storage domain you are saying
> "host to use: " node1, then in the conne
.12:192.168.8.13
>>
>> I had an issue today where 192.168.8.11 went down. ALL VMs immediately
>> paused, including the engine (all VMs were running on host2:192.168.8.12).
>> I couldn't get any gluster stuff working until host1 (192.168.8.11) was
>> restored.
>>
hu, Aug 31, 2017 at 3:30 PM, Charles Kozler <ckozler...@gmail.com>
wrote:
> So I've tested this today and I failed a node. Specifically, I setup a
> glusterfs domain and selected "host to use: node1". Set it up and then
> failed that VM
>
> However, this did not wor
es in all the nodes .
>
> [1] 'mnt_options=backup-volfile-servers=:'
>
> On Thu, Aug 31, 2017 at 5:54 PM, Charles Kozler <ckozler...@gmail.com>
> wrote:
>
>> Hi Kasturi -
>>
>> Thanks for feedback
>>
>> > If cockpit+gdeploy plugin would be hav
t of ovirt is. It would be helpful to
have some checks along the way for this condition if it's a blocker for
functionality.
On Thu, Aug 31, 2017 at 9:09 AM, Charles Kozler <ckozler...@gmail.com>
wrote:
> Hello,
>
> I recently installed ovirt cluster on 3 nodes and saw that I cou
Hello,
I recently installed an oVirt cluster on 3 nodes and saw that I could only
migrate in one direction.
Reviewing the logs I found this
2017-08-31 09:04:30,685-0400 ERROR (migsrc/1eca84bd) [virt.vm]
(vmId='1eca84bd-2796-469d-a071-6ba2b21d82f4') unsupported configuration:
Unable to find security driver
meter called backup-volfile-servers="h1:h2" and if one
> of the gluster node goes down engine uses this parameter to provide ha /
> failover.
>
> Hope this helps !!
>
> Thanks
> kasturi
>
>
>
> On Wed, Aug 30, 2017 at 8:09 PM, Charles Kozler <ckozler...@gma
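The backup-volfile-servers option quoted above takes two forms; a sketch with placeholder hostnames (node1/node2/node3 are assumptions):

```shell
# backup-volfile-servers lets the glusterfs client fail over to other
# nodes when the primary volfile server is down.
opts="backup-volfile-servers=node2:node3"
echo "mount -t glusterfs -o $opts node1:/engine /mnt/engine"  # manual mount form
echo "mnt_options=$opts"               # storage-domain option form in oVirt
```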
Hello -
I have successfully created a hyperconverged hosted-engine setup consisting
of 3 nodes: 2 for VMs and the third purely for storage. I manually
configured it all; I did not use oVirt Node or anything, and built the
gluster volumes myself.
However, I noticed that when setting up the hosted
file a bug.
One last question: Data for the Storage section of the Global Utilization part
of the dashboard is empty. We are using Ceph via Cinder for our storage. Is
that the issue?
Side note: we are now being bitten by this bug -
https://bugzilla.redhat.com/show_bug.cgi?id=1465825
Than
ches
have not provided me with any further insight. Has anyone else experienced
this issue? Has anyone else resolved this issue? What should I be looking at
to start debugging this issue (a section of the database perhaps...)?
Thanks,
Charles
tell me what is the exact version of the engine ?
[1] https://bugzilla.redhat.com/show_bug.cgi?id=1395793
On Thu, May 11, 2017 at 10:29 PM, Charles Tassell
<char...@islandadmin.ca <mailto:char...@islandadmin.ca>> wrote:
Sure, it's pretty big so I've put it online for download
t;> expert in this area.
>>
>
> Why is an export domain better than detach and attach a storage domain?
> Y.
>
>
>>
>> On Wed, May 31, 2017 at 4:08 PM, Charles Kozler <ckozler...@gmail.com>
>> wrote:
>>
>>> I couldnt find a definitive o
I couldn't find a definitive answer on this, so I would like to inquire here.
I have gluster on my storage backend, exporting the volume from a single
node via NFS.
I have a DC on 4.0 and I would like to upgrade to 4.1. I would ideally like
to take one node out of the cluster and build a 4.1 datacenter.
are you using ?
On Thu, May 11, 2017 at 9:30 PM, Charles Tassell
<char...@islandadmin.ca <mailto:char...@islandadmin.ca>> wrote:
Just as an update, I created a new VM and had the same issue: the
disk remains locked. So I then added a new data store (this one
iSCSI not NFS
[1395]: s5 host 2 1 2110065
4d31313f-b2dd-4368-bf31-d39835e10afb.ovirt730-0
On 2017-05-11 10:09 AM, Charles Tassell wrote:
Hi Freddy,
Sure, thanks for looking into this. Here you go:
2017-05-10 11:35:30,249-03 INFO
[org.ovirt.engine.core.bll.aaa.SessionDataContainer
,
Can you provide the engine log ?
Thanks,
Freddy
On Wed, May 10, 2017 at 5:57 PM, Charles Tassell
<char...@islandadmin.ca <mailto:char...@islandadmin.ca>> wrote:
Hi Everyone,
I'm having some issues with my oVirt 4.1 (fully updated to
latest release as of yesterday
Hi Martin,
oVirt 4.1 (and maybe earlier versions that I just didn't notice) uses
policy-based routing. You can see these routes by typing "ip rule show"
on one of the hosts.
In situations such as yours, where you are connected to multiple
networks and need to specify which one has the
ID. If you are
going through a switch then the switch will have to be setup to handle
the VLANs.
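The policy-based routing mentioned above can be inspected without privileges; a sketch, guarded for systems without the ip tool:

```shell
# List the policy-routing rules; each oVirt network normally gets its own
# rule pointing at a dedicated routing table.
if command -v ip >/dev/null 2>&1; then
    ip rule show || true
fi
listed=yes
```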
On 2017-04-20 10:07 PM, Linov Suresh wrote:
Hi Charles,
We use an Ethernet cable to connect between the hosts through 10G port.
We want to use the guest VM's hosted on different hosts 10G network
Hi Suresh,
You would need to connect the two OVN instances somehow. If it's
just two single hosts, I think the easiest way would be to create a VPN
connection between the two hosts with OpenVPN or the like and then add
the tun/tap interfaces into the OVN on each box. You might run into
I think he means SPM? I've seen that when the node with SPM goes down it
isn't really seamless and takes a minute or two to catch up, unless of course
you put that node in maintenance mode first, but that isn't possible if it
crashes or something.
On Apr 15, 2017 1:38 PM, "FERNANDO FREDIANI"
On 4/10/2017 6:59 AM, Charles Tassell wrote:
Ah, spoke too soon. 30 seconds later the network went down with IPv6
disabled. So it does appear to be a host forwarding problem, not a VM
problem. I have an oVirt 4.0 cluster on the same network that doesn't
ha
. The eth1
NIC disappearing is still worrisome though.
On 2017-04-10 07:13 AM, Charles Tassell wrote:
Hi Everyone,
Thanks for the help, answers below.
On 2017-04-10 05:27 AM, Sandro Bonazzola wrote:
Adding Simone and Martin, replying inline.
On Mon, Apr 10, 2017 at 10:16 AM, Ondrej Svoboda
Hi Everyone,
Thanks for the help, answers below.
On 2017-04-10 05:27 AM, Sandro Bonazzola wrote:
Adding Simone and Martin, replying inline.
On Mon, Apr 10, 2017 at 10:16 AM, Ondrej Svoboda <osvob...@redhat.com
<mailto:osvob...@redhat.com>> wrote:
Hello Charles,
Fi
Hi Everyone,
Okay, I'm again having problems with getting basic networking setup
with oVirt 4.1 Here is my situation. I have two servers I want to use
to create an oVirt cluster, with two different networks. My "public"
network is a 1G link on device em1 connected to my Internet feed,
Hey Kai,
Go into the Hosts tab, click on the host you want to add the bonded
interface to then on the Network Interfaces tab in the bottom pane.
Click on Setup Host Networks and drag an unused network adaptor over the
one that you want to bond it with. A box will come up and let you
cluster I
get "Cannot setup Networks. Operation can be performed only when Host
status is Maintenance, Up, NonOperational."
On 2017-04-05 10:00 AM, Petr Horacek wrote:
Hello Charles,
I think you can get your desired network with oVirt.
Create second network external_network,
Hi Guys,
I'm wondering, is it possible to override VDSM and setup my network
interfaces manually? I've got two Dell servers with dual-10G networking
that I want to use bonded (LACP) for ovirtmgmt and then some 1G
interfaces that I want to use for the VM network/Internet connection.
I've
ntation/en-us/red_hat_virtualization/4.1-beta/html/self-hosted_engine_guide/>
On 30 March 2017 at 18:17, Charles Tassell <char...@islandadmin.ca
<mailto:char...@islandadmin.ca>> wrote:
Hello,
Are there any more recent install docs than what's on the
website? Th
Hello,
Are there any more recent install docs than what's on the website?
Those all seem to be back from the 3.x days and don't really deal with
the modern setup of using a hosted engine.
More specifically, I've noticed that when deploying a hosted engine I
can't import the storage
ept target with multi IP addresses.
2017-03-26 9:40 GMT+02:00 Yaniv Kaul <yk...@redhat.com
<mailto:yk...@redhat.com>>:
On Sat, Mar 25, 2017 at 9:20 AM, Charles Tassell
<ctass...@gmail.com <mailto:ctass...@gmail.com>> wrote:
Hi Everyone,
I
Hi Everyone,
I'm about to setup an oVirt cluster with two hosts hitting a Linux
storage server. Since the Linux box can provide the storage in pretty
much any form, I'm wondering which option is "best." Our primary focus
is on reliability, with performance being a close second. Since we
Well, yes, I am sure about the research I did :-)
However, to your point, I didn't actually consider that, and of course it
now clearly makes the most sense. Thanks!
On Wed, Mar 22, 2017 at 9:57 AM, Yedidyah Bar David <d...@redhat.com> wrote:
> On Wed, Mar 22, 2017 at 3:43 PM, Charles Kozler