I am currently in the process of researching converting an existing SMB
infra to virtual. oVirt/RHEV is a strong contender and checks off a lot
of boxes on our list. GlusterFS is appealing, but I am finding it very
difficult to find any answers or stats/numbers regarding how well it can
perform.
You should be able to do this without having to shut the VM down.
On Sun, May 13, 2018, 7:04 AM Alex Bartonek, wrote:
> Just trying to make sure there isn't an easier way to do what I'm doing.
> I don't have an HA environment in my homelab. I have 3 VMs and I wanted
> to copy them
AFAIK you should have at minimum 3 hosts. Above that you can have any
amount you want, but I believe you need to have multiples of 3 gluster bricks.
On Mon, May 21, 2018 at 10:57 AM, wrote:
> Is 3 the max number of hosts?
>
>
> On 2018-05-20 22:45,
I'm about to build a 3 host oVirt hyperconverged cluster and was wondering
if it's recommended to use straight-up CentOS, or would oVirt Node be a
better choice?
Thanks
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to
Put cluster in global maintenance. Update hosted engine to 4.2. Once it
comes back up put one of your hosts in maintenance mode, upgrade it then
set back to active. Do this for each host until you are done.
On Mon, Jun 4, 2018 at 9:09 AM, Arman Khalatyan wrote:
> Hello everybody,
>
> I
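The steps above, assuming the standard hosted-engine CLI (release rpm URL per the 4.2 docs), roughly translate to this sketch:

```shell
# On any HA host: stop the HA agents from touching the engine VM
hosted-engine --set-maintenance --mode=global

# On the engine VM: pull in the 4.2 repo, upgrade the setup packages, run setup
yum install https://resources.ovirt.org/pub/yum-repo/ovirt-release42.rpm
yum update "ovirt-*-setup*"
engine-setup

# Back on a host: leave global maintenance once the engine is back up
hosted-engine --set-maintenance --mode=none

# Then per host: set it to maintenance in the Admin Portal, upgrade it,
# and activate it again before moving on to the next one
```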
Try using the virsh command
On Tue, May 29, 2018 at 11:08 AM, David David wrote:
> hi.
>
> How do I start a VM directly on the hypervisor? ovirt-engine crashed.
>
> The VM's id is known.
>
> vdsClient hasn't arguments for starting a VM.
>
> Thanks.
>
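A rough sketch of the virsh route on an oVirt host: libvirt there is SASL-protected, so you first need a SASL user (username below is arbitrary). Note this starts the VM behind the engine's back, so treat it as a last resort.

```shell
# libvirt on an oVirt host requires SASL auth; create a user first
saslpasswd2 -a libvirt admin     # prompts for a password

# list all domains (running and shut off), then start the one you need
virsh list --all
virsh start <vm-name>
```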
finished racking the servers and haven't had a chance to start the oVirt
install yet, but hope to get to it within the next couple of weeks. I've
been a bit worried about the performance of gluster; I'm hoping I won't be
disappointed.
Jayme
On Sat, Jun 23, 2018, 10:45 AM , wrote:
> I have deplo
I performed an oVirt 4.2 upgrade on a 3 host cluster with NFS shared storage.
The shared storage is mounted from one of the hosts.
I upgraded the hosted engine first, downloading the 4.2 rpm, doing a yum
update, then engine-setup, which seemed to complete successfully; at the end
it powered down the
her platforms (sans the GUI of
> course).
>
>
>
> On Thu, Jan 18, 2018 at 9:59 AM, Jayme <jay...@gmail.com> wrote:
>
>> I've been running a non-production oVirt setup for a while and plan on
>> building a more robust oVirt setup for eventual production use. Part of
>
I've been running a non-production oVirt setup for a while and plan on
building a more robust oVirt setup for eventual production use. Part of
that planning of course is backup/disaster recovery options.
I've been playing around with a few options to backup oVirt, I'm sure most
of you are aware
For rackmount, Dell R710s are fairly popular for home labs; they have good
specs and can be found at reasonable prices on eBay.
On Thu, Jan 18, 2018 at 4:52 PM, Abdurrahman A. Ibrahim <
a.rahman.at...@gmail.com> wrote:
> Hello,
>
> I am planning to buy home lab hardware to be used by oVirt.
>
>
First, apologies for all the posts to this list lately, I've been having a
heck of a time after 4.2 upgrade and you've been helpful, I appreciate
that.
Since 4.2 upgrade I'm experiencing a few problems that I'm trying to debug.
Current status is engine and all hosts are upgraded to 4.2, and
I am attempting to narrow down choices for storage in a new oVirt build
that will eventually be used for a mix of dev and production servers.
My current space usage excluding backups sits at about only 1TB so I figure
3-5 TB would be more than enough for VM storage only + some room to grow.
There
I've been considering hyperconverged oVirt setup VS san/nas but I wonder
how the meltdown patches have affected glusterFS performance since it is
CPU intensive. Has anyone who has applied recent kernel updates noticed a
performance drop with glusterFS?
Our oVirt environment was originally set up by someone else. The hosted
engine VM has a custom name, but it seems to me like some of the
hosted-engine tools, such as hosted-engine --console for example, expect the
domain to be "HostedEngine". I tried renaming it in the admin interface
but changes
ah Bar David <d...@redhat.com>
> wrote:
>
>> On Sun, Jan 14, 2018 at 3:46 PM, Yedidyah Bar David <d...@redhat.com>
>> wrote:
>> > On Sun, Jan 14, 2018 at 3:37 PM, Jayme <jay...@gmail.com> wrote:
>> >> First, apologies for all the posts to this l
more
On Sun, Jan 14, 2018 at 2:09 PM, Jayme <jay...@gmail.com> wrote:
> I managed to fix the error with HA broker and agent continually crashing.
> I found that it was not a permissions problem on the path mentioned in the
> log:
>
>
> On Sun, Jan 14, 2018 at 2:07 P
Please help, I'm really not sure what else to try at this point. Thank you
for reading!
I'm still working on trying to get my hosted engine running after a botched
upgrade to 4.2. Storage is NFS mounted from within one of the hosts. Right
now I have 3 centos7 hosts that are fully updated with
, line 3500, in getIoTuneResponse
    res = self._dom.blockIoTune(
  File "/usr/lib/python2.7/site-packages/vdsm/virt/virdomain.py", line 47, in __getattr__
    % self.vmid)
NotConnectedError: VM '4013c829-c9d7-4b72-90d5-6fe58137504c' was not defined yet or was undefined
On
recently upgraded to 4.2 and had some problems with engine vm running, got
that cleared up now my only remaining issue is that now it seems
ovirt-ha-broker and ovirt-ha-agent are continually crashing on all three of
my hosts. Everything is up and working fine otherwise, all VMs running and
hosted
. The hosted engine had updates as well as a full and complete
engine-setup, but did not return after being shut down. There must be some
way I can get the engine running again? Please
On Thu, Jan 11, 2018 at 8:24 AM, Jayme <jay...@gmail.com> wrote:
> The hosts have all ready been full
omorrow or
> this weekend and see what I get in just basic writing MB/s and let you know.
>
>
>
> Regards
>
> Bill
>
>
>
>
>
> *From:* Jayme
> *Sent:* Thursday, August 2, 2018 8:12 AM
> *To:* users
> *Subject:* [ovirt-users] Tuning and testing GlusterFS pe
I got it working; it turns out that IPMI over LAN was not enabled in the
iDRAC network settings.
Thanks!
On Wed, Aug 1, 2018 at 10:52 AM, Nicolas Ecarnot
wrote:
> Le 01/08/2018 à 15:28, Jayme a écrit :
>
>> I just enabled power management/fencing successfully on two of my ho
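For anyone hitting the same thing: before adding fencing in oVirt, you can verify from another host that the BMC actually answers IPMI over LAN (IP and credentials are placeholders):

```shell
# Confirm the iDRAC responds to IPMI over LAN
# (requires "IPMI over LAN" enabled in the iDRAC network settings)
ipmitool -I lanplus -H <idrac-ip> -U <user> -P <password> chassis power status
```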
I just enabled power management/fencing successfully on two of my hosts
(Dell PowerEdge R720s with iDRAC 7) but am failing to add the third. I
enter the IP and user/pass like the others; it takes 15 seconds or so then
spits out "Test Failed: Internal JSON-RPC error".
I tried resetting the iDRAC
he console icon.
>
>
>
> *From:* Jayme
> *Sent:* jeudi 2 août 2018 01:02
> *To:* users
> *Subject:* [ovirt-users] Possible to use Console via Mac web browser?
>
>
>
> I recall in the past (on a bit older of an oVirt build) I was able to
> launch VM consoles from my web
er this afternoon, will let you know any findings.
>
>
>
> Regards
>
> Bill
>
>
>
>
>
> *From:* Jayme
> *Sent:* Thursday, August 2, 2018 11:07 AM
> *To:* William Dossett
> *Cc:* users
> *Subject:* Re: [ovirt-users] Tuning and testing GlusterFS perfo
int to suggest that for best performance select it for any volume that is
going to be a data volume for VMs.
I simply installed using the latest node ISO / default cockpit deployment.
Hope this helps!
- Jayme
On Fri, Aug 3, 2018 at 5:15 AM, Sahina Bose wrote:
>
>
> On Fri, Aug 3, 201
Latest version of oVirt node 4.2 installed on three hosts. I completed
successfully the cockpit gdeploy process to deploy HCI. All of that went
well with no errors. I then proceeded to the hosted engine deployment step
which eventually failed (log attached).
This is the current status:
--==
> On Mon, Jul 30, 2018 at 6:46 PM, Jayme wrote:
>
>> It's nice to know someone else out there has these questions as well.
>> What I'd really like confirmation on is basically this: In a JBOD oVirt
>> HCI configuration with multiple disk devices per host node is it
.
- Jayme
On Mon, Jul 30, 2018 at 11:08 AM, femi adegoke
wrote:
> @jayme
> @william.dossett
>
> When you say "JBOD", are these hosts with xx number of disks or hosts with
> a physically attached JBOD?
Correct, no RAID on the two 2TB SSDs. I plan on using a replica three HCI
setup with no arbiter, so each host will have a copy of the data.
On Mon, Jul 30, 2018, 11:39 AM femi adegoke,
wrote:
> Thanks Jayme for replying.
>
> In your case, there is no RAID on your 2 x
It's nice to know someone else out there has these questions as well. What
I'd really like confirmation on is basically this: In a JBOD oVirt HCI
configuration with multiple disk devices per host node is it possible to
have just one data volume or is it necessary to create multiple data
volumes
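On the single-data-volume question: as far as I understand gluster, one replica-3 volume can span both JBOD disks by giving the volume two bricks per host, listed so each replica set lands on three different hosts. A hedged sketch (hostnames and brick paths are placeholders):

```shell
# One replica-3 volume using both disks on each of the three hosts.
# Bricks are grouped into replica sets in the order listed, so each
# set of three stays on three different hosts.
gluster volume create data replica 3 \
  host1:/gluster_bricks/sda/data host2:/gluster_bricks/sda/data host3:/gluster_bricks/sda/data \
  host1:/gluster_bricks/sdb/data host2:/gluster_bricks/sdb/data host3:/gluster_bricks/sdb/data
gluster volume start data
```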
:52 AM, Sahina Bose wrote:
>
>
> On Mon, Jul 30, 2018 at 7:09 PM, Jayme wrote:
>
>> Hello,
>>
>> Thanks for the feedback. I don't "need" to create only one data domain,
>> I'm more or less trying to confirm whether or not it is proper to conf
Why would they be set up by default via the cockpit if they are no longer
needed?
On Sat, Jul 28, 2018, 1:13 PM femi adegoke, wrote:
> There is no difference.
>
> I think those names were carried over from previous generations when you
> had to have an ISO domain for storing ISOs.
>
> Now you
but never does.
I still feel like it could be network related in some way just not sure
how. Any ideas?
On Mon, Jul 30, 2018, 2:25 PM Jayme, wrote:
> Latest version of oVirt node 4.2 installed on three hosts. I completed
> successfully the cockpit gdeploy process to deploy HCI
as
> wrong… can you get to the console like that?
>
>
>
> Regards
>
> Bill
>
>
>
>
>
>
>
> *From:* Jayme [mailto:jay...@gmail.com]
> *Sent:* Monday, July 30, 2018 3:38 PM
> *To:* users
> *Subject:* [ovirt-users] Re: Hosted Engine deploy faile
-- Forwarded message -
From: Jayme
Date: Sun, Jul 29, 2018, 10:09 PM
Subject: Re: [ovirt-users] Re: up and running with ovirt 4.2 and gluster
To: Mike
On this same subject one thing I'm currently hung up on re: HCI setup is
the next step in the cockpit config for glusterfs. I
I'm building a 3 host HCI configuration. I have servers configured and
latest version of oVirt node 4.2 installed. I'm preparing to run the
cockpit hosted engine + glusterFS deploy.
Each of my hosts have two SSDs in JBOD /dev/sda and /dev/sdb
Currently these drives have not been touched,
Hello,
I have read it several times and it is helpful but unfortunately it doesn't
discuss using multiple drives. In that guide they are just using one drive
(sdb) in each host, my confusion revolves around how to make use of the
second drive I have in each host.
For example if I left all three
I think we are in a race to see who can get a new HCI setup built faster.
There have been a few hurdles along the way :)
On Tue, Jul 31, 2018, 6:53 AM Bill Dossett, wrote:
> Hmm, resounding silence…. Have attached a screen shot of the host storage
> devices on the host I added after the
PLAY [gluster_servers]
*
TASK [Create VDO with specified size]
**
failed: [HOST] (item={u'disk': u'/dev/sda', u'logicalsize': u'17000G',
u'name': u'vdo_sda'}) => {"changed": false, "err": "usage: vdo
logicalsize=17000G,17000G
blockmapcachesize=128M
readcache=enabled
readcachesize=20M
emulate512=on
writepolicy=auto
ignore_vdo_errors=no
slabsize=32G,32G
On Tue, Jul 31, 2018 at 4:19 PM, Jayme wrote:
> PLAY [gluster_servers] **
> ***
&g
>> start building out my storage today. I had to get down into the OS and
>> work with the logical volume manager commands a bit to clean up some of my
>> mess and also figured out how to blacklist the disks on my 4th and 5th
>> host from multipath…. So that’s encouraging…
ing like
>>remote-viewer vnc://:
>>
>>
>>
>> Which got me in and allowed me to fix the networking once I saw what was
>> wrong… can you get to the console like that?
>>
>>
>>
>> Regards
>>
>> Bill
>>
>>
>>
>&g
Is it possible to apply vdo on an existing HCI build or does it need to be
rebuilt?
On Wed, Aug 1, 2018, 6:45 AM Gobinda Das, wrote:
> The latest gdeploy version is gdeploy-2.0.2-27
>
> On Wed, Aug 1, 2018 at 3:12 PM, Jayme wrote:
>
>> I burned the node image yesterday 4.2.
e gdeploy version?
>
> On Wed, 1 Aug 2018, 12:55 a.m. Jayme, wrote:
>
>> More info from the gdploy config:
>>
>> action=create
>> devices=sda,sdb
>> names=vdo_sda,vdo_sdb
>> logicalsize=17000G,17000G
>> blockmapcachesize=128M
>> readcache
-family: inet
nfs.disable: on
performance.client-io-threads: off
On Fri, Aug 3, 2018 at 6:53 AM, Sahina Bose wrote:
>
>
> On Fri, 3 Aug 2018 at 3:07 PM, Jayme wrote:
>
>> Hello,
>>
>> The option to optimize for virt store is tough to find (in my opinion)
>>
Scratch that, there are actually a couple subtle changes, I did a diff to
compare:
< server.allow-insecure: on
29c27
< network.remote-dio: enable
---
> network.remote-dio: off
On Sat, Aug 4, 2018 at 10:34 AM, Jayme wrote:
> One more interesting thing to note. As a test I ju
eation.
>
>
>
> Thanks again
>
> Bill
>
>
>
>
>
> *From:* Jayme
> *Sent:* Thursday, August 2, 2018 5:56 PM
> *To:* William Dossett
> *Cc:* users
> *Subject:* Re: [ovirt-users] Tuning and testing GlusterFS performance
>
>
>
> Bill,
>
>
&g
on the engine volume as well?
On Sat, Aug 4, 2018 at 10:26 AM, Jayme wrote:
> Interesting that it should have been set by cockpit but seemingly wasn't
> (at least it did not appear so in my case, as setting optimize for virt
> increased performance dramatically). I did indeed use th
volume info now VS what I just
posted in above reply before I made the optimize change all the gluster
options are identical, not one value is changed as far as I can see. What
is the optimize for virt store option in the admin GUI doing exactly?
On Sat, Aug 4, 2018 at 10:29 AM, Jayme wrote
in
addition to regular volume activity
On Sun, Aug 5, 2018, 1:06 PM William Dossett,
wrote:
> I think Percs have queue depth of 31 if that’s of any help… fairly common
> with that level of controller.
>
>
>
> *From:* Jayme
> *Sent:* Sunday, August 5, 2018 9:50 AM
> *To:* Darrel
t, but this is feeling much more like a finished product than
> it used to.
>
>
>
> Regards
>
> Bill
>
>
>
>
>
> *From:* Jayme
> *Sent:* Sunday, August 5, 2018 10:18 AM
> *To:* William Dossett
> *Cc:* Darrell Budic ; users
> *Subject:* Re: [ovirt-u
e are WAY too high and will degrade performance to
> the point of causing problems on decently used volumes during a heal. If
> these are being set by the HCI installer, I’d recommend changing them.
>
>
> --
> *From:* Jayme
> *Subject:* [ovirt-users] Re: T
I recall in the past (on a bit older of an oVirt build) I was able to
launch VM consoles from my web browser. I believe it may have been using
spice html5 at the time, is this still possible? Currently launching
console downloads the typical .vv file.
What is currently the best/easiest method
So I've finally completed my first HCI build using the below configuration:
3x Dell PowerEdge R720
2x 2.9 GHz 8 Core E5-2690
256GB RAM
2x 250GB SSD RAID 1 (boot/OS)
2x 2TB SSD JBOD passthrough (used for gluster bricks)
1GbE NIC for management, 10GbE NIC for Gluster
Using Replica 3 with no arbiter.
I'm deploying three host Ovirt HCI and I'm a bit confused about how to
configure the glusterFS volumes and bricks in the cockpit.
My three hosts are configured the exact same way physically:
2x250GB SSDs in Raid 1 for OS
2 x 2TB SSD in JBOD which I intend to use for Gluster (devices sda and sdb)
I followed this guide to get my three node HCI cluster up and running:
https://www.ovirt.org/blog/2018/02/up-and-running-with-ovirt-4-2-and-gluster-storage/
The docs were a bit out of date but most things matched up.
I have configured an internal GlusterFS network on a separate subnet on
10Gbe
Recently built a three host HCI with oVirt node 4.2.5. I am seeing the
following error in each hosts syslog often. What does it mean and how can
it be corrected?
vdsm[3470]: ERROR Internal server error#012Traceback (most recent call
last):#012 File
I have a newly built three node HCI glusterFS cluster (deployed with
cockpit) running ovirt 4.2.5
I'm noticing that a few times a day I'm seeing the following event log:
Invalid status on Data Center Default. Setting status to Non Responsive.
In every single case it happens about 10 seconds
I'll check that out and let you know. I am not using vdo, so perhaps it's
possible that the vdo python module does not get installed even though
apparently it's still needed without vdo present
On Wed, Aug 8, 2018, 1:08 AM Sahina Bose, wrote:
>
>
> On Tue, Aug 7, 2018 at 8:23 PM, Jay
I have multiple data volumes and when I import a VM from an NFS attached
export domain the disk always gets created on on specific volume. I
understand I can move the disk from one volume to another after the fact
but is there a way I can specify which volume to use for the initial
import?
s!
On Wed, Aug 8, 2018 at 8:18 AM, Jayme wrote:
> I'll check that out and let you know. I am not using vdo, so perhaps it's
> possible that the vdo python module does not get installed even though
> apparently it's still needed without vdo present
>
> On Wed, Aug 8, 2018, 1:
defaults were not set to optimize for virt. My gluster performance was as
bad as the first time, around 5MB/sec on dd tests. After optimizing volumes
for virt store it increased by 10x. If these settings are supposed to be
applied by default, it does not appear to be working as intended.
- Jayme
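For reference, the admin GUI's "Optimize for Virt Store" appears to apply gluster's bundled "virt" option group; the CLI equivalent, as best I can tell (volume name is a placeholder), is:

```shell
# Apply the "virt" option group (the settings live in
# /var/lib/glusterd/groups/virt on the gluster hosts)
gluster volume set data group virt

# Verify options such as network.remote-dio actually changed
gluster volume info data
```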
Actually I just figured out what was causing this. I disabled root logins
in sshd config.
On Mon, Aug 20, 2018 at 2:28 PM Jayme wrote:
> I have a fairly recent three node HCI setup running 4.2.5. I've recently
> updated hosted engine to the latest version (including yum updates). Wh
I have a fairly recent three node HCI setup running 4.2.5. I've recently
updated hosted engine to the latest version (including yum updates). When
I check for host updates through the engine gui I get the following error
for all three of my hosts:
Failed to check for available updates on host
Is there an updated guide for setting up GlusterFS geo-replication? What I
am interested in is having another oVirt setup on a separate server with
glusterFS volume replicated to it. If my primary cluster went down I would
be able to start important VMs on the secondary oVirt build until I'm
Is it expected that choosing Geo-replication --> New in the oVirt GUI does
nothing, or is that a bug?
On Tue, Aug 28, 2018, 2:02 PM femi adegoke, wrote:
> https://www.youtube.com/watch?v=UH8B7Nek0Nc
Hello,
That video has good information but unfortunately it's about Site to Site
DR, not GlusterFS georeplication. I'm looking for information regarding
how to configure GlusterFS replication for use as disaster recovery.
On Tue, Aug 28, 2018 at 2:32 PM femi adegoke
wrote:
> That youtube
anything at all to do with what could have happened.
Thanks very much again, I very much appreciate the help!
- Jayme
On Fri, Jan 12, 2018 at 8:44 AM, Simone Tiraboschi <stira...@redhat.com>
wrote:
>
>
> On Fri, Jan 12, 2018 at 11:11 AM, Martin Sivak <msi...@redhat.c
You do not need to define the gluster IPs or hostnames during the initial
deployment. You deploy first, then you set up the gluster network after.
Search for the "up and running with oVirt 4.2 and gluster" guide; it's slightly
dated but goes over how to set up the separate gluster network.
On Tue, Sep
You don't really need both a data and a vmstore domain. Vmstore I believe is
meant to be the new ISO domain, but even it is not needed, as all data domains
act the same. You can use separate data and vmstore domains because it will give
you greater flexibility in terms of backing up the volumes, so you can
choose
at 5:38 PM, Vincent Royer <vinc...@epicenergy.ca> wrote:
> Jayme,
>
> I'm doing a very similar build, the only difference really is I am using
> SSDs instead of HDDs. I have similar questions as you regarding expected
> performance. Have you considered JBOD + NFS? Puttin
I'm spec'ing hardware for a 3-node oVirt build (on somewhat of a budget).
I plan to do 20-30 Linux VMs most of them very light weight + a couple of
heavier hitting web and DB servers with frequent rsync backups. Some have
a lot of small files from large github repos etc.
3X of the following:
ent Royer <vinc...@epicenergy.ca> wrote:
>
>> Jayme,
>>
>> I'm doing a very similar build, the only difference really is I am using
>> SSDs instead of HDDs. I have similar questions as you regarding expected
>> performance. Have you considered JBOD + NFS? P
.
On Thu, Apr 5, 2018 at 2:56 AM, Alex K <rightkickt...@gmail.com> wrote:
> Hi,
>
> You should be ok with the setup.
> I am running around 20 vms (linux and windows, small and medium size) with
> the half of your specs. With 10G network replica 3 is ok.
>
> Alex
>
> On W
I'm very strongly leaning toward an oVirt hyperconverged setup. I plan on
using fairly robust hardware (ssds, lots of ram, 10gb network) as per the
guides, however I am worried about one thing which is glusterFS performance
in regards to its handling of many small files. I've read various forum
I am planning on changing the subnet of our private IP space from
192.168.0.x to something else (due to conflicts with VPN on our network).
What considerations do I need to make in regards to changing the IPs on
oVirt cluster?. The VMs should be fairly easy but what about the hosts,
and hosted
I notice that almost every time I perform an oVirt host upgrade, the host is
automatically rebooted. Why is this done, and is there a way to disable the
automated reboot process?
I'm constantly seeing this error in all of my host syslogs:
ovs-vsctl: ovs|1|db_ctl_base|ERR|unix:/var/run/openvswitch/db.sock:
database connection failed (No such file or directory)
My cluster is set to linux bridge and not using OVS. How can I stop the
error message or disable ovs?
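One thing that may help (an assumption on my part, not a confirmed fix): if the cluster switch type is Linux Bridge and nothing else uses OVS, stopping and disabling the services on each host should at least silence the message:

```shell
# Stop and disable Open vSwitch on a host that only uses linux bridges;
# vdsm may still probe the socket, so watch the logs afterwards
systemctl stop openvswitch ovsdb-server
systemctl disable openvswitch
```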
I'm spec'ing a new oVirt build using three Dell R720's w/ 256GB. I'm
considering storage options. I don't have a requirement for high amounts
of storage, I have a little over 1TB to store but want some overhead so I'm
thinking 2TB of usable space would be sufficient.
I've been doing some
;sab...@redhat.com> wrote:
>
>>
>>
>> On Mon, Mar 19, 2018 at 5:57 PM, Jayme <jay...@gmail.com> wrote:
>>
>>> I'm spec'ing a new oVirt build using three Dell R720's w/ 256GB. I'm
>>> considering storage options. I don't have a requirement for high
seeing... Storage costs cpu/ram cycles too
>
> On Sat, Oct 20, 2018 at 7:29 PM Donny Davis wrote:
>
>> I am not trying to be sarcastic here, but the host resources are
>> controlled by what you allocate to the vm... that is kinda how
>> virtualization works
>>
>>
I'm wondering how I can best limit the ability of VMs to overrun the load
on hosts. I have a fairly stock 4.2 HCI setup with three well spec'ed
servers, 10Gbe/SSDs, plenty of ram and CPU with only a hand full of light
use VMs. I notice when the occasional demanding job is run on a VM I'm
seeing
Darn autocorrect, sshd config rather
On Thu, Oct 25, 2018, 7:29 AM Jarosław Prokopowski,
wrote:
> Hi,
>
> Please help! :-) I couldn't find any solution via google.
>
> I followed this document to create oVirt hyperconverged on 3 hosts using
> cockpit wizard:
>
>
>
hy the error is so strange to me. I event tested ansible from
>> oVirt host to others and it works ok using ssh keys.
>>
>>
>> W dniu czw., 25.10.2018 o 13:43 Jayme napisał(a):
>>
>>> You should also make sure the host can ssh to itself and accept keys
>>&
It looks to me like a fairly obvious ssh problem. Are the ssh keys set up
for the root user and PermitRootLogin yes in the sshd config?
On Thu, Oct 25, 2018, 7:29 AM Jarosław Prokopowski,
wrote:
> Hi,
>
> Please help! :-) I couldn't find any solution via google.
>
> I followed this document to
You should also make sure the host can ssh to itself and accept keys
On Thu, Oct 25, 2018, 8:42 AM Jayme, wrote:
> Darn autocorrect, sshd config rather
>
> On Thu, Oct 25, 2018, 7:29 AM Jarosław Prokopowski, <
> jprokopow...@gmail.com> wrote:
>
>> Hi,
>>
>&g
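A quick way to check both points raised in this thread (root login allowed, and the host accepting its own key):

```shell
# On each host, as root: confirm root logins are permitted
grep -i '^PermitRootLogin' /etc/ssh/sshd_config

# Confirm key-based ssh works to the host itself without a password prompt
ssh -o BatchMode=yes root@localhost true && echo "key auth OK"
```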
With dell servers for example it's as simple as editing the host and
enabling power management then selecting the appropriate drac version and
entering login details. I'm not familiar with what supermicro uses for
remote management but there is likely an option there to support it
On Mon, Nov 5,
Tony, is there a reason why you wouldn't just do a three node hyperconverged
setup with self hosted engine? This is the best option for a three server
setup imo.
On Tue, Nov 6, 2018, 8:37 AM Tony Brian Albers wrote:
> Hi guys,
>
> I have 3 machines that I'd like to test oVirt/gluster on.
>
> The idea is
Is it possible to update oVirt HCI environment automatically with ansible?
If so are there any specific instructions or details on the process?
Thanks!
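There is an oVirt Ansible role for rolling cluster upgrades (packaged as ovirt-ansible-cluster-upgrade). A minimal playbook sketch, assuming the role is installed and with engine URL/credentials as placeholders:

```yaml
# Rolling upgrade of all hosts in a cluster via the oVirt.cluster-upgrade role
- hosts: localhost
  connection: local
  vars:
    engine_url: https://engine.example.com/ovirt-engine/api
    engine_user: admin@internal
    engine_password: secret
    cluster_name: Default
  roles:
    - oVirt.cluster-upgrade
```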
I have a very standard three node HCI setup running the latest version of
oVirt 4.2. I've been having some problems updating hosts; the last update
that was released a few weeks ago would produce InstallFailed when updating
hosts. I was able to resolve this by rebooting the host first, then
I am having the same issue as well attempting to update oVirt node to
latest.
On Wed, Nov 14, 2018 at 11:07 AM Giulio Casella wrote:
> It's due to a update of collectd in epel, but ovirt repos contain also
> collectd-write_http and collectd-disk (still not updated). We have to
> wait for
leaving the others untouched.
could it be because after the first host comes up from a reboot the gluster
healing prevents the other host statuses from being "up" thus ansible skips
over them?
On Wed, Nov 14, 2018 at 11:49 AM Martin Perina wrote:
> Hi Jayme,
>
> you can upgrad
is available I can do some more testing with it.
On Wed, Nov 14, 2018 at 3:16 PM Martin Perina wrote:
>
>
> On Wed, Nov 14, 2018 at 6:11 PM Jayme wrote:
>
>> I've been giving this a try but have been running in to a few issues.
>> Namely, it seems to upgrade the first
It's no longer there. I use the noVNC option for accessing consoles in a
browser; works great.
On Thu, Oct 11, 2018, 5:15 AM , wrote:
> Hello!
> Strange, but i have no spice-html5 option in vm console settings.
> https://prnt.sc/l4qz00
> Should i add a spice proxy for this?
>
> Version 4.2.6.4-1.el7
You should be using shared external storage or GlusterFS; if gluster, you
should have other drives in the server to provision as gluster bricks
during the hyperconverged deployment.
On Mon, Oct 8, 2018, 8:07 AM Stefano Danzi, wrote:
> Hi! It's the first time that I use node.
>
> I installed node
I've been seeing these warnings myself, on the 1Gb ovirt management network
(glusterFS is on a 10GbE backend). I haven't correlated with network graphs
yet, but I don't know what would be happening on my management network that
would be exhausting a 1Gb link.
On Fri, Aug 31, 2018 at 3:27 AM Florian Schmid wrote:
minik Holler wrote:
>
>> On Tue, 18 Sep 2018 19:10:48 -0300
>> Jayme wrote:
>>
>> > I changed engine and host ips to a totally different subnet. My
>>
>> The way to change the IP addresses of the hosts via oVirt UI is
>> Compute > Hosts >
Upgrading the engine is fairly straightforward. What I do is place the
cluster in global maintenance mode. Then on the engine VM, yum update the
oVirt setup packages and run engine-setup. After the upgrade I do a general
yum update on the engine VM to update other non-oVirt packages.
On Thu, Sep 20, 2018,