I am trying to eat my own dog food: infrastructure as code...
So I have taken three physical servers, set up all the base packages, configured
NICs, NTP, and DNS, collected disk UUIDs, etc.
The servers are prepped.
One server has Cockpit and ovirt-engine installed, and the service is
started.
I have a pair of nodes which serve DNS / NTP / FTP / AD / Kerberos / IPLB
etc.:
ns01, ns02
These two "infrastructure" VMs run HAProxy and Pacemaker, and I have set them
to "HA" within oVirt, with node affinity.
But within the Pacemaker setup, the nodes used to be able to call the STONITH
function of KVM to fence each other.
I used to have a hand-built CentOS + KVM + Gluster setup.
I moved to an oVirt-controlled HCI system, and so far I have been very happy
with the stability and quality.
One feature that was working in the old "hand-built" version was fencing.
I have an old APC MasterSwitch AP9606 that I used as the power fencing
device.
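As an aside, a fence agent can usually be tested from the shell before wiring it into oVirt power management. A minimal sketch, assuming fence-agents is installed and the AP9606 still answers on telnet (the address, login, and outlet number below are placeholders):

    # Query outlet 3 on the APC switch (placeholder IP/credentials)
    fence_apc --ip 192.168.1.50 --username apc --password apc \
              --plug 3 --action status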
I have for many years used Gluster because... well, 3 nodes: so long as I can
pull a drive out, I can get my data, and with three copies I have a much
higher chance of getting it.
Downsides to Gluster: it is slower (it's my home, meh, and I have SSDs to
avoid MTBF issues), and with VDO and thin provisioning ...
Off to find the next "feature"
:)
On Wed, Sep 30, 2020 at 4:02 PM Jeremey Wise wrote:
>
> I found this note from Red Hat on Bugzilla:
>
> https://bugzilla.redhat.com/show_bug.cgi?id=1827033
>
> Seems like this could be my issue.
>
> This could be where ...
Derek Atkins wrote:
> Hi,
>
> On Wed, September 30, 2020 3:50 pm, Jeremey Wise wrote:
> > As the three servers are CentOS 8 minimal installs + the oVirt HCI wizard
> > to keep them lean and mean... a couple questions.
>
> Note that you run this on the Engine VM, not on a host.
I found this note from Red Hat on Bugzilla:
https://bugzilla.redhat.com/show_bug.cgi?id=1827033
Seems like this could be my issue.
This could be where I had the issue with the rebuild of ovirt-engine, and then
had to scrape out the disk files to import them back in. I then moved the
"old files" into ...
> ... I run it from rc.local:
>
> /usr/local/sbin/start_vms.py > /var/log/start_vms 2>&1 &
>
> The script is smart enough to wait for the engine to be fully active.
>
> -derek
>
> On Wed, September 30, 2020 3:11 pm, Jeremey Wise wrote:
> > I would like to eventually ...
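Derek's start_vms.py itself is not posted in the thread; a rough shell sketch of the same idea, with the engine URL, credentials, and VM names all as placeholders, might look like:

    #!/bin/bash
    ENGINE="https://ovirte01.penguinpages.local/ovirt-engine/api"
    AUTH="admin@internal:secret"

    # Wait for the engine API to answer before trying to start anything
    until curl -k -s -o /dev/null -w '%{http_code}' -u "$AUTH" "$ENGINE" | grep -q 200; do
        sleep 10
    done

    # Start the infrastructure VMs in order
    for vm in ns01 ns02; do
        id=$(curl -k -s -u "$AUTH" "$ENGINE/vms?search=name%3D$vm" \
            | grep -o 'id="[^"]*"' | head -1 | sed 's/id="//;s/"//')
        curl -k -s -u "$AUTH" -X POST -H 'Content-Type: application/xml' \
            -d '<action/>' "$ENGINE/vms/$id/start"
    done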
>
> On Wed, September 30, 2020 11:21 am, Jeremey Wise wrote:
> > When I have to shut down the cluster (UPS runs out, etc.) I need a
> > sequenced set of just a small number of VMs to "autostart".
> >
> > Normally I just use a DNS FQDN to connect to the oVirt engine
Can anyone post a link (with examples, as most documentation for oVirt
lacks this) for how to power on a VM via CLI or API?
As of now I cannot log in to oVirt-Engine. No errors when I restart it.
https://lists.ovirt.org/archives/list/users@ovirt.org/thread/45KKF5TN5PRQ3R7MDOWIQTSYZXZRVDIZ/
When I have to shut down the cluster (UPS runs out, etc.) I need a sequenced
set of just a small number of VMs to "autostart".
Normally I just use a DNS FQDN to connect to the oVirt engine, but as two of
my VMs are a DNS HA cluster, as well as NTP / SMTP / DHCP etc., I need
those two infrastructure VMs up first.
I tried to post on the website but it did not seem to work, so sorry if
this is a double posting.
oVirt login this AM accepted my username and password but got a Java error.
I restarted the oVirt engine:
##
hosted-engine --set-maintenance --mode=global
hosted-engine --vm-shutdown
hosted-engine ...
... and it seems to be working.
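For reference, the full bounce sequence I would expect, assembled from the standard hosted-engine flags (the --vm-status step is just polled until the VM reports down):

    hosted-engine --set-maintenance --mode=global
    hosted-engine --vm-shutdown
    hosted-engine --vm-status    # repeat until the engine VM is down
    hosted-engine --vm-start
    hosted-engine --set-maintenance --mode=none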
On Mon, Sep 28, 2020 at 10:56 PM Jeremey Wise wrote:
I used pgAdmin connected to the oVirt-engine VM:
username: engine
password: (see /etc/ovirt-engine/engine.conf.d/10-setup-database.conf)
database: engine
Schemas -> Tables -> 153 tables (which look like what we find in the oVirt UI).
Searched around; no entry where it has 172.16.100.102 or 103 to ...
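The same check can be done from a shell on the engine VM with psql instead of pgAdmin. A sketch (table and column names from memory, so verify against the actual schema):

    # the engine DB password is in /etc/ovirt-engine/engine.conf.d/10-setup-database.conf
    su - postgres -c 'psql engine -c "SELECT vds_name, host_name FROM vds_static;"'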
> ... status' from all nodes?
>
> Best Regards,
> Strahil Nikolov
>
> On Saturday, September 26, 2020, 20:31:23 GMT+3, Jeremey Wise <
> jeremey.w...@gmail.com> wrote:
>
> I posted that I had wiped out the oVirt-engine...
... (gluster commands stop failing); it keeps
saying it can't do that to a clustered gluster node.
On Sat, Sep 26, 2020 at 1:41 PM Jeremey Wise wrote:
> Another note of color on this.
>
> I can't repair a brick, as Gluster calls bricks by hostname, and
> oVirt-engine now thinks ...
... Pre Validation failed on thorst_penguinpages_local_ brick:
172_16_100_103:/gluster_bricks/vmstore/vmstore does not exist in volume:
vmstore\nPre Validation failed on odinst_penguinpages_local_ brick:
172_16_100_103:/gluster_bricks/vmstore/vmstore does not exist in volume:
vmstore']
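When a brick stops matching because the peer is now known by a different name, gluster's reset-brick can re-register the same brick under the new name. A sketch with placeholder names (the thorst FQDN here is a guess; check "gluster volume reset-brick help" for your version, and back up first):

    gluster volume reset-brick vmstore \
        172_16_100_103:/gluster_bricks/vmstore/vmstore start
    gluster volume reset-brick vmstore \
        172_16_100_103:/gluster_bricks/vmstore/vmstore \
        thorst.penguinpages.local:/gluster_bricks/vmstore/vmstore commit force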
On Sat, Sep 26, 2020 at 1:27 PM Jeremey Wise wrote:
I posted that I had wiped out the oVirt-engine, running cleanup on all
three nodes, and done a re-deployment. Then I went to add the nodes back;
though all have entries for each other in /etc/hosts and ssh works fine via
short and long names,
I had to add the nodes back into the cluster via IP to get ...
As expected, this is a learning curve. On my three-node cluster, in an
attempt to learn how to do admin work on it and debug it, I have now
redeployed the engine and even added a second one on a node in the cluster.
But.
I now realize that my "production VMs" are gone.
In the past, on a manual ...
Trying to get the 3-node cluster back fully working and clear out all
the errors.
I noted that the HCI wizard, I think, SHOULD have deployed a hosted engine
on all the nodes, but this is not the case: only thor, the first node in the
cluster, has the hosted engine.
I tried to redeploy this via the ...
How, without a reboot of the hosting system, do I restart the oVirt engine?
# I tried below, but it does not seem to affect the virtual machine
[root@thor iso]# systemctl restart ov<tab>
ovirt-ha-agent.service  ovirt-imageio.service
ovn-controller.service
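Note the engine itself is not a host service: those ovirt-* units on thor are only the host-side agents. The engine proper runs inside the hosted-engine VM, so the restart happens there (the hostname is a placeholder):

    ssh root@ovirte01.penguinpages.local 'systemctl restart ovirt-engine'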
Trying to repair / clean up the HCI deployment so it is HA and ready for
"production".
I have gluster now showing three bricks, all green.
Now I just have an error on one node... and of course it is the node which is
hosting the ovirt-engine.
# (as I cannot send images to this forum... I will move to a breadcrumb ...
I just noticed that when the HCI setup built the gluster engine / data /
vmstore volumes, it did correctly use the definition of the 10Gb "back end"
interfaces / hosts.
But oVirt Engine is NOT referencing this:
it lists bricks on the 1Gb "management / host" interfaces. Is this a GUI
issue? I doubt this, and ...
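One way to see which endpoints gluster itself registered for the bricks, versus what the engine UI displays:

    # the brick lines show exactly which hostname/interface gluster uses
    gluster volume info vmstore | grep -i brick
    gluster peer status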
I saw notes that oVirt 4.4 may no longer support ISO images, but there are
times, like now, when I need to build from specific ISO images.
I tried a cycle of creating an 8GB image file and then doing dd if=blah.iso
of=/...
Created a new VM with this as the boot disk and it fails to boot... so back
to ...
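In 4.4 the usual route seems to be uploading the ISO straight to a data domain (Storage -> Disks -> Upload in the UI, or the SDK's upload_disk.py example). A sketch; the script path and flags vary by SDK version, so treat all of these as placeholders and check "upload_disk.py --help" first:

    python3 /usr/share/doc/python3-ovirt-engine-sdk4/examples/upload_disk.py \
        --engine-url https://ovirte01.penguinpages.local \
        --username admin@internal \
        --password-file /root/engine-password \
        --cafile /etc/pki/ovirt-engine/ca.pem \
        --sd-name data \
        blah.iso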
In oVirt Engine I think I see some of the issue.
When you go under Volumes -> Data,
it notes two servers; when you choose "add brick" it says the volume has 3
bricks but only two servers.
So I went back to my deployment notes and walked through setup:
yum install ...
> ... if you use only gluster it could be far easier to
> set:
>
> [root@ovirt conf.d]# cat /etc/multipath/conf.d/blacklist.conf
> blacklist {
>     devnode "*"
> }
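After dropping in a blacklist file like that, multipathd has to re-read its config and release the now-blacklisted maps; something like:

    systemctl reload multipathd
    multipath -F     # flush unused multipath maps
    multipath -ll    # verify the local disks no longer appear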
>
> Best Regards,
> Strahil Nikolov
>
> On Tuesday, September 22, 2020, 22:12:21 GMT+3, Nir ...
... proceed with the next one.
>
> Best Regards,
> Strahil Nikolov
>
> On Tuesday, September 22, 2020, 14:55:35 GMT+3, Jeremey Wise <
> jeremey.w...@gmail.com> wrote:
>
> I did.
>
> Here are all three nodes with restart. I fin...
> Have you restarted glusterd.service on the affected node?
> glusterd is just the management layer and it won't affect the brick
> processes.
>
> Best Regards,
> Strahil Nikolov
>
> On Tuesday, September 22, 2020, 01:43:36 GMT+3, Jeremey Wise <
Well, to know how to do it with curl is helpful... but I think I did:
[root@odin ~]# curl -s -k --user admin@internal:blahblah
https://ovirte01.penguinpages.local/ovirt-engine/api/storagedomains/ | grep
'<name>'
    <name>data</name>
    <name>hosted_storage</name>
    <name>ovirt-image-repository</name>
What I guess I did is ...
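For completeness, re-attaching an existing storage domain to a data center can be done over the same API. A sketch with placeholder credentials and a placeholder data-center ID:

    curl -s -k --user admin@internal:blahblah \
        -H 'Content-Type: application/xml' \
        -d '<storage_domain><name>data</name></storage_domain>' \
        https://ovirte01.penguinpages.local/ovirt-engine/api/datacenters/<DC_ID>/storagedomains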
> Strahil Nikolov
>
> On Monday, September 21, 2020, 09:04:28 GMT+3, Jeremey Wise <
> jeremey.w...@gmail.com> wrote:
>
> vdo: ERROR - Device /dev/sdc excluded by a filter
>
> Oth...
> On Monday, September 21, 2020, 20:53:15 GMT+3, Jeremey Wise <
> jeremey.w...@gmail.com> wrote:
>
> oVirt engine shows one of the gluster servers having an issue. I did a
> graceful shutdown of all three nodes over we...
oVirt engine shows one of the gluster servers having an issue. I did a
graceful shutdown of all three nodes over the weekend, as I have to move
around some power connections in prep for a UPS.
Came back up, but...
And this is reflected in 2 bricks online (should be three for ...
... server to rebuild a
hosting KVM system to import, and then use the oVirt-to-libvirt connection to
slurp the VM out.
Plus, that means any time someone sends me a tar of a qcow2 and XML, I have
to re-host it to export. :P
On Mon, Sep 21, 2020 at 8:18 AM Nir Soffer wrote:
> On Mon, Sep 21, 2020 at ...
... and get the GUI and setup polished.
My $0.002
On Mon, Sep 21, 2020 at 8:06 AM Nir Soffer wrote:
> On Mon, Sep 21, 2020 at 9:02 AM Jeremey Wise
> wrote:
> >
> > vdo: ERROR - Device /dev/sdc excluded by a filter
I rebuilt my lab environment, and there are four or five VMs that it would
really help if I did not have to rebuild.
oVirt, as I am now finding, when it creates infrastructure, sets it out such
that I cannot just use the older means of placing .qcow2 files in one folder
and .xml files in another.
vdo: ERROR - Device /dev/sdc excluded by a filter

Other server:
vdo: ERROR - Device
/dev/mapper/nvme.126f-4141303030303030303030303030303032343538-53504343204d2e32205043496520535344-0001p1
excluded by a filter.

All systems do this when I go to create a VDO volume.
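The "excluded by a filter" message comes from the LVM device filter that oVirt host deployment manages, and vdsm ships a helper that generates a filter matching the host's actual devices. A sketch of the triage:

    # see what filter is currently in place
    grep -n 'filter' /etc/lvm/lvm.conf

    # let vdsm propose (and optionally apply) a correct LVM filter for this host
    vdsm-tool config-lvm-filter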
Deployment on a three-node cluster using the oVirt HCI wizard.
I think this is a bug where it needs to either do a pre-flight name-length
validation or increase the valid field length.
I avoid using /dev/sd# as those can change, and the wizard allows changing
this to more explicit devices, e.g.:
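Listing the persistent names shows exactly the long strings that trip the wizard's field-length check:

    # stable names to feed the wizard instead of /dev/sd#
    ls -l /dev/disk/by-id/ | grep -v part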