[ovirt-users] CLI for HCI setup

2020-09-02 Thread Michael Thomas
Is there a CLI for setting up a hyperconverged environment with glusterfs? The docs that I've found detail how to do it using the cockpit interface[1], but I'd prefer to use a cli similar to 'hosted-engine --deploy' if it is available. Thanks, --Mike
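For context, the cockpit HCI wizard drives ansible playbooks under the hood, so a CLI-only deployment can in principle run the same playbooks directly and then hand off to `hosted-engine --deploy`. This is a sketch under assumptions: the playbook directory and file names come from the gluster-ansible packaging and may differ between versions.

```shell
# Assumed layout from the gluster-ansible packaging (verify locally):
cd /etc/ansible/roles/gluster.ansible/playbooks/hc-ansible-deployment
# Edit the inventory first to describe hosts, disks, and gluster volumes.
ansible-playbook -i gluster_inventory.yml hc_deployment.yml
# Once the gluster volumes are up, deploy the hosted engine onto them:
hosted-engine --deploy
```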

[ovirt-users] Re: Latest ManagedBlockDevice documentation

2020-10-15 Thread Michael Thomas
guessing that it's being accessed by the vdsm user? --Mike On 10/14/20 10:59 AM, Michael Thomas wrote: Hi Benny, You are correct, I tried attaching to a running VM (which failed), then tried booting a new VM using this disk (which also failed). I'll use the workaround in the bug report going

[ovirt-users] Re: Latest ManagedBlockDevice documentation

2020-10-13 Thread Michael Thomas
: When you ran `engine-setup` did you enable cinderlib preview (it will not be enabled by default)? It should handle the creation of the database automatically, if you didn't you can enable it by running: `engine-setup --reconfigure-optional-components` On Wed, Sep 30, 2020 at 1:58 AM Michael Thomas
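A minimal sketch of the re-enable step named above, run on the engine host (prompt wording varies by version):

```shell
# Re-run setup and opt in to the cinderlib/ManagedBlockDevice tech
# preview when prompted; the cinder database is then created for you.
engine-setup --reconfigure-optional-components
```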

[ovirt-users] Re: Latest ManagedBlockDevice documentation

2020-10-15 Thread Michael Thomas
On 10/15/20 10:19 AM, Jeff Bailey wrote: On 10/15/2020 10:01 AM, Michael Thomas wrote: Getting closer... I recreated the storage domain and added rbd_default_features=3 to ceph.conf.  Now I see the new disk being created with (what I think is) the correct set of features: # rbd info
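A sketch of the change being verified here, with placeholder pool/image names (placing the option in the `[client]` section is an assumption; any section the client reads works):

```shell
# In /etc/ceph/ceph.conf on hosts that map images with kernel rbd:
#   [client]
#   rbd default features = 3   # layering(1) + striping(2), kernel-rbd safe
# Then confirm a newly created image carries only those features:
rbd info <pool>/<image>
```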

[ovirt-users] Re: Latest ManagedBlockDevice documentation

2020-10-15 Thread Michael Thomas
On 10/15/20 11:27 AM, Jeff Bailey wrote: On 10/15/2020 12:07 PM, Michael Thomas wrote: On 10/15/20 10:19 AM, Jeff Bailey wrote: On 10/15/2020 10:01 AM, Michael Thomas wrote: Getting closer... I recreated the storage domain and added rbd_default_features=3 to ceph.conf.  Now I see the new

[ovirt-users] Re: Latest ManagedBlockDevice documentation

2020-10-14 Thread Michael Thomas
On 10/14/20 3:30 AM, Benny Zlotnik wrote: Jeff is right, it's a limitation of kernel rbd, the recommendation is to add `rbd default features = 3` to the configuration. I think there are plans to support rbd-nbd in cinderlib which would allow using additional features, but I'm not aware of

[ovirt-users] Re: Latest ManagedBlockDevice documentation

2020-10-14 Thread Michael Thomas
r this, but it is currently not possible. The options are indeed to either recreate the storage domain or edit the database table. [1] https://bugzilla.redhat.com/show_bug.cgi?id=1881832#c8 On Wed, Oct 14, 2020 at 3:40 PM Michael Thomas wrote:

[ovirt-users] ovirt 4.4 self-hosted deployment questions

2020-08-27 Thread Michael Thomas
I have not been able to find answers to a couple of questions in the self-hosted engine documentation[1]. * When installing a new Enterprise Linux host for ovirt, what are the network requirements? Specifically, am I supposed to set up the ovirtmgmt bridge myself on new hosts, or am I

[ovirt-users] Latest ManagedBlockDevice documentation

2020-09-29 Thread Michael Thomas
I'm looking for the latest documentation for setting up a Managed Block Device storage domain so that I can move some of my VM images to ceph rbd. I found this: https://ovirt.org/develop/release-management/features/storage/cinderlib-integration.html ...but it has a big note at the top that it

[ovirt-users] Re: Latest ManagedBlockDevice documentation

2020-09-29 Thread Michael Thomas
or.centos.org/centos/8/cloud/x86_64/openstack-ussuri/Packages/p/ On Tue, Sep 29, 2020 at 10:37 PM Michael Thomas wrote: I'm looking for the latest documentation for setting up a Managed Block Device storage domain so that I can move some of my VM images to ceph rbd. I found this: https://ovirt

[ovirt-users] Re: Latest ManagedBlockDevice documentation

2020-09-30 Thread Michael Thomas
handle the creation of the database automatically, if you didn't you can enable it by running: `engine-setup --reconfigure-optional-components` On Wed, Sep 30, 2020 at 1:58 AM Michael Thomas wrote: Hi Benny, Thanks for the confirmation. I've installed openstack-ussuri and ceph Octopus. Then I

[ovirt-users] Re: Node upgrade to 4.4

2020-09-23 Thread Michael Thomas
Not to give you any false hope, but when I recently reinstalled my oVirt 4.4.2 cluster, I left the gluster disks alone and only reformatted the OS disks. Much to my surprise, after running the oVirt HCI wizard on this new installation (using the exact same gluster settings as before), the

[ovirt-users] Re: Upgrade oVirt Host from 4.4.0 to 4.4.2 fails

2020-10-02 Thread Michael Thomas
This is a shot in the dark, but it's possible that your dnf command was running off of cached repo metadata. Try running 'dnf clean metadata' before 'dnf upgrade'. --Mike On 10/2/20 12:38 PM, Erez Zarum wrote: Hey, Bunch of hosts installed from oVirt Node Image, i have upgraded the
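The suggested sequence, as a sketch (dnf's `--refresh` switch is a one-transaction equivalent):

```shell
# Drop cached repo metadata so dnf re-reads the current repodata:
dnf clean metadata
dnf upgrade
# Equivalent shortcut: expire the metadata cache for this run only
dnf --refresh upgrade
```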

[ovirt-users] Re: oVirt 4.4 install fails

2020-05-28 Thread Michael Thomas
On 5/28/20 2:48 PM, Me wrote: Hi All Not sure where to start, but here goes. [...] Issue 2, I use FF 72.0.2 on Linux x64 to connect by https://hostname:9090 to the web interface, but I can't enter login details as the boxes (everything) are disabled There is no warning like "we don't

[ovirt-users] Re: First ovirt 4.4 installation failing

2020-06-07 Thread Michael Thomas
On 6/7/20 8:42 AM, Yedidyah Bar David wrote: On Sun, Jun 7, 2020 at 4:07 PM Michael Thomas wrote: On 6/7/20 5:01 AM, Yedidyah Bar David wrote: On Sat, Jun 6, 2020 at 8:42 PM Michael Thomas wrote: After a week of iterations, I finally found the problem. I was setting 'PermitRootLogin

[ovirt-users] Re: First ovirt 4.4 installation failing

2020-06-06 Thread Michael Thomas
After a week of iterations, I finally found the problem. I was setting 'PermitRootLogin no' in the global section of the bare metal OS sshd_config, as we do on all of our servers. Instead, PermitRootLogin is set to 'without-password' in a match block to allow root logins only from a
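A sketch of the sshd_config arrangement described, using a documentation-range CIDR as a placeholder for the real management network:

```shell
# Append to /etc/ssh/sshd_config: root login stays off globally, but
# key-based root login is allowed from the management network only.
cat >> /etc/ssh/sshd_config <<'EOF'
PermitRootLogin no

Match Address 192.0.2.0/24
    PermitRootLogin without-password
EOF
sshd -t && systemctl reload sshd   # validate the config before reloading
```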

[ovirt-users] Re: First ovirt 4.4 installation failing

2020-06-07 Thread Michael Thomas
On 6/7/20 5:01 AM, Yedidyah Bar David wrote: On Sat, Jun 6, 2020 at 8:42 PM Michael Thomas wrote: After a week of iterations, I finally found the problem. I was setting 'PermitRootLogin no' in the global section of the bare metal OS sshd_config, as we do on all of our servers. Instead

[ovirt-users] Re: First ovirt 4.4 installation failing

2020-06-08 Thread Michael Thomas
On 6/8/20 12:58 AM, Yedidyah Bar David wrote: On Sun, Jun 7, 2020 at 6:37 PM Michael Thomas wrote: On 6/7/20 8:42 AM, Yedidyah Bar David wrote: On Sun, Jun 7, 2020 at 4:07 PM Michael Thomas wrote: On 6/7/20 5:01 AM, Yedidyah Bar David wrote: On Sat, Jun 6, 2020 at 8:42 PM Michael Thomas

[ovirt-users] Re: oVirt 4.4 node via PXE and custom kickstart

2020-06-04 Thread Michael Thomas
To answer my own question: The 'liveimg' instruction in the kickstart file causes it to ignore any extra repos or packages that may be listed later. The workaround is to either create a new live image to install from, or manually create the repo files and install the packages in the %post
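A hypothetical kickstart `%post` sketch of the second workaround (the repo URL and package names are placeholders):

```shell
# In the kickstart file, after the 'liveimg' stanza. Since 'liveimg'
# ignores extra repo/package lines, install the extras here instead.
%post
cat > /etc/yum.repos.d/local-tools.repo <<'EOF'
[local-tools]
name=Local management and monitoring tools
baseurl=http://repo.example.com/tools/el8/
gpgcheck=0
EOF
dnf -y install puppet
%end
```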

[ovirt-users] oVirt 4.4 node via PXE and custom kickstart

2020-06-04 Thread Michael Thomas
I'm trying to customize a node install to include some local management and monitoring tools, starting with puppet, following the instructions here:

[ovirt-users] Add static route to ovirt nodes

2021-11-19 Thread Michael Thomas
I'm running ovirt 4.4.2 on CentOS 8.2. My ovirt nodes have two network addresses, ovirtmgmt and a second used for normal routed traffic to the cluster and WAN. After the ovirt nodes were set up, I found that I needed to add an extra static route to the cluster interface to allow the hosts to
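One way to persist such a route on an EL8 host is through NetworkManager; this is a sketch with a placeholder connection name, destination network, and gateway:

```shell
# Add a static route to the cluster-facing connection profile:
nmcli connection modify "cluster-iface" +ipv4.routes "10.20.0.0/16 192.168.10.1"
# Re-activate the profile so the route takes effect:
nmcli connection up "cluster-iface"
ip route show   # verify the new route is present
```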

[ovirt-users] Re: Lost space in /var/log

2022-06-22 Thread Michael Thomas
This can also happen with a misconfigured logrotate config. If a process is writing to a large log file, and logrotate comes along and removes it, then the process still has an open filehandle to the large file even though you can't see it. The space won't get removed until the process
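The effect is easy to reproduce; the sketch below simulates a daemon holding a removed log file open, and shows how to spot such files:

```shell
# Simulate a process keeping a removed log file open:
tmp=$(mktemp /var/tmp/biglog.XXXXXX)
exec 3>"$tmp"               # fd 3 stands in for the daemon's log handle
echo "log data" >&3
rm -f "$tmp"                # logrotate-style removal: the name is gone...
ls -l /proc/$$/fd/3         # ...but the descriptor shows "(deleted)"
# Find culprits system-wide: open files whose link count is zero:
lsof +L1 2>/dev/null | head
exec 3>&-                   # closing the handle finally frees the blocks
```

Restarting (or signaling) the writing process has the same effect as closing the descriptor, which is why the space "reappears" after a restart.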

[ovirt-users] onn pv

2024-02-01 Thread Michael Thomas
I think I may have just messed up my cluster. I'm running an older 4.4.2.6 cluster on CentOS-8 with 4 nodes and a self-hosted engine. I wanted to assemble the spare drives on 3 of the 4 nodes into a new gluster volume for extra VM storage. Unfortunately, I did not look closely enough at one