I think I may have just messed up my cluster.
I'm running an older 4.4.2.6 cluster on CentOS 8 with 4 nodes and a
self-hosted engine. I wanted to assemble the spare drives on 3 of the 4
nodes into a new gluster volume for extra VM storage.
Unfortunately, I did not look closely enough at one o
This can also happen with a misconfigured logrotate setup. If a
process is writing to a large log file and logrotate comes along and
removes it, the process still holds an open filehandle to the large
file even though you can't see it. The space won't be freed until
the process closes the filehandle.
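You can confirm this by listing files that are deleted but still held open, and reclaim the space without restarting the process by truncating the file through /proc. A minimal sketch; the PID and fd number are placeholders:

    # list open files whose link count is zero (deleted but still held open)
    lsof +L1
    # reclaim the space without killing the process (pid 1234, fd 3 are examples)
    : > /proc/1234/fd/3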
I'm running ovirt 4.4.2 on CentOS 8.2. My ovirt nodes have two network
addresses, ovirtmgmt and a second used for normal routed traffic to the
cluster and WAN.
After the ovirt nodes were set up, I found that I needed to add an extra
static route to the cluster interface to allow the hosts to s
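For reference, a persistent static route on an EL8 host can be added through NetworkManager. A sketch only; the connection name, network, and gateway below are placeholders:

    # route 10.20.0.0/24 via 192.168.1.254 on the cluster-facing connection
    nmcli connection modify cluster-net +ipv4.routes "10.20.0.0/24 192.168.1.254"
    nmcli connection up cluster-net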
On 10/15/20 11:27 AM, Jeff Bailey wrote:
On 10/15/2020 12:07 PM, Michael Thomas wrote:
On 10/15/20 10:19 AM, Jeff Bailey wrote:
On 10/15/2020 10:01 AM, Michael Thomas wrote:
Getting closer...
I recreated the storage domain and added rbd_default_features=3 to
ceph.conf. Now I see the new
On 10/15/20 10:19 AM, Jeff Bailey wrote:
On 10/15/2020 10:01 AM, Michael Thomas wrote:
Getting closer...
I recreated the storage domain and added rbd_default_features=3 to
ceph.conf. Now I see the new disk being created with (what I think
is) the correct set of features:
# rbd info
/tmp/brickrbd_nwc3kywk
...and I'm guessing that it's being accessed by the vdsm user?
--Mike
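For context, the rbd feature value is a bitmask, and 3 enables layering (1) plus striping (2), which the kernel rbd client can handle. A sketch of the client-side change and a verification step; the pool and volume names are examples:

    # /etc/ceph/ceph.conf on the hosts
    [global]
    rbd_default_features = 3

    # verify the features on a newly created volume
    rbd info rbd.ovirt.data/volume-1234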
On 10/14/20 10:59 AM, Michael Thomas wrote:
Hi Benny,
You are correct, I tried attaching to a running VM (which failed), then
tried booting a new VM using this disk (which also failed).
> You can submit an RFE for this, but it is currently
> not possible. The options are indeed to either recreate the storage
> domain or edit the database table
>
> [1] https://bugzilla.redhat.com/show_bug.cgi?id=1881832#c8
> On Wed, Oct 14, 2020 at 3:40 PM Michae
On 10/14/20 3:30 AM, Benny Zlotnik wrote:
Jeff is right, it's a limitation of kernel rbd, the recommendation is
to add `rbd default features = 3` to the configuration. I think there
are plans to support rbd-nbd in cinderlib which would allow using
additional features, but I'm not aware of anythin
flags:
create_timestamp: Tue Oct 13 06:53:55 2020
access_timestamp: Tue Oct 13 06:53:55 2020
modify_timestamp: Tue Oct 13 06:53:55 2020
Where else can I look to see where it's failing?
--Mike
On 9/30/20 2:19 AM, Benny Zlotnik wrote:
When you ran `engine-setu
On 9/30/20 2:19 AM, Benny Zlotnik wrote:
When you ran `engine-setup` did you enable cinderlib preview (it will
not be enabled by default)?
It should handle the creation of the database automatically, if you
didn't you can enable it by running:
`engine-setup --reconfigure-optional-components`
On Wed, Sep
This is a shot in the dark, but it's possible that your dnf command was
running off of cached repo metadata.
Try running 'dnf clean metadata' before 'dnf upgrade'.
--Mike
On 10/2/20 12:38 PM, Erez Zarum wrote:
Hey,
A bunch of hosts were installed from the oVirt Node image; I have upgraded the self-hoste
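For completeness, the sequence on the affected host would be:

    dnf clean metadata   # discard cached repository metadata
    dnf upgrade          # re-resolve against fresh metadata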
On 9/30/20 2:19 AM, Benny Zlotnik wrote:
When you ran `engine-setup` did you enable cinderlib preview (it will
not be enabled by default)?
It should handle the creation of the database automatically, if you
didn't you can enable it by running:
`engine-setup --reconfigure-optional-components`
On Wed, Sep 30, 2020 at 1:58 AM M
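If it helps, the relevant prompt during that run looks roughly like the following; the exact wording is from memory and may differ between versions:

    # engine-setup --reconfigure-optional-components
    ...
    Configure Cinderlib integration (Currently in tech preview) (Yes, No) [No]: Yes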
os-brick. The rest of
the information is valid.
[1] http://mirror.centos.org/centos/8/cloud/x86_64/openstack-ussuri/Packages/p/
On Tue, Sep 29, 2020 at 10:37 PM Michael Thomas wrote:
I'm looking for the latest documentation for setting up a Managed Block
Device storage domain so that I ca
I'm looking for the latest documentation for setting up a Managed Block
Device storage domain so that I can move some of my VM images to ceph rbd.
I found this:
https://ovirt.org/develop/release-management/features/storage/cinderlib-integration.html
...but it has a big note at the top that it
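For anyone following along, a Managed Block Storage domain is configured in the Admin Portal via cinder driver options. A sketch of plausible values for an rbd backend; the pool, user, and keyring path are assumptions, not taken from this thread:

    volume_driver=cinder.volume.drivers.rbd.RBDDriver
    rbd_ceph_conf=/etc/ceph/ceph.conf
    rbd_pool=ovirt-volumes
    rbd_user=ovirt
    rbd_keyring_conf=/etc/ceph/ceph.client.ovirt.keyring
    use_multipath_for_image_xfer=true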
Not to give you any false hope, but when I recently reinstalled my oVirt
4.4.2 cluster, I left the gluster disks alone and only reformatted the
OS disks. Much to my surprise, after running the oVirt HCI wizard on
this new installation (using the exact same gluster settings as before),
the orig
Is there a CLI for setting up a hyperconverged environment with
glusterfs? The docs that I've found detail how to do it using the
cockpit interface [1], but I'd prefer to use a CLI similar to
'hosted-engine --deploy' if one is available.
Thanks,
--Mike
[1] https://www.ovirt.org/documentation/gl
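There may be no single-command equivalent of the wizard, but the gluster side can be prepared by hand before running 'hosted-engine --deploy'. A rough sketch for a replica-3 volume; hostnames and brick paths are examples:

    gluster peer probe host2.example.com
    gluster peer probe host3.example.com
    gluster volume create engine replica 3 \
        host1.example.com:/gluster_bricks/engine/engine \
        host2.example.com:/gluster_bricks/engine/engine \
        host3.example.com:/gluster_bricks/engine/engine
    gluster volume start engine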
I have not been able to find answers to a couple of questions in the
self-hosted engine documentation[1].
* When installing a new Enterprise Linux host for ovirt, what are the
network requirements? Specifically, am I supposed to set up the
ovirtmgmt bridge myself on new hosts, or am I suppose
On 6/8/20 12:58 AM, Yedidyah Bar David wrote:
On Sun, Jun 7, 2020 at 6:37 PM Michael Thomas wrote:
On 6/7/20 8:42 AM, Yedidyah Bar David wrote:
On Sun, Jun 7, 2020 at 4:07 PM Michael Thomas wrote:
On 6/7/20 5:01 AM, Yedidyah Bar David wrote:
On Sat, Jun 6, 2020 at 8:42 PM Michael Thomas
On 6/7/20 8:42 AM, Yedidyah Bar David wrote:
On Sun, Jun 7, 2020 at 4:07 PM Michael Thomas wrote:
On 6/7/20 5:01 AM, Yedidyah Bar David wrote:
On Sat, Jun 6, 2020 at 8:42 PM Michael Thomas wrote:
After a week of iterations, I finally found the problem. I was setting
'PermitRootLog
On 6/7/20 5:01 AM, Yedidyah Bar David wrote:
On Sat, Jun 6, 2020 at 8:42 PM Michael Thomas wrote:
After a week of iterations, I finally found the problem. I was setting
'PermitRootLogin no' in the global section of the bare metal OS sshd_config, as
we do on all of our servers
After a week of iterations, I finally found the problem. I was setting
'PermitRootLogin no' in the global section of the bare metal OS sshd_config, as
we do on all of our servers. Instead, PermitRootLogin is set to
'without-password' in a match block to allow root logins only from a well-known
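The resulting sshd_config pattern looks like this; the source address is a placeholder:

    # global default: no root logins
    PermitRootLogin no

    # key-based root login allowed only from a trusted source
    Match Address 192.168.1.10
        PermitRootLogin without-password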
To answer my own question: The 'liveimg' instruction in the kickstart file
causes the installer to ignore any extra repos or packages that may be listed later. The
workaround is to either create a new live image to install from, or manually
create the repo files and install the packages in the %post section.
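A minimal sketch of that workaround in the kickstart; the image URL, repo URL, and package are placeholders:

    liveimg --url=http://example.com/ovirt-node-ng-image.squashfs.img

    %post
    # repos listed in the kickstart are ignored with liveimg, so add them by hand
    cat > /etc/yum.repos.d/puppet.repo <<'EOF'
    [puppet]
    name=Puppet
    baseurl=http://yum.example.com/puppet/el/8/x86_64/
    enabled=1
    gpgcheck=0
    EOF
    dnf install -y puppet-agent
    %end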
I'm trying to customize a node install to include some local management and
monitoring tools, starting with puppet, following the instructions here:
https://ovirt.org/documentation/installing_ovirt_as_a_self-hosted_engine_using_the_cockpit_web_interface/#Advanced_RHVH_Install_SHE_cockpit_deploy
On 5/28/20 2:48 PM, Me wrote:
Hi All
Not sure where to start, but here goes.
[...]
Issue 2: I use FF 72.0.2 on Linux x64 to connect via
https://hostname:9090 to the web interface, but I can't enter login
details as the boxes (everything) are disabled. There is no warning
like "we don't like