[ovirt-users] onn pv

2024-02-01 Thread Michael Thomas

I think I may have just messed up my cluster.

I'm running an older 4.4.2.6 cluster on CentOS-8 with 4 nodes and a 
self-hosted engine. I wanted to assemble the spare drives on 3 of the 4 
nodes into a new gluster volume for extra VM storage.


Unfortunately, I did not look closely enough at one of the nodes before 
running sfdisk+parted+pvcreate, and now it looks like I may have broken 
my onn storage.  pvs shows missing uuids:


# pvs
  WARNING: Couldn't find device with uuid RgTaWg-fR1T-J3Nv-uh03-ZTi5-jz9X-cjl1lo.
  WARNING: Couldn't find device with uuid 0l9CFI-Z7pP-x1P8-AJ78-gRoz-ql0e-2gzXsC.
  WARNING: Couldn't find device with uuid fl73h2-ztyn-y9NY-4TF4-K2Pd-G2Ow-vH46yH.
  WARNING: VG onn_ovirt1 is missing PV RgTaWg-fR1T-J3Nv-uh03-ZTi5-jz9X-cjl1lo (last written to /dev/nvme0n1p3).
  WARNING: VG onn_ovirt1 is missing PV 0l9CFI-Z7pP-x1P8-AJ78-gRoz-ql0e-2gzXsC (last written to /dev/nvme1n1p1).
  WARNING: VG onn_ovirt1 is missing PV fl73h2-ztyn-y9NY-4TF4-K2Pd-G2Ow-vH46yH (last written to /dev/nvme2n1p1).
  PV             VG                   Fmt  Attr PSize     PFree
  /dev/md2       vg00                 lvm2 a--   <928.80g       0
  /dev/nvme2n1p1 gluster_vg_nvme2n1p1 lvm2 a--      2.91t       0
  /dev/nvme3n1p1 onn_ovirt1           lvm2 a--      2.91t       0
  [unknown]      onn_ovirt1           lvm2 a-m    929.92g 100.00g
  [unknown]      onn_ovirt1           lvm2 a-m   <931.51g       0
  [unknown]      onn_ovirt1           lvm2 a-m      2.91t       0
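
For what it's worth, I understand the usual LVM recovery path -- when the 
missing disk has not actually been overwritten -- is to restore the PV label 
and VG metadata from the automatic backups under /etc/lvm/archive.  A sketch 
only (I have not run it here, since in my case the disk really was 
repurposed; the archive file name below is a placeholder):

# list the metadata backups for the VG and pick the newest one
vgcfgrestore --list onn_ovirt1
# recreate the PV with its old uuid, then restore the VG metadata
pvcreate --uuid RgTaWg-fR1T-J3Nv-uh03-ZTi5-jz9X-cjl1lo \
         --restorefile /etc/lvm/archive/onn_ovirt1_NNNNN.vg /dev/nvme0n1p3
vgcfgrestore -f /etc/lvm/archive/onn_ovirt1_NNNNN.vg onn_ovirt1
vgchange -ay onn_ovirt1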


Here's what I don't understand:

* This onn volume group only existed on one of the 4 nodes.  I expected 
it would have been on all 4?


* lsblk and /etc/fstab don't show any reference to onn

* What is the ONN volume group used for, and how bad is it if it's now 
missing?  I note that my VMs all continue to run and I've been able to 
migrate them off of this affected node with no apparent problems.


* Is it possible that this onn volume group was already broken before I 
messed with the nvme3n1 disk?  When ovirt was originally installed 
several years ago, I went through the install process multiple times and 
might not have cleaned up properly each time.


--Mike


[ovirt-users] Re: Lost space in /var/log

2022-06-22 Thread Michael Thomas
This can also happen with a misconfigured logrotate setup.  If a 
process is writing to a large log file and logrotate comes along and 
removes it, the process still has an open filehandle to the large 
file even though you can no longer see it in the directory.  The space 
won't be reclaimed until the process closes the filehandle (e.g. when the 
process is restarted or the host is rebooted).


The following command should give you a list of these ghost files that 
are still open but have been removed from the directory tree:


lsof | grep '(deleted)'

Stackexchange has some additional useful tips:

https://unix.stackexchange.com/questions/68523/find-and-remove-large-files-that-are-open-but-have-been-deleted
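
If restarting the owning process (or rebooting) isn't convenient, the space 
can usually be reclaimed right away by truncating the deleted file through 
the process's file descriptor.  PID and FD below are placeholders for the 
values that lsof prints:

# empty the (deleted) file that PID still holds open on descriptor FD
: > /proc/PID/fd/FD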

--Mike

On 6/22/22 15:49, matthew.st...@fujitsu.com wrote:

Deleted some files to "clean up" /var/log, but the space was not recovered?

Space for deleted files is only recovered when all references to that space 
are removed.  This includes the directory reference and any open file-handles 
to the file.

This is a popular trick for a self-deleting scratch file.  Create a file, open 
a file-handle to it, and then remove the file.  The open file-handle can then be 
written to, and read from, for as long as you like, but when the file-handle is 
closed, explicitly or implicitly, the space is automatically recovered.
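
A minimal shell illustration of the trick (the file name is just an example):

exec 3<> /tmp/scratch.$$      # create the file and keep fd 3 open on it
rm /tmp/scratch.$$            # unlink it; the blocks stay allocated
echo "still writable" >&3     # fd 3 keeps working with no directory entry
exec 3>&-                     # close the fd; now the space is reclaimed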


-Original Message-
From: Andrei Verovski 
Sent: Wednesday, June 22, 2022 3:27 PM
To: users@ovirt.org
Subject: [ovirt-users] Lost space in /var/log

Hi !

I have a strange situation with low disk space on /var/log, yet I can't figure 
out what has consumed so much space.
Look at the du output below.

Thanks in advance for any suggestion(s).


# df -h

Filesystem   Size  Used Avail Use% Mounted on
/dev/mapper/centos-var   15G  1.3G   13G   9% /var
/dev/mapper/centos-var_log   7.8G  7.1G  293M  97% /var/log
/dev/mapper/centos-var_log_audit 2.0G   41M  1.8G   3% /var/log/audit

#
# du -ah /var/log | sort -n -r | head -n 20

664K    /var/log/vdsm
660K    /var/log/vdsm/vdsm.log
624K    /var/log/gdm
580K    /var/log/anaconda/storage.log
480K    /var/log/anaconda/packaging.log
380K    /var/log/gdm/:0.log.4
316K    /var/log/anaconda/syslog
276K    /var/log/tuned
252K    /var/log/libvirt/qemu/NextCloud-Collabora-LVM.log-20210806
220K    /var/log/Xorg.0.log
168K    /var/log/gdm/:0.log
156K    /var/log/yum.log-20191117
156K    /var/log/secure-20180726
132K    /var/log/libvirt/qemu/NextCloud-Collabora-Active.log
128K    /var/log/anaconda/anaconda.log
120K    /var/log/hp-snmp-agents
116K    /var/log/hp-snmp-agents/cma.log
112K    /var/log/vinchin
104K    /var/log/vinchin/kvm_backup_service
104K    /var/log/tuned/tuned.log.2

#
# find . -printf '%s %p\n'| sort -nr | head -10

16311686 ./rhsm/rhsm.log
4070423 ./cron-20180726
3667146 ./anaconda/journal.log
3409071 ./secure
300 ./rhsm/rhsm.log-20180726
2912670 ./audit/audit.log
1418007 ./sanlock.log
1189580 ./vdsm/vdsm.log
592718 ./anaconda/storage.log
487567 ./anaconda/packaging.log


[ovirt-users] Add static route to ovirt nodes

2021-11-19 Thread Michael Thomas
I'm running ovirt 4.4.2 on CentOS 8.2.  My ovirt nodes each have two network 
interfaces: one on ovirtmgmt, and a second used for normal routed traffic to 
the cluster and WAN.


After the ovirt nodes were set up, I found that I needed to add an extra 
static route to the cluster interface to allow the hosts to see my ceph 
storage nodes (to make the rbd images visible to the VMs):


10.9.0.0/16 via 10.13.0.1

I can add this route using three different methods:

1) ip route add 10.9.0.0/16 via 10.13.0.1

2) nmcli conn modify enp65s0f0 ipv4.routes "10.9.0.0/16 10.13.0.1"
   nmcli conn down enp65s0f0
   nmcli conn up enp65s0f0

3) vi /etc/sysconfig/network-scripts/route-enp65s0f0
   ifdown enp65s0f0
   ifup enp65s0f0

However, when I reboot the host, the static route goes away.  Methods 2 
and 3 have always given me a persistent static route on other EL8 hosts, 
but not on my ovirt nodes.
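
One generic workaround (not oVirt-specific, and I haven't verified it on 
these nodes) would be a NetworkManager dispatcher script that re-adds the 
route whenever the interface comes up, roughly:

# /etc/NetworkManager/dispatcher.d/99-ceph-route  (make it executable)
#!/bin/bash
# $1 = interface name, $2 = action
if [ "$1" = "enp65s0f0" ] && [ "$2" = "up" ]; then
    ip route replace 10.9.0.0/16 via 10.13.0.1
fi

But that feels like papering over whatever is removing the route in the 
first place.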


What is the correct way to add a persistent static route on an ovirt host?

--Mike


[ovirt-users] Re: Latest ManagedBlockDevice documentation

2020-10-15 Thread Michael Thomas

On 10/15/20 11:27 AM, Jeff Bailey wrote:


On 10/15/2020 12:07 PM, Michael Thomas wrote:

On 10/15/20 10:19 AM, Jeff Bailey wrote:


On 10/15/2020 10:01 AM, Michael Thomas wrote:

Getting closer...

I recreated the storage domain and added rbd_default_features=3 to 
ceph.conf.  Now I see the new disk being created with (what I think 
is) the correct set of features:


# rbd info rbd.ovirt.data/volume-f4ac68c6-e5f7-4b01-aed0-36a55b901fbf
rbd image 'volume-f4ac68c6-e5f7-4b01-aed0-36a55b901fbf':
    size 100 GiB in 25600 objects
    order 22 (4 MiB objects)
    snapshot_count: 0
    id: 70aab541cb331
    block_name_prefix: rbd_data.70aab541cb331
    format: 2
    features: layering
    op_features:
    flags:
    create_timestamp: Thu Oct 15 06:53:23 2020
    access_timestamp: Thu Oct 15 06:53:23 2020
    modify_timestamp: Thu Oct 15 06:53:23 2020

However, I'm still unable to attach the disk to a VM.  This time 
it's a permissions issue on the ovirt node where the VM is running. 
It looks like it can't read the temporary ceph config file that is 
sent over from the engine:



Are you using octopus?  If so, the config file that's generated is 
missing the "[global]" at the top and octopus doesn't like that. It's 
been patched upstream.


Yes, I am using Octopus (15.2.4).  Do you have a pointer to the 
upstream patch or issue so that I can watch for a release with the fix?



https://bugs.launchpad.net/cinder/+bug/1865754


And for anyone playing along at home, I was able to map this back to the 
openstack ticket:


https://review.opendev.org/#/c/730376/

It's a simple fix.  I just changed line 100 of 
/usr/lib/python3.6/site-packages/os_brick/initiator/connectors/rbd.py to:


conf_file.writelines(["[global]", "\n", mon_hosts, "\n", keyring, "\n"])


After applying this patch, I was finally able to attach my ceph block 
device to a running VM.  I've now got virtually unlimited data storage 
for my VMs.  Many thanks to you and Benny for the help!


--Mike


[ovirt-users] Re: Latest ManagedBlockDevice documentation

2020-10-15 Thread Michael Thomas

On 10/15/20 10:19 AM, Jeff Bailey wrote:


On 10/15/2020 10:01 AM, Michael Thomas wrote:

Getting closer...

I recreated the storage domain and added rbd_default_features=3 to 
ceph.conf.  Now I see the new disk being created with (what I think 
is) the correct set of features:


# rbd info rbd.ovirt.data/volume-f4ac68c6-e5f7-4b01-aed0-36a55b901fbf
rbd image 'volume-f4ac68c6-e5f7-4b01-aed0-36a55b901fbf':
    size 100 GiB in 25600 objects
    order 22 (4 MiB objects)
    snapshot_count: 0
    id: 70aab541cb331
    block_name_prefix: rbd_data.70aab541cb331
    format: 2
    features: layering
    op_features:
    flags:
    create_timestamp: Thu Oct 15 06:53:23 2020
    access_timestamp: Thu Oct 15 06:53:23 2020
    modify_timestamp: Thu Oct 15 06:53:23 2020

However, I'm still unable to attach the disk to a VM.  This time it's 
a permissions issue on the ovirt node where the VM is running.  It 
looks like it can't read the temporary ceph config file that is sent 
over from the engine:



Are you using octopus?  If so, the config file that's generated is 
missing the "[global]" at the top and octopus doesn't like that.  It's 
been patched upstream.


Yes, I am using Octopus (15.2.4).  Do you have a pointer to the upstream 
patch or issue so that I can watch for a release with the fix?


--Mike


[ovirt-users] Re: Latest ManagedBlockDevice documentation

2020-10-15 Thread Michael Thomas

Getting closer...

I recreated the storage domain and added rbd_default_features=3 to 
ceph.conf.  Now I see the new disk being created with (what I think is) 
the correct set of features:


# rbd info rbd.ovirt.data/volume-f4ac68c6-e5f7-4b01-aed0-36a55b901fbf
rbd image 'volume-f4ac68c6-e5f7-4b01-aed0-36a55b901fbf':
size 100 GiB in 25600 objects
order 22 (4 MiB objects)
snapshot_count: 0
id: 70aab541cb331
block_name_prefix: rbd_data.70aab541cb331
format: 2
features: layering
op_features:
flags:
create_timestamp: Thu Oct 15 06:53:23 2020
access_timestamp: Thu Oct 15 06:53:23 2020
modify_timestamp: Thu Oct 15 06:53:23 2020
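
For reference, the ceph.conf change I made above is just this, set 
cluster-wide in the [global] section (shown as an excerpt, not the full file):

[global]
rbd_default_features = 3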

However, I'm still unable to attach the disk to a VM.  This time it's a 
permissions issue on the ovirt node where the VM is running.  It looks 
like it can't read the temporary ceph config file that is sent over from 
the engine:


https://pastebin.com/pGjMTvcn

The file '/tmp/brickrbd_nwc3kywk' on the ovirt node is only accessible 
by root:


[root@ovirt4 ~]# ls -l /tmp/brickrbd_nwc3kywk
-rw---. 1 root root 146 Oct 15 07:25 /tmp/brickrbd_nwc3kywk

...and I'm guessing that it's being accessed by the vdsm user?

--Mike

On 10/14/20 10:59 AM, Michael Thomas wrote:

Hi Benny,

You are correct, I tried attaching to a running VM (which failed), then
tried booting a new VM using this disk (which also failed).  I'll use
the workaround in the bug report going forward.

I'll just recreate the storage domain, since at this point I have
nothing in it to lose.

Regards,

--Mike

On 10/14/20 9:32 AM, Benny Zlotnik wrote:

Did you attempt to start a VM with this disk and it failed, or you
didn't try at all? If it's the latter then the error is strange...
If it's the former, there is a known issue with multipath at the
moment; see [1] for a workaround.  You might have issues detaching
volumes later, because multipath grabs the rbd devices, which makes
`rbd unmap` fail.  It will be fixed soon by automatically
blacklisting rbd in the multipath configuration.

Regarding editing, you can submit an RFE for this, but it is currently
not possible. The options are indeed to either recreate the storage
domain or edit the database table


[1] https://bugzilla.redhat.com/show_bug.cgi?id=1881832#c8




On Wed, Oct 14, 2020 at 3:40 PM Michael Thomas  wrote:


On 10/14/20 3:30 AM, Benny Zlotnik wrote:

Jeff is right, it's a limitation of kernel rbd, the recommendation is
to add `rbd default features = 3` to the configuration. I think there
are plans to support rbd-nbd in cinderlib which would allow using
additional features, but I'm not aware of anything concrete.

Additionally, the path for the cinderlib log is
/var/log/ovirt-engine/cinderlib/cinderlib.log, the error in this case
would appear in the vdsm.log on the relevant host, and would look
something like "RBD image feature set mismatch. You can disable
features unsupported by the kernel with 'rbd feature disable'"


Thanks for the pointer!  Indeed,
/var/log/ovirt-engine/cinderlib/cinderlib.log has the errors that I was
looking for.  In this case, it was a user error entering the RBDDriver
options:


2020-10-13 15:15:25,640 - cinderlib.cinderlib - WARNING - Unknown config
option use_multipath_for_xfer

...it should have been 'use_multipath_for_image_xfer'.

Now my attempts to fix it are failing...  If I go to 'Storage -> Storage
Domains -> Manage Domain', all driver options are uneditable except for
'Name'.

Then I thought that maybe I can't edit the driver options while a disk
still exists, so I tried removing the one disk in this domain.  But even
after multiple attempts, it still fails with:

2020-10-14 07:26:31,340 - cinder.volume.drivers.rbd - INFO - volume
volume-5419640e-445f-4b3f-a29d-b316ad031b7a no longer exists in backend
2020-10-14 07:26:31,353 - cinderlib-client - ERROR - Failure occurred
when trying to run command 'delete_volume': (psycopg2.IntegrityError)
update or delete on table "volumes" violates foreign key constraint
"volume_attachment_volume_id_fkey" on table "volume_attachment"
DETAIL:  Key (id)=(5419640e-445f-4b3f-a29d-b316ad031b7a) is still
referenced from table "volume_attachment".

See https://pastebin.com/KwN1Vzsp for the full log entries related to
this removal.

It's not lying, the volume no longer exists in the rbd pool, but the
cinder database still thinks it's attached, even though I was never able
to get it to attach to a VM.

What are my options for cleaning up this stale disk in the cinder database?

How can I update the driver options in my storage domain (deleting and
recreating the domain is acceptable, if possible)?

--Mike





[ovirt-users] Re: Latest ManagedBlockDevice documentation

2020-10-14 Thread Michael Thomas
Hi Benny,

You are correct, I tried attaching to a running VM (which failed), then
tried booting a new VM using this disk (which also failed).  I'll use
the workaround in the bug report going forward.

I'll just recreate the storage domain, since at this point I have
nothing in it to lose.

Regards,

--Mike

On 10/14/20 9:32 AM, Benny Zlotnik wrote:
> Did you attempt to start a VM with this disk and it failed, or you
> didn't try at all? If it's the latter then the error is strange...
> If it's the former, there is a known issue with multipath at the
> moment; see [1] for a workaround.  You might have issues detaching
> volumes later, because multipath grabs the rbd devices, which makes
> `rbd unmap` fail.  It will be fixed soon by automatically
> blacklisting rbd in the multipath configuration.
> 
> Regarding editing, you can submit an RFE for this, but it is currently
> not possible. The options are indeed to either recreate the storage
> domain or edit the database table
> 
> 
> [1] https://bugzilla.redhat.com/show_bug.cgi?id=1881832#c8
> 
> 
> 
> 
> On Wed, Oct 14, 2020 at 3:40 PM Michael Thomas  wrote:
>>
>> On 10/14/20 3:30 AM, Benny Zlotnik wrote:
>>> Jeff is right, it's a limitation of kernel rbd, the recommendation is
>>> to add `rbd default features = 3` to the configuration. I think there
>>> are plans to support rbd-nbd in cinderlib which would allow using
>>> additional features, but I'm not aware of anything concrete.
>>>
>>> Additionally, the path for the cinderlib log is
>>> /var/log/ovirt-engine/cinderlib/cinderlib.log, the error in this case
>>> would appear in the vdsm.log on the relevant host, and would look
>>> something like "RBD image feature set mismatch. You can disable
>>> features unsupported by the kernel with 'rbd feature disable'"
>>
>> Thanks for the pointer!  Indeed,
>> /var/log/ovirt-engine/cinderlib/cinderlib.log has the errors that I was
>> looking for.  In this case, it was a user error entering the RBDDriver
>> options:
>>
>>
>> 2020-10-13 15:15:25,640 - cinderlib.cinderlib - WARNING - Unknown config
>> option use_multipath_for_xfer
>>
>> ...it should have been 'use_multipath_for_image_xfer'.
>>
>> Now my attempts to fix it are failing...  If I go to 'Storage -> Storage
>> Domains -> Manage Domain', all driver options are uneditable except for
>> 'Name'.
>>
>> Then I thought that maybe I can't edit the driver options while a disk
>> still exists, so I tried removing the one disk in this domain.  But even
>> after multiple attempts, it still fails with:
>>
>> 2020-10-14 07:26:31,340 - cinder.volume.drivers.rbd - INFO - volume
>> volume-5419640e-445f-4b3f-a29d-b316ad031b7a no longer exists in backend
>> 2020-10-14 07:26:31,353 - cinderlib-client - ERROR - Failure occurred
>> when trying to run command 'delete_volume': (psycopg2.IntegrityError)
>> update or delete on table "volumes" violates foreign key constraint
>> "volume_attachment_volume_id_fkey" on table "volume_attachment"
>> DETAIL:  Key (id)=(5419640e-445f-4b3f-a29d-b316ad031b7a) is still
>> referenced from table "volume_attachment".
>>
>> See https://pastebin.com/KwN1Vzsp for the full log entries related to
>> this removal.
>>
>> It's not lying, the volume no longer exists in the rbd pool, but the
>> cinder database still thinks it's attached, even though I was never able
>> to get it to attach to a VM.
>>
>> What are my options for cleaning up this stale disk in the cinder database?
>>
>> How can I update the driver options in my storage domain (deleting and
>> recreating the domain is acceptable, if possible)?
>>
>> --Mike
>>
> 


[ovirt-users] Re: Latest ManagedBlockDevice documentation

2020-10-14 Thread Michael Thomas

On 10/14/20 3:30 AM, Benny Zlotnik wrote:

Jeff is right, it's a limitation of kernel rbd, the recommendation is
to add `rbd default features = 3` to the configuration. I think there
are plans to support rbd-nbd in cinderlib which would allow using
additional features, but I'm not aware of anything concrete.

Additionally, the path for the cinderlib log is
/var/log/ovirt-engine/cinderlib/cinderlib.log, the error in this case
would appear in the vdsm.log on the relevant host, and would look
something like "RBD image feature set mismatch. You can disable
features unsupported by the kernel with 'rbd feature disable'"


Thanks for the pointer!  Indeed, 
/var/log/ovirt-engine/cinderlib/cinderlib.log has the errors that I was 
looking for.  In this case, it was a user error entering the RBDDriver 
options:



2020-10-13 15:15:25,640 - cinderlib.cinderlib - WARNING - Unknown config 
option use_multipath_for_xfer


...it should have been 'use_multipath_for_image_xfer'.

Now my attempts to fix it are failing...  If I go to 'Storage -> Storage 
Domains -> Manage Domain', all driver options are uneditable except for 
'Name'.


Then I thought that maybe I can't edit the driver options while a disk 
still exists, so I tried removing the one disk in this domain.  But even 
after multiple attempts, it still fails with:


2020-10-14 07:26:31,340 - cinder.volume.drivers.rbd - INFO - volume 
volume-5419640e-445f-4b3f-a29d-b316ad031b7a no longer exists in backend
2020-10-14 07:26:31,353 - cinderlib-client - ERROR - Failure occurred 
when trying to run command 'delete_volume': (psycopg2.IntegrityError) 
update or delete on table "volumes" violates foreign key constraint 
"volume_attachment_volume_id_fkey" on table "volume_attachment"
DETAIL:  Key (id)=(5419640e-445f-4b3f-a29d-b316ad031b7a) is still 
referenced from table "volume_attachment".


See https://pastebin.com/KwN1Vzsp for the full log entries related to 
this removal.


It's not lying, the volume no longer exists in the rbd pool, but the 
cinder database still thinks it's attached, even though I was never able 
to get it to attach to a VM.


What are my options for cleaning up this stale disk in the cinder database?

How can I update the driver options in my storage domain (deleting and 
recreating the domain is acceptable, if possible)?


--Mike


[ovirt-users] Re: Latest ManagedBlockDevice documentation

2020-10-13 Thread Michael Thomas
To verify that it's not a cephx permission issue, I tried accessing the 
block storage from both the engine and the ovirt node using the 
credentials I set up in the ManagedBlockStorage setup page:


[root@ovirt4]# rbd --id ovirt ls rbd.ovirt.data
volume-5419640e-445f-4b3f-a29d-b316ad031b7a
[root@ovirt4]# rbd --id ovirt info 
rbd.ovirt.data/volume-5419640e-445f-4b3f-a29d-b316ad031b7a

rbd image 'volume-5419640e-445f-4b3f-a29d-b316ad031b7a':
size 100 GiB in 25600 objects
order 22 (4 MiB objects)
snapshot_count: 0
id: 68a7cd6aeb3924
block_name_prefix: rbd_data.68a7cd6aeb3924
format: 2
features: layering, exclusive-lock, object-map, fast-diff, 
deep-flatten

op_features:
flags:
create_timestamp: Tue Oct 13 06:53:55 2020
access_timestamp: Tue Oct 13 06:53:55 2020
modify_timestamp: Tue Oct 13 06:53:55 2020

Where else can I look to see where it's failing?

--Mike

On 9/30/20 2:19 AM, Benny Zlotnik wrote:

When you ran `engine-setup` did you enable cinderlib preview (it will
not be enabled by default)?
It should handle the creation of the database automatically, if you
didn't you can enable it by running:
`engine-setup --reconfigure-optional-components`


On Wed, Sep 30, 2020 at 1:58 AM Michael Thomas  wrote:


Hi Benny,

Thanks for the confirmation.  I've installed openstack-ussuri and ceph
Octopus.  Then I tried using these instructions, as well as the deep
dive that Eyal has posted at https://www.youtube.com/watch?v=F3JttBkjsX8.

I've done this a couple of times, and each time the engine fails when I
try to add the new managed block storage domain.  The error on the
screen indicates that it can't connect to the cinder database.  The
error in the engine log is:

2020-09-29 17:02:11,859-05 WARN
[org.ovirt.engine.core.bll.storage.domain.AddManagedBlockStorageDomainCommand]
(default task-2) [d519088c-7956-4078-b5cf-156e5b3f1e59] Validation of
action 'AddManagedBlockStorageDomain' failed for user
admin@internal-authz. Reasons:
VAR__TYPE__STORAGE__DOMAIN,VAR__ACTION__ADD,ACTION_TYPE_FAILED_CINDERLIB_DATA_BASE_REQUIRED,ACTION_TYPE_FAILED_CINDERLIB_DATA_BASE_REQUIRED

I had created the db on the engine with this command:

su - postgres -c "psql -d template1 -c \"create database cinder owner
engine template template0 encoding 'UTF8' lc_collate 'en_US.UTF-8'
lc_ctype 'en_US.UTF-8';\""

...and added the following to the end of /var/lib/pgsql/data/pg_hba.conf:

  host    cinder  engine  ::0/0       md5
  host    cinder  engine  0.0.0.0/0   md5

Is there anywhere else I should look to find out what may have gone wrong?

--Mike

On 9/29/20 3:34 PM, Benny Zlotnik wrote:

The feature is currently in tech preview, but it's being worked on.
The feature page is outdated,  but I believe this is what most users
in the mailing list were using. We held off on updating it because the
installation instructions have been a moving target, but it is more
stable now and I will update it soon.

Specifically speaking, the openstack version should be updated to
train (it is likely ussuri works fine too, but I haven't tried it) and
cinderlib has an RPM now (python3-cinderlib)[1], so it can be
installed instead of using pip, same goes for os-brick. The rest of
the information is valid.


[1] http://mirror.centos.org/centos/8/cloud/x86_64/openstack-ussuri/Packages/p/

On Tue, Sep 29, 2020 at 10:37 PM Michael Thomas  wrote:


I'm looking for the latest documentation for setting up a Managed Block
Device storage domain so that I can move some of my VM images to ceph rbd.

I found this:

https://ovirt.org/develop/release-management/features/storage/cinderlib-integration.html

...but it has a big note at the top that it is "...not user
documentation and should not be treated as such."

The oVirt administration guide[1] does not talk about managed block devices.

I've found a few mailing list threads that discuss people setting up a
Managed Block Device with ceph, but didn't see any links to
documentation steps that folks were following.

Is the Managed Block Storage domain a supported feature in oVirt 4.4.2,
and if so, where is the documentation for using it?

--Mike
[1]ovirt.org/documentation/administration_guide/








[ovirt-users] Re: Latest ManagedBlockDevice documentation

2020-10-13 Thread Michael Thomas

Setting both http_proxy and https_proxy fixed the issue.

Thanks for the tip!

--Mike


I am not sure, it's been a long time since I tried that.

Feel free to file a bug.

You can also try setting env var 'http_proxy' for engine-setup, e.g.:

http_proxy=MY_PROXY_URL engine-setup --reconfigure-optional-components

Alternatively, you can also add '--offline' to engine-setup cmd, and then it
won't do any package management (not try to update, check for updates, etc.).

Best regards,

[1] https://bugzilla.redhat.com/show_bug.cgi?id=1392312



--Mike


On 9/30/20 2:19 AM, Benny Zlotnik wrote:

When you ran `engine-setup` did you enable cinderlib preview (it will
not be enabled by default)?
It should handle the creation of the database automatically, if you
didn't you can enable it by running:
`engine-setup --reconfigure-optional-components`


On Wed, Sep 30, 2020 at 1:58 AM Michael Thomas  wrote:


Hi Benny,

Thanks for the confirmation.  I've installed openstack-ussuri and ceph
Octopus.  Then I tried using these instructions, as well as the deep
dive that Eyal has posted at https://www.youtube.com/watch?v=F3JttBkjsX8.

I've done this a couple of times, and each time the engine fails when I
try to add the new managed block storage domain.  The error on the
screen indicates that it can't connect to the cinder database.  The
error in the engine log is:

2020-09-29 17:02:11,859-05 WARN
[org.ovirt.engine.core.bll.storage.domain.AddManagedBlockStorageDomainCommand]
(default task-2) [d519088c-7956-4078-b5cf-156e5b3f1e59] Validation of
action 'AddManagedBlockStorageDomain' failed for user
admin@internal-authz. Reasons:
VAR__TYPE__STORAGE__DOMAIN,VAR__ACTION__ADD,ACTION_TYPE_FAILED_CINDERLIB_DATA_BASE_REQUIRED,ACTION_TYPE_FAILED_CINDERLIB_DATA_BASE_REQUIRED

I had created the db on the engine with this command:

su - postgres -c "psql -d template1 -c \"create database cinder owner
engine template template0 encoding 'UTF8' lc_collate 'en_US.UTF-8'
lc_ctype 'en_US.UTF-8';\""

...and added the following to the end of /var/lib/pgsql/data/pg_hba.conf:

   host    cinder  engine  ::0/0       md5
   host    cinder  engine  0.0.0.0/0   md5

Is there anywhere else I should look to find out what may have gone wrong?

--Mike

On 9/29/20 3:34 PM, Benny Zlotnik wrote:

The feature is currently in tech preview, but it's being worked on.
The feature page is outdated,  but I believe this is what most users
in the mailing list were using. We held off on updating it because the
installation instructions have been a moving target, but it is more
stable now and I will update it soon.

Specifically speaking, the openstack version should be updated to
train (it is likely ussuri works fine too, but I haven't tried it) and
cinderlib has an RPM now (python3-cinderlib)[1], so it can be
installed instead of using pip, same goes for os-brick. The rest of
the information is valid.


[1] http://mirror.centos.org/centos/8/cloud/x86_64/openstack-ussuri/Packages/p/

On Tue, Sep 29, 2020 at 10:37 PM Michael Thomas  wrote:


I'm looking for the latest documentation for setting up a Managed Block
Device storage domain so that I can move some of my VM images to ceph rbd.

I found this:

https://ovirt.org/develop/release-management/features/storage/cinderlib-integration.html

...but it has a big note at the top that it is "...not user
documentation and should not be treated as such."

The oVirt administration guide[1] does not talk about managed block devices.

I've found a few mailing list threads that discuss people setting up a
Managed Block Device with ceph, but didn't see any links to
documentation steps that folks were following.

Is the Managed Block Storage domain a supported feature in oVirt 4.4.2,
and if so, where is the documentation for using it?

--Mike
[1]ovirt.org/documentation/administration_guide/
















[ovirt-users] Re: Upgrade oVirt Host from 4.4.0 to 4.4.2 fails

2020-10-02 Thread Michael Thomas
This is a shot in the dark, but it's possible that your dnf command was 
running off of cached repo metadata.


Try running 'dnf clean metadata' before 'dnf upgrade'.

--Mike

On 10/2/20 12:38 PM, Erez Zarum wrote:

Hey,
I have a bunch of hosts installed from the oVirt Node image, and I have upgraded the 
self-hosted engine successfully.
I ran Check Upgrade on one of the hosts and it was entitled to an upgrade.
I used the UI to let it upgrade; after multiple retries it always fails on "Prepare 
NGN host for upgrade.", so I chose another host as a test.
I set the host into Maintenance and let all the VMs migrate successfully,
made sure I have the latest 4.4.2 repo (it was 4.4.0) via
yum install https://resources.ovirt.org/pub/yum-repo/ovirt-release44.rpm
and issued "dnf upgrade":
Installing:
  ovirt-openvswitch
  replacing  openvswitch.x86_64 2.11.1-5.el8
  ovirt-openvswitch-ovn
  replacing  ovn.x86_64 2.11.1-5.el8
  ovirt-openvswitch-ovn-host
  replacing  ovn-host.x86_64 2.11.1-5.el8
  ovirt-python-openvswitch
  replacing  python3-openvswitch.x86_64 2.11.1-5.el8
Upgrading:
  ovirt-node-ng-image-update-placeholder
Installing dependencies:
  openvswitch2.11
  ovirt-openvswitch-ovn-common
  ovn2.11
  ovn2.11-host
  python3-openvswitch2.11
Installing weak dependencies:
  network-scripts-openvswitch
  network-scripts-openvswitch2.11

It was very quick, but nothing else happened.  I did try to reboot the host, but 
I still see the host as oVirt 4.4.0, and as expected it still says that an 
update is available.




[ovirt-users] Re: Latest ManagedBlockDevice documentation

2020-09-30 Thread Michael Thomas
I hadn't installed the necessary packages when the engine was first 
installed.


However, running 'engine-setup --reconfigure-optional-components' 
doesn't work at the moment because (by design) my engine does not have a 
network route outside of the cluster.  It fails with:


[ INFO  ] DNF Errors during downloading metadata for repository 'AppStream':
   - Curl error (7): Couldn't connect to server for 
http://mirrorlist.centos.org/?release=8&arch=x86_64&repo=AppStream&infra=$infra 
[Failed to connect to mirrorlist.centos.org port 80: Network is unreachable]
[ ERROR ] DNF Failed to download metadata for repo 'AppStream': Cannot 
prepare internal mirrorlist: Curl error (7): Couldn't connect to server for 
http://mirrorlist.centos.org/?release=8&arch=x86_64&repo=AppStream&infra=$infra 
[Failed to connect to mirrorlist.centos.org port 80: Network is unreachable]



I have a proxy set in the engine's /etc/dnf/dnf.conf, but it doesn't 
seem to be obeyed when running engine-setup.  Is there another way that 
I can get engine-setup to use a proxy?
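
For context, the setting I mean is just the usual dnf proxy option (the URL 
here is a placeholder, not my real proxy):

# /etc/dnf/dnf.conf on the engine VM
[main]
proxy=http://proxy.example.com:3128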


--Mike


On 9/30/20 2:19 AM, Benny Zlotnik wrote:

When you ran `engine-setup` did you enable cinderlib preview (it will
not be enabled by default)?
It should handle the creation of the database automatically, if you
didn't you can enable it by running:
`engine-setup --reconfigure-optional-components`


On Wed, Sep 30, 2020 at 1:58 AM Michael Thomas  wrote:


Hi Benny,

Thanks for the confirmation.  I've installed openstack-ussuri and ceph
Octopus.  Then I tried using these instructions, as well as the deep
dive that Eyal has posted at https://www.youtube.com/watch?v=F3JttBkjsX8.

I've done this a couple of times, and each time the engine fails when I
try to add the new managed block storage domain.  The error on the
screen indicates that it can't connect to the cinder database.  The
error in the engine log is:

2020-09-29 17:02:11,859-05 WARN
[org.ovirt.engine.core.bll.storage.domain.AddManagedBlockStorageDomainCommand]
(default task-2) [d519088c-7956-4078-b5cf-156e5b3f1e59] Validation of
action 'AddManagedBlockStorageDomain' failed for user
admin@internal-authz. Reasons:
VAR__TYPE__STORAGE__DOMAIN,VAR__ACTION__ADD,ACTION_TYPE_FAILED_CINDERLIB_DATA_BASE_REQUIRED,ACTION_TYPE_FAILED_CINDERLIB_DATA_BASE_REQUIRED

I had created the db on the engine with this command:

su - postgres -c "psql -d template1 -c \"create database cinder owner
engine template template0 encoding 'UTF8' lc_collate 'en_US.UTF-8'
lc_ctype 'en_US.UTF-8';\""

...and added the following to the end of /var/lib/pgsql/data/pg_hba.conf:

  host    cinder  engine  ::0/0       md5
  host    cinder  engine  0.0.0.0/0   md5

Is there anywhere else I should look to find out what may have gone wrong?

--Mike

On 9/29/20 3:34 PM, Benny Zlotnik wrote:

The feature is currently in tech preview, but it's being worked on.
The feature page is outdated,  but I believe this is what most users
in the mailing list were using. We held off on updating it because the
installation instructions have been a moving target, but it is more
stable now and I will update it soon.

Specifically speaking, the openstack version should be updated to
train (it is likely ussuri works fine too, but I haven't tried it) and
cinderlib has an RPM now (python3-cinderlib)[1], so it can be
installed instead of using pip, same goes for os-brick. The rest of
the information is valid.


[1] http://mirror.centos.org/centos/8/cloud/x86_64/openstack-ussuri/Packages/p/

On Tue, Sep 29, 2020 at 10:37 PM Michael Thomas  wrote:


I'm looking for the latest documentation for setting up a Managed Block
Device storage domain so that I can move some of my VM images to ceph rbd.

I found this:

https://ovirt.org/develop/release-management/features/storage/cinderlib-integration.html

...but it has a big note at the top that it is "...not user
documentation and should not be treated as such."

The oVirt administration guide[1] does not talk about managed block devices.

I've found a few mailing list threads that discuss people setting up a
Managed Block Device with ceph, but didn't see any links to
documentation steps that folks were following.

Is the Managed Block Storage domain a supported feature in oVirt 4.4.2,
and if so, where is the documentation for using it?

--Mike
[1]ovirt.org/documentation/administration_guide/








[ovirt-users] Re: Latest ManagedBlockDevice documentation

2020-09-29 Thread Michael Thomas

Hi Benny,

Thanks for the confirmation.  I've installed openstack-ussuri and ceph 
Octopus.  Then I tried using these instructions, as well as the deep 
dive that Eyal has posted at https://www.youtube.com/watch?v=F3JttBkjsX8.


I've done this a couple of times, and each time the engine fails when I 
try to add the new managed block storage domain.  The error on the 
screen indicates that it can't connect to the cinder database.  The 
error in the engine log is:


2020-09-29 17:02:11,859-05 WARN 
[org.ovirt.engine.core.bll.storage.domain.AddManagedBlockStorageDomainCommand] 
(default task-2) [d519088c-7956-4078-b5cf-156e5b3f1e59] Validation of 
action 'AddManagedBlockStorageDomain' failed for user 
admin@internal-authz. Reasons: 
VAR__TYPE__STORAGE__DOMAIN,VAR__ACTION__ADD,ACTION_TYPE_FAILED_CINDERLIB_DATA_BASE_REQUIRED,ACTION_TYPE_FAILED_CINDERLIB_DATA_BASE_REQUIRED


I had created the db on the engine with this command:

su - postgres -c "psql -d template1 -c \"create database cinder owner 
engine template template0 encoding 'UTF8' lc_collate 'en_US.UTF-8' 
lc_ctype 'en_US.UTF-8';\""


...and added the following to the end of /var/lib/pgsql/data/pg_hba.conf:

host    cinder  engine  ::0/0       md5
host    cinder  engine  0.0.0.0/0   md5

Is there anywhere else I should look to find out what may have gone wrong?

--Mike

On 9/29/20 3:34 PM, Benny Zlotnik wrote:

The feature is currently in tech preview, but it's being worked on.
The feature page is outdated,  but I believe this is what most users
in the mailing list were using. We held off on updating it because the
installation instructions have been a moving target, but it is more
stable now and I will update it soon.

Specifically speaking, the openstack version should be updated to
train (it is likely ussuri works fine too, but I haven't tried it) and
cinderlib has an RPM now (python3-cinderlib)[1], so it can be
installed instead of using pip, same goes for os-brick. The rest of
the information is valid.


[1] http://mirror.centos.org/centos/8/cloud/x86_64/openstack-ussuri/Packages/p/

On Tue, Sep 29, 2020 at 10:37 PM Michael Thomas  wrote:


I'm looking for the latest documentation for setting up a Managed Block
Device storage domain so that I can move some of my VM images to ceph rbd.

I found this:

https://ovirt.org/develop/release-management/features/storage/cinderlib-integration.html

...but it has a big note at the top that it is "...not user
documentation and should not be treated as such."

The oVirt administration guide[1] does not talk about managed block devices.

I've found a few mailing list threads that discuss people setting up a
Managed Block Device with ceph, but didn't see any links to
documentation steps that folks were following.

Is the Managed Block Storage domain a supported feature in oVirt 4.4.2,
and if so, where is the documentation for using it?

--Mike
[1]ovirt.org/documentation/administration_guide/





[ovirt-users] Latest ManagedBlockDevice documentation

2020-09-29 Thread Michael Thomas
I'm looking for the latest documentation for setting up a Managed Block 
Device storage domain so that I can move some of my VM images to ceph rbd.


I found this:

https://ovirt.org/develop/release-management/features/storage/cinderlib-integration.html

...but it has a big note at the top that it is "...not user 
documentation and should not be treated as such."


The oVirt administration guide[1] does not talk about managed block devices.

I've found a few mailing list threads that discuss people setting up a 
Managed Block Device with ceph, but didn't see any links to 
documentation steps that folks were following.


Is the Managed Block Storage domain a supported feature in oVirt 4.4.2, 
and if so, where is the documentation for using it?


--Mike
[1]ovirt.org/documentation/administration_guide/


[ovirt-users] Re: Node upgrade to 4.4

2020-09-23 Thread Michael Thomas
Not to give you any false hope, but when I recently reinstalled my oVirt 
4.4.2 cluster, I left the gluster disks alone and only reformatted the 
OS disks.  Much to my surprise, after running the oVirt HCI wizard on 
this new installation (using the exact same gluster settings as before), 
the original contents of my gluster-based data domain were still intact.


I certainly wouldn't count on this behavior with any important data, though.

--Mike

On 9/23/20 1:40 PM, Vincent Royer wrote:

Well, that sounds like a risky nightmare. I appreciate your help.

Vincent Royer
778-825-1057

SUSTAINABLE MOBILE ENERGY SOLUTIONS


On Wed, Sep 23, 2020 at 11:31 AM Strahil Nikolov 
wrote:


Before you reinstall the node, you should use 'gluster volume
remove-brick VOLNAME replica 2 ovirt_node:/path-to-brick' to
reduce the volume to replica 2 (for example). Then you need to 'gluster
peer detach ovirt_node' in order to fully clean up the gluster TSP.

You will have to remove the bricks that are on that ovirt_node before
detaching it.

Once you reinstall with EL 8, you can 'gluster peer probe
reinstalled_ovirt_node' and then 'gluster volume add-brick VOLNAME replica
3 reinstalled_ovirt_node:/path-to-brick'.

Note that reusing bricks is not very easy, so just wipe the data via
'mkfs.xfs -i size=512 /dev/block/device'.

Once all volumes are again a replica 3 , just wait for the healing to go
over and you can proceed with the oVirt part.
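
Putting it together, the cycle for one node looks roughly like this (VOLNAME 
and the brick paths are placeholders; untested as written):

# on a surviving node, before reinstalling ovirt_node
gluster volume remove-brick VOLNAME replica 2 ovirt_node:/path-to-brick force
gluster peer detach ovirt_node

# after reinstalling ovirt_node with EL 8 and re-creating the brick filesystem
gluster peer probe ovirt_node
gluster volume add-brick VOLNAME replica 3 ovirt_node:/path-to-brick
gluster volume heal VOLNAME info      # wait until nothing is left to heal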

Best Regards,
Strahil Nikolov






On Wednesday, September 23, 2020 at 20:45:30 GMT+3, Vincent Royer <
vinc...@epicenergy.ca> wrote:





My confusion is that those documents do not describe any gluster related
tasks for Ovirt Nodes.  When I take a node down and install Ovirt Node 4.4
on it, won't all the gluster bricks on that node be lost?  The part
describing "preserving local storage", that isn't anything about Gluster,
correct?


Vincent Royer
778-825-1057


SUSTAINABLE MOBILE ENERGY SOLUTIONS





On Tue, Sep 22, 2020 at 8:31 PM Ritesh Chikatwar 
wrote:

Vincent,


This document will be useful


https://www.ovirt.org/documentation/upgrade_guide/#Upgrading_the_Manager_to_4-4_4-3_SHE


On Wed, Sep 23, 2020, 3:55 AM Vincent Royer wrote:

I have 3 nodes running node ng 4.3.9 with a gluster/hci cluster.  How 
do I upgrade to 4.4?  Is there a guide?














[ovirt-users] CLI for HCI setup

2020-09-02 Thread Michael Thomas
Is there a CLI for setting up a hyperconverged environment with 
glusterfs?  The docs that I've found detail how to do it using the 
cockpit interface[1], but I'd prefer to use a cli similar to 
'hosted-engine --deploy' if it is available.


Thanks,

--Mike
[1]https://www.ovirt.org/documentation/gluster-hyperconverged/chap-Deploying_Hyperconverged.html


[ovirt-users] ovirt 4.4 self-hosted deployment questions

2020-08-27 Thread Michael Thomas
I have not been able to find answers to a couple of questions in the 
self-hosted engine documentation[1].


* When installing a new Enterprise Linux host for ovirt, what are the 
network requirements?  Specifically, am I supposed to set up the 
ovirtmgmt bridge myself on new hosts, or am I supposed to let that be 
handled by the engine when I add the new host to the engine?


* In the 'New Host' dialog on the engine management page, does the 
Hostname/IP that I enter have to be the host's name on the ovirtmgmt 
LAN?  If so, then it seems to me that I need to configure the ovirtmgmt 
bridge myself on new hosts.


* Does the engine need to be able to route outside of the cluster (eg to 
the WAN), or is it allowed to restrict the engine's routing to the local 
cluster?


* In the 'New Host' dialog on the engine management page, what is the 
meaning of 'Choose hosted engine deployment action'?  From the way it is 
phrased, it sounds like this will create a second engine in my cluster, 
which doesn't make sense.  Or does this mean that the new host will be 
able to run the Engine VM in a HA manner?



In my current test deployment I have 3 subnets in my cluster.  Network 
WAN is the WAN.  Network CLUSTER is for communication between cluster 
compute nodes, storage servers, and management servers.  Network OVIRT 
is for ovirt management and VM migration between hosts.


My first self-hosted engine host is connected to networks CLUSTER and 
OVIRT.  The engine VM is only connected to network OVIRT through a 
bridge on the host, but has a gateway that lets it route traffic to 
network CLUSTER (but not network WAN).


Is this an appropriate network setup for ovirt, or should there be no 
distinction between the CLUSTER and OVIRT networks?


--Mike


[1]https://www.ovirt.org/documentation/installing_ovirt_as_a_self-hosted_engine_using_the_command_line/


[ovirt-users] Re: First ovirt 4.4 installation failing

2020-06-08 Thread Michael Thomas

On 6/8/20 12:58 AM, Yedidyah Bar David wrote:

On Sun, Jun 7, 2020 at 6:37 PM Michael Thomas  wrote:


On 6/7/20 8:42 AM, Yedidyah Bar David wrote:

On Sun, Jun 7, 2020 at 4:07 PM Michael Thomas  wrote:


On 6/7/20 5:01 AM, Yedidyah Bar David wrote:

On Sat, Jun 6, 2020 at 8:42 PM Michael Thomas  wrote:


After a week of iterations, I finally found the problem.  I was setting 
'PermitRootLogin no' in the global section of the bare metal OS sshd_config, as 
we do on all of our servers.  Instead, PermitRootLogin is set to 
'without-password' in a match block to allow root logins only from a well-known 
set of hosts.


I understand that you meant to say that this is already working for
you, right? That you set it to allow without-password from some
addresses and that that was enough. If so:


Correct.  Once I added the engine's IP to the Match block allowing root
logins, it worked again.



Thanks for the report!



Can someone explain why setting 'PermitRootLogin no' in the sshd_config on the 
hypervisor OS would affect the hosted engine deployment?


Because the engine (running inside a VM) uses ssh as root to connect
to the host (in which the engine vm is running).


Would it be sufficient to set, on the host, 'PermitRootLogin
without-password' in a Match block that matches the ovirt management
network?

Match Address 10.10.10.0/24
   PermitRootLogin without-password

?


Do you mean here to ask if 10.10.10.10/24 is enough?

The engine VM's IP address should be enough. What this address is,
after deploy finishes, is of course up to you. During deploy it's by
default in libvirt's default network, 192.168.222.0/24, but can be
different if that's already in use by something else (e.g. a physical
NIC).

BTW, I didn't test this myself. I do see in the code that it's
supposed to work. If you find a bug, please report one. Thanks.


I think the two problems that I ran into were:

* Lack of documentation about the requirement that the engine (whether
self-hosted or standalone) be able to ssh into the bare metal hypervisor
host over the ovirt management network using ssh keys.


I agree it's not detailed enough.

We have it briefly mentioned e.g. here:

https://www.ovirt.org/documentation/installing_ovirt_as_a_self-hosted_engine_using_the_cockpit_web_interface/#host-firewall-requirements_SHE_cockpit_deploy

For some reason it's marked "Optional", not sure why.



* No clear error message in the logs describing why this was failing.
The only errors I got were a timeout waiting for the host to be up, and
a generic ""The system may not be provisioned according to the playbook
results: please check the logs for the issue, fix accordingly or
re-deploy from scratch.\n"

I'll file this as a documentation bug.


Very well.



Filed:

https://bugzilla.redhat.com/show_bug.cgi?id=1845271

--Mike


[ovirt-users] Re: First ovirt 4.4 installation failing

2020-06-07 Thread Michael Thomas

On 6/7/20 8:42 AM, Yedidyah Bar David wrote:

On Sun, Jun 7, 2020 at 4:07 PM Michael Thomas  wrote:


On 6/7/20 5:01 AM, Yedidyah Bar David wrote:

On Sat, Jun 6, 2020 at 8:42 PM Michael Thomas  wrote:


After a week of iterations, I finally found the problem.  I was setting 
'PermitRootLogin no' in the global section of the bare metal OS sshd_config, as 
we do on all of our servers.  Instead, PermitRootLogin is set to 
'without-password' in a match block to allow root logins only from a well-known 
set of hosts.


I understand that you meant to say that this is already working for
you, right? That you set it to allow without-password from some
addresses and that that was enough. If so:


Correct.  Once I added the engine's IP to the Match block allowing root 
logins, it worked again.




Thanks for the report!



Can someone explain why setting 'PermitRootLogin no' in the sshd_config on the 
hypervisor OS would affect the hosted engine deployment?


Because the engine (running inside a VM) uses ssh as root to connect
to the host (in which the engine vm is running).


Would it be sufficient to set, on the host, 'PermitRootLogin
without-password' in a Match block that matches the ovirt management
network?

Match Address 10.10.10.0/24
  PermitRootLogin without-password

?


Do you mean here to ask if 10.10.10.10/24 is enough?

The engine VM's IP address should be enough. What this address is,
after deploy finishes, is of course up to you. During deploy it's by
default in libvirt's default network, 192.168.222.0/24, but can be
different if that's already in use by something else (e.g. a physical
NIC).

BTW, I didn't test this myself. I do see in the code that it's
supposed to work. If you find a bug, please report one. Thanks.


I think the two problems that I ran into were:

* Lack of documentation about the requirement that the engine (whether 
self-hosted or standalone) be able to ssh into the bare metal hypervisor 
host over the ovirt management network using ssh keys.


* No clear error message in the logs describing why this was failing. 
The only errors I got were a timeout waiting for the host to be up, and 
a generic ""The system may not be provisioned according to the playbook 
results: please check the logs for the issue, fix accordingly or 
re-deploy from scratch.\n"


I'll file this as a documentation bug.

--Mike
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/AKHPSYUEY3EMIVICM2O3M6KFN7AUOOEZ/


[ovirt-users] Re: First ovirt 4.4 installation failing

2020-06-07 Thread Michael Thomas

On 6/7/20 5:01 AM, Yedidyah Bar David wrote:

On Sat, Jun 6, 2020 at 8:42 PM Michael Thomas  wrote:


After a week of iterations, I finally found the problem.  I was setting 
'PermitRootLogin no' in the global section of the bare metal OS sshd_config, as 
we do on all of our servers.  Instead, PermitRootLogin is set to 
'without-password' in a match block to allow root logins only from a well-known 
set of hosts.


Thanks for the report!



Can someone explain why setting 'PermitRootLogin no' in the sshd_config on the 
hypervisor OS would affect the hosted engine deployment?


Because the engine (running inside a VM) uses ssh as root to connect
to the host (in which the engine vm is running).


Would it be sufficient to set, on the host, 'PermitRootLogin 
without-password' in a Match block that matches the ovirt management 
network?


Match Address 10.10.10.0/24
PermitRootLogin without-password

?

--Mike
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/35TSUAZ35YB3LCB3QM2CL6VG2KG4IHNF/


[ovirt-users] Re: First ovirt 4.4 installation failing

2020-06-06 Thread Michael Thomas
After a week of iterations, I finally found the problem.  I was setting 
'PermitRootLogin no' in the global section of the bare metal OS sshd_config, as 
we do on all of our servers.  Instead, PermitRootLogin is set to 
'without-password' in a match block to allow root logins only from a well-known 
set of hosts.

Can someone explain why setting 'PermitRootLogin no' in the sshd_config on the 
hypervisor OS would affect the hosted engine deployment?

--Mike
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/IM2O4JP4H2SVHYNQELTPIJIXMXPIXRJY/


[ovirt-users] Re: oVirt 4.4 node via PXE and custom kickstart

2020-06-04 Thread Michael Thomas
To answer my own question:  The 'liveimg' instruction in the kickstart file 
causes it to ignore any extra repos or packages that may be listed later.  The 
workaround is to either create a new live image to install from, or manually 
create the repo files and install the packages in the %post section.
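
A sketch of that %post workaround, reusing the mirror URLs and the 
puppet-agent package from the kickstart quoted later in this thread (repo 
names and gpgcheck settings are assumptions, not something tested here):

%post --erroronfail
# 'repo --install' and %packages are ignored with liveimg, so create
# the repo definitions by hand inside the installed system.
cat > /etc/yum.repos.d/local-epel.repo <<'EOF'
[local-epel]
name=EPEL (local mirror)
baseurl=http://10.13.5.13/mirror/linux/epel/8/Everything/x86_64
enabled=1
gpgcheck=0
EOF

cat > /etc/yum.repos.d/local-puppet.repo <<'EOF'
[local-puppet]
name=Puppet 6 (local mirror)
baseurl=http://10.13.5.13/mirror/linux/puppetlabs/puppet6/el/8/x86_64
enabled=1
gpgcheck=0
EOF

# Install the extra packages from within the installed image.
dnf install -y puppet-agent
%end

Whether packages layered this way survive later image-based node upgrades 
is a separate question, which is presumably part of why building a custom 
live image is listed as the other option.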

The documentation above is not very clear about this.  It reads "The %packages 
section is not required for oVirt Node.", whereas the following is a little 
clearer about the restriction:

"The %packages section is ignored when using the liveimg option."

--Mike
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/2ZVMMBS35RFPW5TNWUEG3SKQVIRERK66/


[ovirt-users] oVirt 4.4 node via PXE and custom kickstart

2020-06-04 Thread Michael Thomas
I'm trying to customize a node install to include some local management and 
monitoring tools, starting with puppet, following the instructions here: 
 
https://ovirt.org/documentation/installing_ovirt_as_a_self-hosted_engine_using_the_cockpit_web_interface/#Advanced_RHVH_Install_SHE_cockpit_deploy

The installation of the ovirt node works, and I can deploy the engine once it's 
running.  However, while the extra steps in the %post section of my kickstart 
are working, any additional 'repo' and %packages settings seem to get ignored 
(even though they were copied from known working kickstart files).

What kickstart customizations are supported when deploying a node via PXE?

My PXE menu looks like this:

menuentry 'Ovirt node' {
  linuxefi ovirt/vmlinuz ip=dhcp ks=http://10.13.5.13/kickstart/ovirt.cfg 
ksdevice=link initrd=ovirt/initrd.img inst.stage2=http://10.13.5.13/rhvh
  initrdefi ovirt/initrd.img
}

My kickstart file is as follows:

liveimg --url=http://10.13.5.13/rhvh/ovirt-node-ng-image.squashfs.img
clearpart --all --initlabel
autopart --type=thinp
zerombr
rootpw --plaintext ovirt
timezone --utc America/Chicago
text
reboot

# This will need to be updated to point to the 'frozen' snapshot, when available
repo --install --name="EPEL" 
--baseurl=http://10.13.5.13/mirror/linux/epel/8/Everything/x86_64 --cost=99
repo --install --name="Puppet" 
--baseurl=http://10.13.5.13/mirror/linux/puppetlabs/puppet6/el/8/x86_64 
--cost=98

%packages
puppet-agent
%end

%post --erroronfail
nodectl init
echo "This is a test" > /etc/test.txt
%end

--Mike
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/JWCDKP224UMTLRK6BNDMMTRJJ4PQAYCE/


[ovirt-users] Re: oVirt 4.4 install fails

2020-05-28 Thread Michael Thomas

On 5/28/20 2:48 PM, Me wrote:

Hi All

Not sure where to start, but here goes.


[...]

Issue 2, I use FF 72.0.2 on Linux x64 to connect by
https://hostname:9090 to the web interface, but I can't enter login
details as the boxes (everything) are disabled. There is no warning
like "we don't like your choice of browser", but the screen is a not
very accessible dark grey on darker grey (a poor choice in what I
thought were more enlightened times), so this may be the case. I have
disabled all security add-ons in FF; it makes no difference.


I ran into this one today as well.  I found that the mouse would not 
work on any of the text boxes or buttons using FF, but I could use the 
Tab key to navigate through the screen and enter the username/password.


--Mike
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/HLM3C4HJAJ66F4CGT7PKKPYFLBEWYGLF/