[ovirt-users] Re: Q: Hybrid GlusterFS / local storage setup?

2020-10-14 Thread Gilboa Davara
Hello Nir,

Thanks for the prompt answer.

On Wed, Oct 14, 2020 at 1:02 PM Nir Soffer  wrote:

>
> GlusterFS?
>

Yep, GlusterFS. Sorry, wrong abbreviation on my end.


>
>
> This will not be as fast as a local device passed through to the VM.
>
> It will also be problematic, since all hosts will mount, monitor, and
> maintain leases on this NFS storage, since it is considered shared
> storage.
> If a host fails to access this NFS storage, that host will be
> deactivated and all its VMs will migrate to other hosts. This migration
> storm can cause a lot of trouble.
> In the worst case, if no other host can access this NFS storage, all
> other hosts will be deactivated.
>
> This is the same as NFS (internally this is the same code). It will work only
> if you can mount the same device/export on all hosts. This is even worse
> than NFS.
>

OK. Understood. No NFS / POSIX-FS storage then.

>
>
>> A. Am I barking at the wrong tree here? Is this setup even possible?
>>
>
> This is possible using host device.
> You can attach a host device to a VM. This will pin the VM to the host,
> and give best performance.
> It may not be flexible enough since you need to attach entire device.
> Maybe it can work with LVM logical volumes.
>

I've got room to spare.
Any documentation on how to achieve this (or some pointers where to look)?
I couldn't find LVM / block devices under host devices / storage domain /
etc., and a Google search returned irrelevant results.

- Gilboa


> Nir
>
> B. If it is even possible, any documentation / pointers on setting up
>> per-host private storage?
>>
>> I should mention that these workstations are quite beefy (64-128GB
>> RAM, large MDRAID, SSD, etc) so I can spare memory / storage space (I
>> can even split the local storage and GFS to different arrays).
>>
>> - Gilboa
>> ___
>> Users mailing list -- users@ovirt.org
>> To unsubscribe send an email to users-le...@ovirt.org
>> Privacy Statement: https://www.ovirt.org/privacy-policy.html
>> oVirt Code of Conduct:
>> https://www.ovirt.org/community/about/community-guidelines/
>> List Archives:
>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/VT24EBHC4OF7VXO7BGY3C2IT64M2T5FD/
>>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/AFX5Q6SA4DPL2TDXWXKWUD7FQJJDGURM/


[ovirt-users] Re: Number of data bricks per oVirt HCI host

2020-10-14 Thread C Williams
Strahil,

Could you please clarify which disk groups you are referring to?

Also,
Regarding your statement "In JBOD mode, Red Hat supports only 'replica 3'
volumes": does this also apply to "replica 3" variants, e.g.
"distributed-replicate"?

Thank You For Your Help !

On Wed, Oct 14, 2020 at 7:34 AM C Williams  wrote:

> Thanks Strahil !
>
> More questions may follow.
>
> Thanks Again For Your Help !
>
> On Wed, Oct 14, 2020 at 12:29 AM Strahil Nikolov 
> wrote:
>
>> Imagine you have a host with 60 spinning disks -> I would recommend
>> splitting it into 10/12-disk groups, and these groups will represent
>> several bricks (6/5).
>>
>> Keep in mind that when you start using many bricks (some articles say
>> hundreds, but no exact number was given), you should consider brick
>> multiplexing (cluster.brick-multiplex).
>>
>> So, you can use as many bricks as you want, but each brick requires CPU
>> time (a separate thread), a TCP port number and memory.
>>
>> In my setup I use multiple bricks in order to spread the load via LACP
>> over several small (1GBE) NICs.
>>
>>
>> The only "limitation" is to have your data on separate hosts , so when
>> you create the volume it is extremely advisable that you follow this model:
>>
>> hostA:/path/to/brick
>> hostB:/path/to/brick
>> hostC:/path/to/brick
>> hostA:/path/to/brick2
>> hostB:/path/to/brick2
>> hostC:/path/to/brick2
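>>
>> As a very rough, untested sketch (the volume name is made up, paths as
>> above), creating such a volume and turning on brick multiplexing could
>> look something like:
>>
>>   gluster volume create data replica 3 \
>>       hostA:/path/to/brick hostB:/path/to/brick hostC:/path/to/brick \
>>       hostA:/path/to/brick2 hostB:/path/to/brick2 hostC:/path/to/brick2
>>   gluster volume set all cluster.brick-multiplex on
>>   gluster volume start data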
>>
>> In JBOD mode, Red Hat supports only 'replica 3' volumes - just to keep
>> that in mind.
>>
>> From my perspective, JBOD is suitable for NVMe/SSD drives, while spinning
>> disks should be in a RAID of some type (maybe RAID10 for performance).
>>
>>
>> Best Regards,
>> Strahil Nikolov
>>
>>
>>
>>
>>
>>
>> On Wednesday, 14 October 2020 at 06:34:17 GMT+3, C Williams <
>> cwilliams3...@gmail.com> wrote:
>>
>>
>>
>>
>>
>> Hello,
>>
>> I am getting some questions from others on my team.
>>
>> I have some hosts that could provide up to 6 JBOD disks for oVirt data
>> (not arbiter) bricks
>>
>> Would this be workable / advisable?  I'm under the impression there
>> should not be more than 1 data brick per HCI host.
>>
>> Please correct me if I'm wrong.
>>
>> Thank You For Your Help !
>>
>>
>> ___
>> Users mailing list -- users@ovirt.org
>> To unsubscribe send an email to users-le...@ovirt.org
>> Privacy Statement: https://www.ovirt.org/privacy-policy.html
>> oVirt Code of Conduct:
>> https://www.ovirt.org/community/about/community-guidelines/
>> List Archives:
>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/OGZISNFLEG3GJPDQGWTT7TWRPPAMLPFQ/
>>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/Z4HUGVEPTXY2LGTGMO656K4KADCXRLND/


[ovirt-users] Re: Q: Hybrid GlusterFS / local storage setup?

2020-10-14 Thread Nir Soffer
On Wed, Oct 14, 2020 at 1:29 PM Strahil Nikolov via Users
 wrote:
>
> Hi Gilboa,
>
> I think that storage domains need to be accessible from all nodes in the 
> cluster - and as yours will be using local storage and yet be in a 2-node 
> cluster that will be hard.
>
> My guess is that you can try the following cheat:
>
>  Create a single brick gluster volume and do some modifications:
> - volume type 'replica 1'
> - cluster.choose-local should be set to yes once you apply the virt group
> of settings (because the group sets it to no)
> - Set your VMs in such way that they don't failover

This has the same issues as a "shared" NFS domain served
by one host.

> Of course, creation of new VMs will happen from the host with the SPM flag, 
> but the good thing is that you can change the host with that flag. So if your 
> gluster has a brick ovirt1:/local_brick/brick , you can set the host ovirt1 
> to 'SPM' and then create your VM.
>
> Of course the above is just pure speculation as I picked my setup to be 
> 'replica 3 arbiter 1' and trade storage for live migration.
>
> Best Regards,
> Strahil Nikolov
>
>
>
>
>
>
> On Wednesday, 14 October 2020 at 12:42:44 GMT+3, Gilboa Davara
>  wrote:
>
>
>
>
>
> Hello all,
>
> I'm thinking about converting a couple of old dual Xeon V2
> workstations into (yet another) oVirt setup.
> However, the use case for this cluster is somewhat different:
> While I do want most of the VMs to be highly available (via a 2+1 GFS
> storage domain), I'd also want to pin at least one "desktop" VM to each
> host (possibly with vGPU) and let this VM access the local storage
> directly in order to get near bare-metal performance.
>
> Now, I am aware that I can simply share an LVM LV over NFS / localhost
> and pin a specific VM to each specific host, and the performance will
> be acceptable, but I seem to remember that there's a POSIX-FS storage
> domain that at least in theory should be able to give me per-host
> private storage.
>
> A. Am I barking at the wrong tree here? Is this setup even possible?
> B. If it is even possible, any documentation / pointers on setting up
> per-host private storage?
>
> I should mention that these workstations are quite beefy (64-128GB
> RAM, large MDRAID, SSD, etc) so I can spare memory / storage space (I
> can even split the local storage and GFS to different arrays).
>
> - Gilboa
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/VT24EBHC4OF7VXO7BGY3C2IT64M2T5FD/
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/XJ44CLBZ3BIVVDRONWS5NGIZ2RXGXKP7/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/CKJXR42GAFCHZNU77RSXXAFPHDSFQMUB/


[ovirt-users] Re: Latest ManagedBlockDevice documentation

2020-10-14 Thread Michael Thomas
Hi Benny,

You are correct, I tried attaching to a running VM (which failed), then
tried booting a new VM using this disk (which also failed).  I'll use
the workaround in the bug report going forward.

I'll just recreate the storage domain, since at this point I have
nothing in it to lose.

Regards,

--Mike

On 10/14/20 9:32 AM, Benny Zlotnik wrote:
> Did you attempt to start a VM with this disk and it failed, or you
> didn't try at all? If it's the latter then the error is strange...
> If it's the former, there is a known issue with multipath at the
> moment, see [1] for a workaround; you might have issues with
> detaching volumes later, because multipath grabs the rbd devices,
> which makes `rbd unmap` fail. It will be fixed soon by automatically
> blacklisting rbd in the multipath configuration.
> 
> Regarding editing, you can submit an RFE for this, but it is currently
> not possible. The options are indeed to either recreate the storage
> domain or edit the database table
> 
> 
> [1] https://bugzilla.redhat.com/show_bug.cgi?id=1881832#c8
> 
> 
> 
> 
> On Wed, Oct 14, 2020 at 3:40 PM Michael Thomas  wrote:
>>
>> On 10/14/20 3:30 AM, Benny Zlotnik wrote:
>>> Jeff is right, it's a limitation of kernel rbd, the recommendation is
>>> to add `rbd default features = 3` to the configuration. I think there
>>> are plans to support rbd-nbd in cinderlib which would allow using
>>> additional features, but I'm not aware of anything concrete.
>>>
>>> Additionally, the path for the cinderlib log is
>>> /var/log/ovirt-engine/cinderlib/cinderlib.log, the error in this case
>>> would appear in the vdsm.log on the relevant host, and would look
>>> something like "RBD image feature set mismatch. You can disable
>>> features unsupported by the kernel with 'rbd feature disable'"
>>
>> Thanks for the pointer!  Indeed,
>> /var/log/ovirt-engine/cinderlib/cinderlib.log has the errors that I was
>> looking for.  In this case, it was a user error entering the RBDDriver
>> options:
>>
>>
>> 2020-10-13 15:15:25,640 - cinderlib.cinderlib - WARNING - Unknown config
>> option use_multipath_for_xfer
>>
>> ...it should have been 'use_multipath_for_image_xfer'.
>>
>> Now my attempts to fix it are failing...  If I go to 'Storage -> Storage
>> Domains -> Manage Domain', all driver options are uneditable except for
>> 'Name'.
>>
>> Then I thought that maybe I can't edit the driver options while a disk
>> still exists, so I tried removing the one disk in this domain.  But even
>> after multiple attempts, it still fails with:
>>
>> 2020-10-14 07:26:31,340 - cinder.volume.drivers.rbd - INFO - volume
>> volume-5419640e-445f-4b3f-a29d-b316ad031b7a no longer exists in backend
>> 2020-10-14 07:26:31,353 - cinderlib-client - ERROR - Failure occurred
>> when trying to run command 'delete_volume': (psycopg2.IntegrityError)
>> update or delete on table "volumes" violates foreign key constraint
>> "volume_attachment_volume_id_fkey" on table "volume_attachment"
>> DETAIL:  Key (id)=(5419640e-445f-4b3f-a29d-b316ad031b7a) is still
>> referenced from table "volume_attachment".
>>
>> See https://pastebin.com/KwN1Vzsp for the full log entries related to
>> this removal.
>>
>> It's not lying, the volume no longer exists in the rbd pool, but the
>> cinder database still thinks it's attached, even though I was never able
>> to get it to attach to a VM.
>>
>> What are my options for cleaning up this stale disk in the cinder database?
>>
>> How can I update the driver options in my storage domain (deleting and
>> recreating the domain is acceptable, if possible)?
>>
>> --Mike
>>
> 
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/3WIVWLKS347QKA2GMIGF4ZEMLFBJQ7SU/


[ovirt-users] Ovirt Node 4.4.2 install Odroid-H2 64GB eMMC

2020-10-14 Thread franklin . shearer
Hi All,

I'm trying to install oVirt Node 4.4.2 on an eMMC card, but when I get to the
storage configuration of the installer, it doesn't save the settings I have
chosen (automatic configuration) and displays "failed to save storage
configuration". I have deleted all partitions on the card before trying to
install and I still get the same error. The only way I can get it to go is to
select manual configuration with LVM thin provisioning and "automatically
create". Am I doing something wrong? I can install CentOS 8 with no issues on
this machine, but not oVirt Node 4.4.2.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/XYCGL3TLQKKAUXXO7LRCSBK6QLCDJTRE/


[ovirt-users] Re: Latest ManagedBlockDevice documentation

2020-10-14 Thread Benny Zlotnik
Sorry, accidentally hit send prematurely. The database table is
driver_options; the options are stored as JSON in its driver_options column.
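On the engine machine that would mean something along these lines (purely an
illustrative, untested sketch: "engine" is the default database name, but the
column and key names should be verified against the actual schema, and a
backup taken before changing anything):

  sudo -u postgres psql engine
  -- inspect the stored options for the Managed Block Storage domain
  SELECT * FROM driver_options;
  -- then fix the JSON with an UPDATE on the relevant row, e.g. (the column
  -- names here are guesses):
  -- UPDATE driver_options SET driver_options = '<corrected json>' WHERE ...;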

On Wed, Oct 14, 2020 at 5:32 PM Benny Zlotnik  wrote:
>
> Did you attempt to start a VM with this disk and it failed, or you
> didn't try at all? If it's the latter then the error is strange...
> If it's the former, there is a known issue with multipath at the
> moment, see [1] for a workaround; you might have issues with
> detaching volumes later, because multipath grabs the rbd devices,
> which makes `rbd unmap` fail. It will be fixed soon by automatically
> blacklisting rbd in the multipath configuration.
>
> Regarding editing, you can submit an RFE for this, but it is currently
> not possible. The options are indeed to either recreate the storage
> domain or edit the database table
>
>
> [1] https://bugzilla.redhat.com/show_bug.cgi?id=1881832#c8
>
>
>
>
> On Wed, Oct 14, 2020 at 3:40 PM Michael Thomas  wrote:
> >
> > On 10/14/20 3:30 AM, Benny Zlotnik wrote:
> > > Jeff is right, it's a limitation of kernel rbd, the recommendation is
> > > to add `rbd default features = 3` to the configuration. I think there
> > > are plans to support rbd-nbd in cinderlib which would allow using
> > > additional features, but I'm not aware of anything concrete.
> > >
> > > Additionally, the path for the cinderlib log is
> > > /var/log/ovirt-engine/cinderlib/cinderlib.log, the error in this case
> > > would appear in the vdsm.log on the relevant host, and would look
> > > something like "RBD image feature set mismatch. You can disable
> > > features unsupported by the kernel with 'rbd feature disable'"
> >
> > Thanks for the pointer!  Indeed,
> > /var/log/ovirt-engine/cinderlib/cinderlib.log has the errors that I was
> > looking for.  In this case, it was a user error entering the RBDDriver
> > options:
> >
> >
> > 2020-10-13 15:15:25,640 - cinderlib.cinderlib - WARNING - Unknown config
> > option use_multipath_for_xfer
> >
> > ...it should have been 'use_multipath_for_image_xfer'.
> >
> > Now my attempts to fix it are failing...  If I go to 'Storage -> Storage
> > Domains -> Manage Domain', all driver options are uneditable except for
> > 'Name'.
> >
> > Then I thought that maybe I can't edit the driver options while a disk
> > still exists, so I tried removing the one disk in this domain.  But even
> > after multiple attempts, it still fails with:
> >
> > 2020-10-14 07:26:31,340 - cinder.volume.drivers.rbd - INFO - volume
> > volume-5419640e-445f-4b3f-a29d-b316ad031b7a no longer exists in backend
> > 2020-10-14 07:26:31,353 - cinderlib-client - ERROR - Failure occurred
> > when trying to run command 'delete_volume': (psycopg2.IntegrityError)
> > update or delete on table "volumes" violates foreign key constraint
> > "volume_attachment_volume_id_fkey" on table "volume_attachment"
> > DETAIL:  Key (id)=(5419640e-445f-4b3f-a29d-b316ad031b7a) is still
> > referenced from table "volume_attachment".
> >
> > See https://pastebin.com/KwN1Vzsp for the full log entries related to
> > this removal.
> >
> > It's not lying, the volume no longer exists in the rbd pool, but the
> > cinder database still thinks it's attached, even though I was never able
> > to get it to attach to a VM.
> >
> > What are my options for cleaning up this stale disk in the cinder database?
> >
> > How can I update the driver options in my storage domain (deleting and
> > recreating the domain is acceptable, if possible)?
> >
> > --Mike
> >
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/BQPXZYCE5GWKSHDN5FU7I5L4VP75QPEJ/


[ovirt-users] Re: Latest ManagedBlockDevice documentation

2020-10-14 Thread Benny Zlotnik
Did you attempt to start a VM with this disk and it failed, or you
didn't try at all? If it's the latter then the error is strange...
If it's the former, there is a known issue with multipath at the
moment, see [1] for a workaround; you might have issues with
detaching volumes later, because multipath grabs the rbd devices,
which makes `rbd unmap` fail. It will be fixed soon by automatically
blacklisting rbd in the multipath configuration.
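Until that lands, the workaround in [1] is essentially to blacklist the rbd
devices in multipath yourself; roughly (the drop-in path is an assumption,
see the BZ comment for the authoritative steps):

  # e.g. in /etc/multipath/conf.d/rbd.conf
  blacklist {
      devnode "^rbd[0-9]*"
  }

  # then reload the multipath configuration
  systemctl reload multipathd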

Regarding editing, you can submit an RFE for this, but it is currently
not possible. The options are indeed to either recreate the storage
domain or edit the database table


[1] https://bugzilla.redhat.com/show_bug.cgi?id=1881832#c8




On Wed, Oct 14, 2020 at 3:40 PM Michael Thomas  wrote:
>
> On 10/14/20 3:30 AM, Benny Zlotnik wrote:
> > Jeff is right, it's a limitation of kernel rbd, the recommendation is
> > to add `rbd default features = 3` to the configuration. I think there
> > are plans to support rbd-nbd in cinderlib which would allow using
> > additional features, but I'm not aware of anything concrete.
> >
> > Additionally, the path for the cinderlib log is
> > /var/log/ovirt-engine/cinderlib/cinderlib.log, the error in this case
> > would appear in the vdsm.log on the relevant host, and would look
> > something like "RBD image feature set mismatch. You can disable
> > features unsupported by the kernel with 'rbd feature disable'"
>
> Thanks for the pointer!  Indeed,
> /var/log/ovirt-engine/cinderlib/cinderlib.log has the errors that I was
> looking for.  In this case, it was a user error entering the RBDDriver
> options:
>
>
> 2020-10-13 15:15:25,640 - cinderlib.cinderlib - WARNING - Unknown config
> option use_multipath_for_xfer
>
> ...it should have been 'use_multipath_for_image_xfer'.
>
> Now my attempts to fix it are failing...  If I go to 'Storage -> Storage
> Domains -> Manage Domain', all driver options are uneditable except for
> 'Name'.
>
> Then I thought that maybe I can't edit the driver options while a disk
> still exists, so I tried removing the one disk in this domain.  But even
> after multiple attempts, it still fails with:
>
> 2020-10-14 07:26:31,340 - cinder.volume.drivers.rbd - INFO - volume
> volume-5419640e-445f-4b3f-a29d-b316ad031b7a no longer exists in backend
> 2020-10-14 07:26:31,353 - cinderlib-client - ERROR - Failure occurred
> when trying to run command 'delete_volume': (psycopg2.IntegrityError)
> update or delete on table "volumes" violates foreign key constraint
> "volume_attachment_volume_id_fkey" on table "volume_attachment"
> DETAIL:  Key (id)=(5419640e-445f-4b3f-a29d-b316ad031b7a) is still
> referenced from table "volume_attachment".
>
> See https://pastebin.com/KwN1Vzsp for the full log entries related to
> this removal.
>
> It's not lying, the volume no longer exists in the rbd pool, but the
> cinder database still thinks it's attached, even though I was never able
> to get it to attach to a VM.
>
> What are my options for cleaning up this stale disk in the cinder database?
>
> How can I update the driver options in my storage domain (deleting and
> recreating the domain is acceptable, if possible)?
>
> --Mike
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/Q5IC4SDS5AS64RIOKHBFNQDWCOBKKDJW/


[ovirt-users] Re: Latest ManagedBlockDevice documentation

2020-10-14 Thread Michael Thomas

On 10/14/20 3:30 AM, Benny Zlotnik wrote:

Jeff is right, it's a limitation of kernel rbd, the recommendation is
to add `rbd default features = 3` to the configuration. I think there
are plans to support rbd-nbd in cinderlib which would allow using
additional features, but I'm not aware of anything concrete.

Additionally, the path for the cinderlib log is
/var/log/ovirt-engine/cinderlib/cinderlib.log, the error in this case
would appear in the vdsm.log on the relevant host, and would look
something like "RBD image feature set mismatch. You can disable
features unsupported by the kernel with 'rbd feature disable'"


Thanks for the pointer!  Indeed, 
/var/log/ovirt-engine/cinderlib/cinderlib.log has the errors that I was 
looking for.  In this case, it was a user error entering the RBDDriver 
options:



2020-10-13 15:15:25,640 - cinderlib.cinderlib - WARNING - Unknown config 
option use_multipath_for_xfer


...it should have been 'use_multipath_for_image_xfer'.

Now my attempts to fix it are failing...  If I go to 'Storage -> Storage 
Domains -> Manage Domain', all driver options are uneditable except for
'Name'.


Then I thought that maybe I can't edit the driver options while a disk 
still exists, so I tried removing the one disk in this domain.  But even 
after multiple attempts, it still fails with:


2020-10-14 07:26:31,340 - cinder.volume.drivers.rbd - INFO - volume 
volume-5419640e-445f-4b3f-a29d-b316ad031b7a no longer exists in backend
2020-10-14 07:26:31,353 - cinderlib-client - ERROR - Failure occurred 
when trying to run command 'delete_volume': (psycopg2.IntegrityError) 
update or delete on table "volumes" violates foreign key constraint 
"volume_attachment_volume_id_fkey" on table "volume_attachment"
DETAIL:  Key (id)=(5419640e-445f-4b3f-a29d-b316ad031b7a) is still 
referenced from table "volume_attachment".


See https://pastebin.com/KwN1Vzsp for the full log entries related to 
this removal.


It's not lying, the volume no longer exists in the rbd pool, but the 
cinder database still thinks it's attached, even though I was never able 
to get it to attach to a VM.


What are my options for cleaning up this stale disk in the cinder database?
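I'm guessing that, worst case, I could delete the stale rows by hand; judging
from the foreign key error above, it would be something like the following
against the cinderlib database (after taking a backup), though I'd rather not
poke the database blindly:

  DELETE FROM volume_attachment WHERE volume_id = '5419640e-445f-4b3f-a29d-b316ad031b7a';
  DELETE FROM volumes WHERE id = '5419640e-445f-4b3f-a29d-b316ad031b7a';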

How can I update the driver options in my storage domain (deleting and 
recreating the domain is acceptable, if possible)?


--Mike
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/XTULDZ4DON6E4KMXQ5NVZQIZTRK4CZPQ/


[ovirt-users] Re: Number of data bricks per oVirt HCI host

2020-10-14 Thread C Williams
Thanks Strahil !

More questions may follow.

Thanks Again For Your Help !

On Wed, Oct 14, 2020 at 12:29 AM Strahil Nikolov 
wrote:

> Imagine you have a host with 60 spinning disks -> I would recommend
> splitting it into 10/12-disk groups, and these groups will represent
> several bricks (6/5).
>
> Keep in mind that when you start using many bricks (some articles say
> hundreds, but no exact number was given), you should consider brick
> multiplexing (cluster.brick-multiplex).
>
> So, you can use as many bricks as you want, but each brick requires CPU time
> (a separate thread), a TCP port number and memory.
>
> In my setup I use multiple bricks in order to spread the load via LACP
> over several small (1GBE) NICs.
>
>
> The only "limitation" is to have your data on separate hosts , so when you
> create the volume it is extremely advisable that you follow this model:
>
> hostA:/path/to/brick
> hostB:/path/to/brick
> hostC:/path/to/brick
> hostA:/path/to/brick2
> hostB:/path/to/brick2
> hostC:/path/to/brick2
>
> In JBOD mode, Red Hat supports only 'replica 3' volumes - just to keep that
> in mind.
>
> From my perspective, JBOD is suitable for NVMe/SSD drives, while spinning
> disks should be in a RAID of some type (maybe RAID10 for performance).
>
>
> Best Regards,
> Strahil Nikolov
>
>
>
>
>
>
> On Wednesday, 14 October 2020 at 06:34:17 GMT+3, C Williams <
> cwilliams3...@gmail.com> wrote:
>
>
>
>
>
> Hello,
>
> I am getting some questions from others on my team.
>
> I have some hosts that could provide up to 6 JBOD disks for oVirt data
> (not arbiter) bricks
>
> Would this be workable / advisable?  I'm under the impression there
> should not be more than 1 data brick per HCI host.
>
> Please correct me if I'm wrong.
>
> Thank You For Your Help !
>
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/OGZISNFLEG3GJPDQGWTT7TWRPPAMLPFQ/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/XA2CKVS7OMFJEKOXXCZB4UTXVZMEL7TB/


[ovirt-users] Re: ovirt question

2020-10-14 Thread Tony Brian Albers
Hi,

Read the docs at: https://ovirt.org/documentation/

For installation, in my experience, the most stable solution is to use a 
dedicated VM or physical host for running the oVirt engine (management
system):
https://ovirt.org/documentation/installing_ovirt_as_a_standalone_manager_with_local_databases/

Read the instructions carefully all the way through before you start.

Have fun!

/tony

On 10/13/20 11:32 AM, 陳星宇 wrote:
> Dear Sir:
> 
> Can I ask you questions about ovirt?
> 
> How should this system be installed?
> 
> Louis Chen 陳星宇
> 
> Tel : +886-37-586-896  Ext.1337
> Fax : +886-37-587-699
> 
> louissy_c...@phison.com 
> www.phison.com
> 
> 群聯電子股份有限公司
> Phison Electronics Corps.
> 苗栗縣350竹南鎮群義路1號
> 
> 
> 
> This message and any attachments are confidential and may be legally 
> privileged. Any unauthorized review, use or distribution by anyone other 
> than the intended recipient is strictly prohibited. If you are not the 
> intended recipient, please immediately notify the sender, completely 
> delete the message and any attachments, and destroy all copies. Your 
> cooperation will be highly appreciated.
> 
> 
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/5QWXPE76KMIMXPOMFP5P37FVO3MOZ4IJ/
> 


-- 
Tony Albers - Systems Architect - IT Development Royal Danish Library, 
Victor Albecks Vej 1, 8000 Aarhus C, Denmark
Tel: +45 2566 2383 - CVR/SE: 2898 8842 - EAN: 5798000792142
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/RXTURJNUYIURAZYM3YNMZZV3TUVSOH7S/


[ovirt-users] Re: ovirt question

2020-10-14 Thread Strahil Nikolov via Users
Hi,

I would start with 
https://www.ovirt.org/documentation/installing_ovirt_as_a_self-hosted_engine_using_the_cockpit_web_interface/
 .
It might have some issues as 4.4 is quite fresh and dynamic, but you just need 
to ping the community for help over the e-mail.

Best Regards,
Strahil Nikolov






On Wednesday, 14 October 2020 at 10:02:13 GMT+3, 陳星宇 wrote:





  


Dear Sir:

 

Can I ask you questions about ovirt?

 

How should this system be installed?

Louis Chen 陳星宇

Tel : +886-37-586-896  Ext.1337
Fax : +886-37-587-699

louissy_c...@phison.com
www.phison.com

群聯電子股份有限公司
Phison Electronics Corps.
苗栗縣350竹南鎮群義路1號

 



This message and any attachments are confidential and may be legally 
privileged. Any unauthorized review, use or distribution by anyone other than 
the intended recipient is strictly prohibited. If you are not the intended 
recipient, please immediately notify the sender, completely delete the message 
and any attachments, and destroy all copies. Your cooperation will be highly 
appreciated.


___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/5QWXPE76KMIMXPOMFP5P37FVO3MOZ4IJ/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/2UC5RHJFXCI6AJSVH3F4V5LYFU2C2LYL/


[ovirt-users] Re: Q: Hybrid GlusterFS / local storage setup?

2020-10-14 Thread Strahil Nikolov via Users
Hi Gilboa,

I think that storage domains need to be accessible from all nodes in the
cluster - and as yours will be using local storage and yet be in a 2-node
cluster, that will be hard.

My guess is that you can try the following cheat:

 Create a single brick gluster volume and do some modifications:
- volume type 'replica 1'
- cluster.choose-local should be set to yes once you apply the virt group of
settings (because the group sets it to no)
- Set your VMs up in such a way that they don't fail over (a rough sketch of
the gluster commands follows below)
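The gluster side of that could look roughly like this (the volume name and
brick path are made up, and this is untested speculation, so double-check
before relying on it):

  gluster volume create local_fast ovirt1:/local_brick/brick
  gluster volume set local_fast group virt
  gluster volume set local_fast storage.owner-uid 36
  gluster volume set local_fast storage.owner-gid 36
  gluster volume set local_fast cluster.choose-local yes
  gluster volume start local_fast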

Of course, creation of new VMs will happen from the host with the SPM flag, but 
the good thing is that you can change the host with that flag. So if your 
gluster has a brick ovirt1:/local_brick/brick , you can set the host ovirt1 to 
'SPM' and then create your VM.

Of course the above is just pure speculation as I picked my setup to be 
'replica 3 arbiter 1' and trade storage for live migration.

Best Regards,
Strahil Nikolov






On Wednesday, 14 October 2020 at 12:42:44 GMT+3, Gilboa Davara
 wrote:





Hello all,

I'm thinking about converting a couple of old dual Xeon V2
workstations into (yet another) oVirt setup.
However, the use case for this cluster is somewhat different:
While I do want most of the VMs to be highly available (via a 2+1 GFS
storage domain), I'd also want to pin at least one "desktop" VM to each
host (possibly with vGPU) and let this VM access the local storage
directly in order to get near bare-metal performance.

Now, I am aware that I can simply share an LVM LV over NFS / localhost
and pin a specific VM to each specific host, and the performance will
be acceptable, but I seem to remember that there's a POSIX-FS storage
domain that at least in theory should be able to give me per-host
private storage.

A. Am I barking at the wrong tree here? Is this setup even possible?
B. If it is even possible, any documentation / pointers on setting up
per-host private storage?

I should mention that these workstations are quite beefy (64-128GB
RAM, large MDRAID, SSD, etc) so I can spare memory / storage space (I
can even split the local storage and GFS to different arrays).

- Gilboa
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/VT24EBHC4OF7VXO7BGY3C2IT64M2T5FD/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/XJ44CLBZ3BIVVDRONWS5NGIZ2RXGXKP7/


[ovirt-users] Re: How to make oVirt + GlusterFS bulletproof

2020-10-14 Thread Strahil Nikolov via Users
strict-o-direct just allows the app to decide whether direct I/O is needed, and
yes, that could be a reason for your data loss.

The good thing is that the feature is part of the virt group, and there is an
"Optimize for Virt" button somewhere in the UI. Yet, I prefer the manual
approach of building gluster volumes, as the UI's primary focus is oVirt (quite
natural, right?).
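
From the CLI that is roughly (the volume name is a placeholder; verify the
resulting settings afterwards with 'gluster volume get'):

  gluster volume set myvol group virt
  gluster volume get myvol performance.strict-o-direct
  gluster volume set myvol performance.strict-o-direct on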


Best Regards,
Strahil Nikolov 






On Wednesday, 14 October 2020 at 12:30:42 GMT+3, Jarosław Prokopowski
 wrote:





Thanks. I will get rid of multipath.

I did not set performance.strict-o-direct specifically, only changed 
permissions of the volume to vdsm.kvm and applied the virt group.

Now I see performance.strict-o-direct was off. Could it be the reason for the
data loss?
Direct I/O is enabled in oVirt by gluster mount option "-o 
direct-io-mode=enable" right?

Below is full list of the volume options.


Option                                  Value                                  
--                                  -                                  
cluster.lookup-unhashed                on                                      
cluster.lookup-optimize                on                                      
cluster.min-free-disk                  10%                                    
cluster.min-free-inodes                5%                                      
cluster.rebalance-stats                off                                    
cluster.subvols-per-directory          (null)                                  
cluster.readdir-optimize                off                                    
cluster.rsync-hash-regex                (null)                                  
cluster.extra-hash-regex                (null)                                  
cluster.dht-xattr-name                  trusted.glusterfs.dht                  
cluster.randomize-hash-range-by-gfid    off                                    
cluster.rebal-throttle                  normal                                  
cluster.lock-migration                  off                                    
cluster.force-migration                off                                    
cluster.local-volume-name              (null)                                  
cluster.weighted-rebalance              on                                      
cluster.switch-pattern                  (null)                                  
cluster.entry-change-log                on                                      
cluster.read-subvolume                  (null)                                  
cluster.read-subvolume-index            -1                                      
cluster.read-hash-mode                  1                                      
cluster.background-self-heal-count      8                                      
cluster.metadata-self-heal              off                                    
cluster.data-self-heal                  off                                    
cluster.entry-self-heal                off                                    
cluster.self-heal-daemon                on                                      
cluster.heal-timeout                    600                                    
cluster.self-heal-window-size          1                                      
cluster.data-change-log                on                                      
cluster.metadata-change-log            on                                      
cluster.data-self-heal-algorithm        full                                    
cluster.eager-lock                      enable                                  
disperse.eager-lock                    on                                      
disperse.other-eager-lock              on                                      
disperse.eager-lock-timeout            1                                      
disperse.other-eager-lock-timeout      1                                      
cluster.quorum-type                    auto                                    
cluster.quorum-count                    (null)                                  
cluster.choose-local                    off                                    
cluster.self-heal-readdir-size          1KB                                    
cluster.post-op-delay-secs              1                                      
cluster.ensure-durability              on                                      
cluster.consistent-metadata            no                                      
cluster.heal-wait-queue-length          128                                    
cluster.favorite-child-policy          none                                    
cluster.full-lock                      yes                                    
diagnostics.latency-measurement        off                                    
diagnostics.dump-fd-stats              off                                    
diagnostics.count-fop-hits              off                                    

[ovirt-users] Re: Q: Hybrid GlusterFS / local storage setup?

2020-10-14 Thread Nir Soffer
On Wed, Oct 14, 2020, 12:42 Gilboa Davara  wrote:

> Hello all,
>
> I'm thinking about converting a couple of old dual Xeon V2
> workstations into (yet another) oVirt setup.
> However, the use case for this cluster is somewhat different:
> While I do want most of the VMs to be highly available (Via 2+1 GFS
>

GlusterFS?

storage domain), I'd also want to pin at least one "desktop" VM to each
> host (possibly with vGPU) and let this VM access the local storage
> directly in order to get near bare-metal performance.
>

This makes sense.

Now, I am aware that I can simply share an LVM LV over NFS / localhost
> and pin a specific VM to each specific host, and the performance will
> be acceptable,


This will not be as fast as a local device passed through to the VM.

It will also be problematic, since all hosts will mount, monitor, and
maintain leases on this NFS storage, since it is considered shared
storage.

If a host fails to access this NFS storage, that host will be
deactivated and all its VMs will migrate to other hosts. This migration
storm can cause a lot of trouble.

In the worst case, if no other host can access this NFS storage, all other
hosts will be deactivated.


I seem to remember that there's a POSIX-FS storage
> domain that at least in theory should be able to give me per-host
> private storage.
>

This is the same as NFS (internally this is the same code). It will work only
if you can mount the same device/export on all hosts. This is even worse
than NFS.


> A. Am I barking at the wrong tree here? Is this setup even possible?
>

This is possible using host device.

You can attach a host device to a VM. This will pin the VM to the host, and
give best performance.

It may not be flexible enough since you need to attach an entire device. Maybe
it can work with LVM logical volumes.
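
If the LVM route does work, carving out a dedicated logical volume on the host
would be the easy part; a minimal sketch, assuming a volume group named
local_vg already exists (how the LV is then exposed to the VM is exactly the
open question here):

  lvcreate -L 200G -n desktop_vm_disk local_vg
  lvs local_vg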

Nir

B. If it is even possible, any documentation / pointers on setting up
> per-host private storage?
>
> I should mention that these workstations are quite beefy (64-128GB
> RAM, large MDRAID, SSD, etc) so I can spare memory / storage space (I
> can even split the local storage and GFS to different arrays).
>
> - Gilboa
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/VT24EBHC4OF7VXO7BGY3C2IT64M2T5FD/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/7S7KA474CGP7AC342TLFNN62WH2DLH6N/


[ovirt-users] Q: Hybrid GlusterFS / local storage setup?

2020-10-14 Thread Gilboa Davara
Hello all,

I'm thinking about converting a couple of old dual Xeon V2
workstations into (yet another) oVirt setup.
However, the use case for this cluster is somewhat different:
While I do want most of the VMs to be highly available (via a 2+1 GFS
storage domain), I'd also want to pin at least one "desktop" VM to each
host (possibly with vGPU) and let this VM access the local storage
directly in order to get near bare-metal performance.

Now, I am aware that I can simply share an LVM LV over NFS / localhost
and pin a specific VM to each specific host, and the performance will
be acceptable, but I seem to remember that there's a POSIX-FS storage
domain that at least in theory should be able to give me per-host
private storage.

A. Am I barking at the wrong tree here? Is this setup even possible?
B. If it is even possible, any documentation / pointers on setting up
per-host private storage?

I should mention that these workstations are quite beefy (64-128GB
RAM, large MDRAID, SSD, etc) so I can spare memory / storage space (I
can even split the local storage and GFS to different arrays).

- Gilboa
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/VT24EBHC4OF7VXO7BGY3C2IT64M2T5FD/


[ovirt-users] Re: How to make oVirt + GlusterFS bulletproof

2020-10-14 Thread Jarosław Prokopowski
Thanks. I will get rid of multipath.

I did not set performance.strict-o-direct specifically, only changed 
permissions of the volume to vdsm.kvm and applied the virt group.

Now I see performance.strict-o-direct was off. Could it be the reason for the
data loss?
Direct I/O is enabled in oVirt by gluster mount option "-o 
direct-io-mode=enable" right?

Below is full list of the volume options.


Option  Value   
--  -   
cluster.lookup-unhashed on  
cluster.lookup-optimize on  
cluster.min-free-disk   10% 
cluster.min-free-inodes 5%  
cluster.rebalance-stats off 
cluster.subvols-per-directory   (null)  
cluster.readdir-optimizeoff 
cluster.rsync-hash-regex(null)  
cluster.extra-hash-regex(null)  
cluster.dht-xattr-name  trusted.glusterfs.dht   
cluster.randomize-hash-range-by-gfidoff 
cluster.rebal-throttle  normal  
cluster.lock-migration  off 
cluster.force-migration off 
cluster.local-volume-name   (null)  
cluster.weighted-rebalance  on  
cluster.switch-pattern  (null)  
cluster.entry-change-logon  
cluster.read-subvolume  (null)  
cluster.read-subvolume-index-1  
cluster.read-hash-mode  1   
cluster.background-self-heal-count  8   
cluster.metadata-self-heal  off 
cluster.data-self-heal  off 
cluster.entry-self-heal off 
cluster.self-heal-daemonon  
cluster.heal-timeout600 
cluster.self-heal-window-size   1   
cluster.data-change-log on  
cluster.metadata-change-log on  
cluster.data-self-heal-algorithmfull
cluster.eager-lock  enable  
disperse.eager-lock on  
disperse.other-eager-lock   on  
disperse.eager-lock-timeout 1   
disperse.other-eager-lock-timeout   1   
cluster.quorum-type auto
cluster.quorum-count(null)  
cluster.choose-localoff 
cluster.self-heal-readdir-size  1KB 
cluster.post-op-delay-secs  1   
cluster.ensure-durability   on  
cluster.consistent-metadata no  
cluster.heal-wait-queue-length  128 
cluster.favorite-child-policy   none
cluster.full-lock   yes 
diagnostics.latency-measurement off 
diagnostics.dump-fd-stats   off 
diagnostics.count-fop-hits  off 
diagnostics.brick-log-level INFO
diagnostics.client-log-levelINFO
diagnostics.brick-sys-log-level CRITICAL
diagnostics.client-sys-log-levelCRITICAL
diagnostics.brick-logger(null)  
diagnostics.client-logger   (null) 

[ovirt-users] Re: Maximum domains per data center

2020-10-14 Thread Eyal Shenitzky
Hi Tommaso,

As it says in the document, the maximum number of storage domains per
data-center is 50.

On Tue, 13 Oct 2020 at 17:54, Tommaso - Shellrent via Users 
wrote:

> Hi to all.
>
> Can someone confirm to me the value of max domains per data center on
> ovirt 4.4 ?
>
> We found only this for RHEV: https://access.redhat.com/articles/906543
>
> Regards,
> Tommaso.
> --
> --
> [image: Shellrent - Il primo hosting italiano Security First]
> *Tommaso De Marchi*
> *COO - Chief Operating Officer*
> Shellrent Srl
> Via dell'Edilizia, 19 - 36100 Vicenza
> Tel. 0444321155 <+390444321155> | Fax 04441492177
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/AQFGCDRW2FF7EJRD77OZ6AEN4VXRWLIN/
>


-- 
Regards,
Eyal Shenitzky
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/RUYMYY4CUKWQCY3Y432CJJTJ4IGRAU6J/


[ovirt-users] Re: 20+ Fresh Installs Failing in 20+ days [newbie & frustrated]

2020-10-14 Thread Gilboa Davara
On Mon, Oct 12, 2020 at 2:41 PM Edward Berger  wrote:
>
> As an ovirt user my first reaction reading your message was
> "that is a ridiculously small system to be trying ovirt self hosted engine 
> on."
>
> My minimum recommendation is 48GB of RAM dual xeon, since the hosted 
> ovirt-engine installation by default
> wants 16GB/4vCPU.  I would use a basic KVM/virt-manager install there instead.
>
> You'll have to provide logs to get more help, but I think you're trying to do 
> the wrong thing given the hardware spec.
>
>

Edward,

(OT answer)
Actually, oVirt is far more versatile than you seem to think.
I've managed to successfully install oVirt on anything from a
4-core "desktop" Xeon E3 with 16GB RAM and a 4 x 1TB HDD MDRAID, using both
localhost NFS and GFS (which is being used to test updates), up to
multi-node GlusterFS clusters with >1TB RAM per node and
way too many cores.
Granted, when using a desktop machine one must be _very_ careful when
configuring the hosted engine, but once you get it installed, it's
quite resilient.

Oh, and when trying to deploy the hosted engine on a low-end machine,
I'd strongly advise disabling cockpit (to conserve RAM) and using the
console-based deployment per [1], roughly along these lines:
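  # stop cockpit and keep it from coming back (frees a bit of RAM)
  systemctl disable --now cockpit.socket cockpit.service

  # run the CLI deployment instead of the web wizard
  hosted-engine --deploy

(Treat this as a sketch; exact unit names may vary slightly between versions.)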

- Gilboa


[1] 
https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.2/html/self-hosted_engine_guide/deploying_the_self-hosted_engine_using_the_cli_deploy
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/D77VL7TMR3LJ6Q3RBIK4NYMOPHQGNCTG/


[ovirt-users] Re: Latest ManagedBlockDevice documentation

2020-10-14 Thread Benny Zlotnik
Jeff is right, it's a limitation of kernel rbd, the recommendation is
to add `rbd default features = 3` to the configuration. I think there
are plans to support rbd-nbd in cinderlib which would allow using
additional features, but I'm not aware of anything concrete.

Additionally, the path for the cinderlib log is
/var/log/ovirt-engine/cinderlib/cinderlib.log, the error in this case
would appear in the vdsm.log on the relevant host, and would look
something like "RBD image feature set mismatch. You can disable
features unsupported by the kernel with 'rbd feature disable'"
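
In practice that means something like the following on the client side (a
sketch only; 3 = layering + striping, and for images that already exist the
extra features can instead be turned off with `rbd feature disable`):

  # ceph.conf on the hosts using the images
  [global]
  rbd default features = 3

  # or, for an existing image created with unsupported features:
  rbd feature disable <pool>/<image> object-map fast-diff deep-flatten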


On Wed, Oct 14, 2020 at 12:01 AM Michael Thomas  wrote:
>
> After getting past the proxy issue, I was finally able to run the
> engine-setup --reconfigure-optional-components.  The new
> ManagedBlockStorage storage domain exists, and I was able to create a
> disk.  However, I am unable to attach the disk to a running VM.
>
> The engine.log shows the following, with a reference to a possible
> cinderlib error ("cinderlib execution failed"):
>
> 2020-10-13 15:15:23,508-05 INFO
> [org.ovirt.engine.core.bll.UpdateVmCommand] (default task-13)
> [c73386d0-a713-4c37-bc9b-e7c4f9083f78] Lock Acquired to object
> 'EngineLock:{exclusiveLocks='[grafana=VM_NAME]',
> sharedLocks='[5676d441-660e-4d9f-a586-e53ff0ea054b=VM]'}'
> 2020-10-13 15:15:23,522-05 INFO
> [org.ovirt.engine.core.bll.UpdateVmCommand] (default task-13)
> [c73386d0-a713-4c37-bc9b-e7c4f9083f78] Running command: UpdateVmCommand
> internal: false. Entities affected :  ID:
> 5676d441-660e-4d9f-a586-e53ff0ea054b Type: VMAction group
> EDIT_VM_PROPERTIES with role type USER
> 2020-10-13 15:15:23,536-05 INFO
> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> (default task-13) [c73386d0-a713-4c37-bc9b-e7c4f9083f78] EVENT_ID:
> USER_UPDATE_VM(35), VM grafana configuration was updated by
> michael.thomas@internal-authz.
> 2020-10-13 15:15:23,539-05 INFO
> [org.ovirt.engine.core.bll.UpdateVmCommand] (default task-13)
> [c73386d0-a713-4c37-bc9b-e7c4f9083f78] Lock freed to object
> 'EngineLock:{exclusiveLocks='[grafana=VM_NAME]',
> sharedLocks='[5676d441-660e-4d9f-a586-e53ff0ea054b=VM]'}'
> 2020-10-13 15:15:24,129-05 INFO
> [org.ovirt.engine.core.bll.storage.disk.AttachDiskToVmCommand] (default
> task-13) [f8829338-b040-46d0-a838-3cf28869637c] Lock Acquired to object
> 'EngineLock:{exclusiveLocks='[5419640e-445f-4b3f-a29d-b316ad031b7a=DISK]',
> sharedLocks=''}'
> 2020-10-13 15:15:24,147-05 INFO
> [org.ovirt.engine.core.bll.storage.disk.AttachDiskToVmCommand] (default
> task-13) [f8829338-b040-46d0-a838-3cf28869637c] Running command:
> AttachDiskToVmCommand internal: false. Entities affected :  ID:
> 5676d441-660e-4d9f-a586-e53ff0ea054b Type: VMAction group
> CONFIGURE_VM_STORAGE with role type USER,  ID:
> 5419640e-445f-4b3f-a29d-b316ad031b7a Type: DiskAction group ATTACH_DISK
> with role type USER
> 2020-10-13 15:15:24,152-05 INFO
> [org.ovirt.engine.core.bll.storage.disk.managedblock.ConnectManagedBlockStorageDeviceCommand]
> (default task-13) [7cb262cc] Running command:
> ConnectManagedBlockStorageDeviceCommand internal: true.
> 2020-10-13 15:15:26,006-05 ERROR
> [org.ovirt.engine.core.common.utils.cinderlib.CinderlibExecutor]
> (default task-13) [7cb262cc] cinderlib execution failed:
> 2020-10-13 15:15:26,011-05 INFO
> [org.ovirt.engine.core.vdsbroker.vdsbroker.HotPlugDiskVDSCommand]
> (default task-13) [7cb262cc] START, HotPlugDiskVDSCommand(HostName =
> ovirt4-mgmt.ldas.ligo-la.caltech.edu,
> HotPlugDiskVDSParameters:{hostId='61da4cdf-638b-4cbd-9921-5be820998d31',
> vmId='5676d441-660e-4d9f-a586-e53ff0ea054b',
> diskId='5419640e-445f-4b3f-a29d-b316ad031b7a'}), log id: 660ebc9e
> 2020-10-13 15:15:26,012-05 ERROR
> [org.ovirt.engine.core.vdsbroker.vdsbroker.HotPlugDiskVDSCommand]
> (default task-13) [7cb262cc] Failed in 'HotPlugDiskVDS' method, for vds:
> 'ovirt4-mgmt.ldas.ligo-la.caltech.edu'; host:
> 'ovirt4-mgmt.ldas.ligo-la.caltech.edu': null
> 2020-10-13 15:15:26,012-05 ERROR
> [org.ovirt.engine.core.vdsbroker.vdsbroker.HotPlugDiskVDSCommand]
> (default task-13) [7cb262cc] Command 'HotPlugDiskVDSCommand(HostName =
> ovirt4-mgmt.ldas.ligo-la.caltech.edu,
> HotPlugDiskVDSParameters:{hostId='61da4cdf-638b-4cbd-9921-5be820998d31',
> vmId='5676d441-660e-4d9f-a586-e53ff0ea054b',
> diskId='5419640e-445f-4b3f-a29d-b316ad031b7a'})' execution failed: null
> 2020-10-13 15:15:26,012-05 INFO
> [org.ovirt.engine.core.vdsbroker.vdsbroker.HotPlugDiskVDSCommand]
> (default task-13) [7cb262cc] FINISH, HotPlugDiskVDSCommand, return: ,
> log id: 660ebc9e
> 2020-10-13 15:15:26,012-05 ERROR
> [org.ovirt.engine.core.bll.storage.disk.AttachDiskToVmCommand] (default
> task-13) [7cb262cc] Command
> 'org.ovirt.engine.core.bll.storage.disk.AttachDiskToVmCommand' failed:
> EngineException: java.lang.NullPointerException (Failed with error
> ENGINE and code 5001)
> 2020-10-13 15:15:26,013-05 ERROR
> [org.ovirt.engine.core.bll.storage.disk.AttachDiskToVmCommand] (default
> task-13) [7cb262cc] 

[ovirt-users] Re: 20+ Fresh Installs Failing in 20+ days [newbie & frustrated]

2020-10-14 Thread Yedidyah Bar David
On Wed, Oct 14, 2020 at 11:09 AM Yedidyah Bar David  wrote:
>
> On Tue, Oct 13, 2020 at 6:07 PM  wrote:
> >
> > I hope it is not too forward of me to only send it to you.
>
> It's not about being "too forward" or not.
>
> oVirt's support is intentionally on a public mailing list, so that:
>
> 1. Everyone can (try to) help. And many do, including people outside
> of Red Hat (which funds (almost?) all of the development). And I'll use
> this opportunity to say to all those that take a part: Thank you! :-)
>
> 2. Everyone can learn, also after the case, by searching the list archive.
>
> I think almost all large/successful open-source projects follow similar
> practices, largely for the same reasons.
>
> This does have a price - people that want to get help, do have to pick
> one of:
>
> 1. Give up on (part of) their privacy/secrecy, by revealing domain names,
> IP addresses, etc. publicly

Just one (important) note: This should not include passwords, secret keys,
etc.

I obviously recommend verifying that any logs or data you share publicly
do not include these.

If log files include them, we (usually?) consider this a bug. If you find
one, please report it. Thanks!

Best regards,

>
> 2. Reproduce all issues on a test system with random domain names and
> private IP addresses
>
> 3. Replace all occurrences of sensitive/private items in the logs they
> share
>
> 4. Find someone that agrees to help them in private, see e.g.:
>
> https://www.ovirt.org/community/user-stories/users-and-providers.html
>
> Since your specific issue is installation-related, and you still have
> no data, I think it would be reasonable to assume that it's not too hard
> for you to reproduce the issue with different names/addresses and
> share the log (meaning, pick (2.) above).
>
> Best regards,
>
> >
> > Yours Sincerely,
> >
> > Henni
> >
> >
> > -Original Message-
> > From: Yedidyah Bar David 
> > Sent: Tuesday, 13 October 2020 16:42
> > To: i...@worldhostess.com
> > Cc: Edward Berger ; users 
> > Subject: [ovirt-users] Re: 20+ Fresh Installs Failing in 20+ days [newbie & 
> > frustrated]
> >
> > Please share all of /var/log/ovirt-hosted-engine-setup:
> >
> > cd /var/log/
> > tar czf ovirt-hosted-engine-setup.tar.gz ovirt-hosted-engine-setup
> >
> > Then upload ovirt-hosted-engine-setup.tar.gz to some file sharing service 
> > (e.g. dropbox, google drive etc.) and share the link.
> >
> > Thanks!
> >
> > On Tue, Oct 13, 2020 at 10:56 AM  wrote:
> > >
> > > Hope this can help. It seems it crash every time when I install.
> > --
> > Didi
> > ___
> > Users mailing list -- users@ovirt.org
> > To unsubscribe send an email to users-le...@ovirt.org Privacy Statement: 
> > https://www.ovirt.org/privacy-policy.html
> > oVirt Code of Conduct: 
> > https://www.ovirt.org/community/about/community-guidelines/
> > List Archives: 
> > https://lists.ovirt.org/archives/list/users@ovirt.org/message/6OQZM2KYHQ622XUBYCTVZLQZ4AGLKT2R/
> >
>
>
> --
> Didi



-- 
Didi
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/OGHRSFIWOOIFDECBTLOEIYRHLDJY26BS/


[ovirt-users] Re: 20+ Fresh Installs Failing in 20+ days [newbie & frustrated]

2020-10-14 Thread Yedidyah Bar David
On Tue, Oct 13, 2020 at 6:07 PM  wrote:
>
> I hope it is not too forward of me to only send it to you.

It's not about being "too forward" or not.

oVirt's support is intentionally on a public mailing list, so that:

1. Everyone can (try to) help. And many do, including people outside
of Red Hat (which funds (almost?) all of the development). And I'll use
this opportunity to say to all those that take a part: Thank you! :-)

2. Everyone can learn, also after the case, by searching the list archive.

I think almost all large/successful open-source projects follow similar
practices, largely for the same reasons.

This does have a price - people that want to get help, do have to pick
one of:

1. Give up on (part of) their privacy/secrecy, by revealing domain names,
IP addresses, etc. publicly

2. Reproduce all issues on a test system with random domain names and
private IP addresses

3. Replace all occurrences of sensitive/private items in the logs they
share

4. Find someone that agrees to help them in private, see e.g.:

https://www.ovirt.org/community/user-stories/users-and-providers.html

Since your specific issue is installation-related, and you still have
no data, I think it would be reasonable to assume that it's not too hard
for you to reproduce the issue with different names/addresses and
share the log (meaning, pick (2.) above).

Best regards,

>
> Yours Sincerely,
>
> Henni
>
>
> -Original Message-
> From: Yedidyah Bar David 
> Sent: Tuesday, 13 October 2020 16:42
> To: i...@worldhostess.com
> Cc: Edward Berger ; users 
> Subject: [ovirt-users] Re: 20+ Fresh Installs Failing in 20+ days [newbie & 
> frustrated]
>
> Please share all of /var/log/ovirt-hosted-engine-setup:
>
> cd /var/log/
> tar czf ovirt-hosted-engine-setup.tar.gz ovirt-hosted-engine-setup
>
> Then upload ovirt-hosted-engine-setup.tar.gz to some file sharing service 
> (e.g. dropbox, google drive etc.) and share the link.
>
> Thanks!
>
> On Tue, Oct 13, 2020 at 10:56 AM  wrote:
> >
> > Hope this can help. It seems it crash every time when I install.
> --
> Didi
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org Privacy Statement: 
> https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/6OQZM2KYHQ622XUBYCTVZLQZ4AGLKT2R/
>


-- 
Didi
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/4FRNLBM7HRFQ7HD4KRGNIGXZSPGHV6PN/


[ovirt-users] ovirt question

2020-10-14 Thread 陳星宇
Dear Sir:

Can I ask you questions about ovirt?

How should this system be installed?
Louis Chen 陳星宇
Tel : +886-37-586-896  Ext.1337
Fax : +886-37-587-699
louissy_c...@phison.com
www.phison.com
群聯電子股份有限公司
Phison Electronics Corps.
苗栗縣350竹南鎮群義路1號



This message and any attachments are confidential and may be legally 
privileged. Any unauthorized review, use or distribution by anyone other than 
the intended recipient is strictly prohibited. If you are not the intended 
recipient, please immediately notify the sender, completely delete the message 
and any attachments, and destroy all copies. Your cooperation will be highly 
appreciated.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/5QWXPE76KMIMXPOMFP5P37FVO3MOZ4IJ/