[ovirt-users] Re: oVirt 4.4.2.6-1.el8 (SHE). Grafana integration not configured. The link to the Monitoring portal is not displayed on the Manager home page.

2020-10-15 Thread Yedidyah Bar David
On Tue, Oct 13, 2020 at 5:22 PM Dmitry Kharlamov  wrote:
>
> Many thanks, Didi, Gianluca!
>
> Via Invite + usern...@ad.domain.name everything worked out! )))
>
> Is it possible to use the file /etc/grafana/ldap.toml to configure 
> authentication against Active Directory?

I have no idea, sorry.

I think this won't work. Grafana is not configured to use ldap
directly, but to use SSO against the engine.
If you configure the engine to use ldap, you get "indirect ldap
support" also in grafana.

If you want separate/different ldap configuration of grafana and the
engine, I think nothing prevents you from doing that - see also [1],
might be relevant/needed - but then SSO with the engine won't work
(but other SSO might work if you configure things accordingly - e.g. kerberos -
I didn't check grafana's support for that, though).

To do that, you'll need to configure ldap.toml as you mention, and
also set 'enabled = true', which might be overwritten on future
engine-setup runs (e.g. for upgrades), until [1] is fixed (and then
it's also still not clear what we'll do on upgrades from current to
post-[1]. Feel free to comment there if you have concrete ideas).

Best regards,

[1] https://bugzilla.redhat.com/show_bug.cgi?id=1835177 depends on
https://bugzilla.redhat.com/show_bug.cgi?id=1835168 depends on
https://github.com/grafana/grafana/issues/17653
-- 
Didi
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/24K67VDS5PN23HEYZKY6Q445ZGKGXGYI/


[ovirt-users] Re: problems installing standard Linux as nodes in 4.4

2020-10-15 Thread Gianluca Cecchi
On Tue, Oct 13, 2020 at 12:06 PM Gianluca Cecchi 
wrote:

> On Sat, Oct 10, 2020 at 10:13 AM Martin Perina  wrote:
>
> [snip]
>
>
>>> Can I replicate the command that the engine would run on host through
>>> ssh?
>>>
>>
>> I don't think there is an easy way to do it.
>> Let's see what else we can get from the logs...
>>
>> Martin
>>
>>
> Hi,
> I've run on engine the command
> ovirt-log-collector --no-hypervisors
> but potentially there is much sensitive information (like the dump of the
> database).
>
> Is there any particular file you are more interested in that archive I can
> share?
>
> BTW: can I put the engine in debug mode for the time I'm trying to add the
> host, so that we can see if more messages are shown?
> If so, how can I do that?
>
> Another thing I have noticed is that when the new host command from the
> web admin GUI suddenly fails, the ov200 host is nevertheless present in the
> host list, with the down icon and "Install failed" info.
> If I click on it and go in General subtab, in the section "Action Items" I
> see 3 items with exclamation mark in front of them:
>
> 1) Power Management is not configured for this Host.
> Enable Power Management
> --> OK, I skipped it
>
> 2) Host has no default route.
> ---> I don't know why it says this.
>
> [root@ov200 log]# ip route show
> default via 10.4.192.254 dev bond0.68 proto static metric 400
> 10.4.192.0/24 dev bond0.68 proto kernel scope link src 10.4.192.32 metric
> 400
> 192.168.122.0/24 dev virbr0 proto kernel scope link src 192.168.122.1
> linkdown
> [root@ov200 log]#
>
> On the still in CentOS 7 active host I have:
>
> [root@ov300 ~]# ip route show
> default via 10.4.192.254 dev ovirtmgmntZ2Z3
> 10.4.187.0/24 dev p1p2.187 proto kernel scope link src 10.4.187.100
> 10.4.192.0/24 dev ovirtmgmntZ2Z3 proto kernel scope link src 10.4.192.33
> 10.10.100.0/24 dev p1p2 proto kernel scope link src 10.10.100.88
> 10.10.100.0/24 dev p1p1.100 proto kernel scope link src 10.10.100.87
> [root@ov300 ~]#
>
> [root@ov300 ~]# brctl show ovirtmgmntZ2Z3
> bridge name bridge id STP enabled interfaces
> ovirtmgmntZ2Z3 8000.1803730ba369 no bond0.68
> [root@ov300 ~]#
>
> Could it be that, for historical reasons, my mgmt network is named not
> ovirtmgmt but ovirtmgmntZ2Z3, and that this confuses the installer, which
> expects to set up ovirtmgmt? And that it then erroneously reports the no
> default route message?
>
> 3) The host CPU does not match the Cluster CPU Type and is running in a
> degraded mode. It is missing the following CPU flags: vmx, ssbd, nx,
> model_Westmere, aes, spec_ctrl. Please update the host CPU microcode or
> change the Cluster CPU Type.
>
> The cluster is set as "Intel Westmere IBRS SSBD Family".
> all the hosts are the same hw Dell PE M610, with same processor
>
> Host installed in CentOS 8:
> [root@ov200 log]# cat /proc/cpuinfo | grep "model name" | sort -u
> model name : Intel(R) Xeon(R) CPU   X5690  @ 3.47GHz
> [root@ov200 log]#
>
> Host still in CentOS 7:
> [root@ov300 ~]# cat /proc/cpuinfo | grep "model name" | sort -u
> model name : Intel(R) Xeon(R) CPU   X5690  @ 3.47GHz
> [root@ov300 ~]#
>
> If I compare the cpu flags inside the OS I see:
>
> CentOS 8:
> [root@ov200 log]# cat /proc/cpuinfo | grep flags | sort -u
> flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat
> pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb
> rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology
> nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx
> est tm2 ssse3 cx16 xtpr pdcm pcid dca sse4_1 sse4_2 popcnt aes lahf_lm pti
> ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid dtherm ida arat
> flush_l1d
> [root@ov200 log]#
>
> CentOS 7:
> [root@ov300 ~]# cat /proc/cpuinfo | grep flags | sort -u
> flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat
> pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb
> rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology
> nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 monitor ds_cpl vmx smx
> est tm2 ssse3 cx16 xtpr pdcm pcid dca sse4_1 sse4_2 popcnt aes lahf_lm ssbd
> ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid dtherm ida arat
> spec_ctrl intel_stibp flush_l1d
> [root@ov300 ~]#
>
> When still on CentOS 7, ov200 had the same flags as ov300.
> ov200 now has these additional flags:
> cpuid pti
>
> ov200 now lacks these flags:
> eagerfpu spec_ctrl intel_stibp
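As an aside, the quoted flag difference can be recomputed mechanically; a small sketch with abbreviated, hypothetical flag lists standing in for the full /proc/cpuinfo outputs above:

```shell
# Abbreviated stand-ins for the ov200 flag sets on CentOS 7 vs CentOS 8
printf 'nx vmx aes ssbd spec_ctrl intel_stibp eagerfpu' | tr ' ' '\n' | sort > /tmp/flags7
printf 'nx vmx aes ssbd cpuid pti'                      | tr ' ' '\n' | sort > /tmp/flags8

# comm needs sorted input; -23 keeps lines unique to file 1, -13 to file 2
comm -23 /tmp/flags7 /tmp/flags8   # only on CentOS 7: eagerfpu intel_stibp spec_ctrl
comm -13 /tmp/flags7 /tmp/flags8   # only on CentOS 8: cpuid pti
```

On real hosts, the input files would come from something like `grep ^flags /proc/cpuinfo | sort -u | tr ' ' '\n' | sort`.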
>
> Gianluca
>

Any feedback on my latest comments?
In the meantime here:
https://drive.google.com/file/d/1iN37znRtCo2vgyGTH_ymLhBJfs-2pWDr/view?usp=sharing
you can find the sosreport in tar.gz format, in which I have modified
some file names and content with respect to hostnames.
The only file I have not put inside is the dump of the database, but I can
run any query you like if needed.

Gianluca

[ovirt-users] Re: Gluster Domain Storage full

2020-10-15 Thread suporte
Hello, 

I just added a second brick to the volume. Now I have 10% free, but still cannot 
delete the disk. Still the same message: 

VDSM command DeleteImageGroupVDS failed: Could not remove all image's volumes: 
(u'b6165676-a6cd-48e2-8925-43ed49bc7f8e [Errno 28] No space left on device',) 

Any idea? 
Thanks 

José 


From: "Strahil Nikolov"  
To: supo...@logicworks.pt 
Cc: "users"  
Sent: Tuesday, 22 September 2020 13:36:27 
Subject: Re: [ovirt-users] Re: Gluster Domain Storage full 

Any option to extend the Gluster Volume ? 

Other approaches are quite destructive. I guess you can obtain the VM's xml 
via virsh and then copy the disks to another pure-KVM host. 
Then you can start the VM while you are recovering from the situation. 

virsh -c qemu:///system?authfile=/etc/ovirt-hosted-engine/virsh_auth.conf 
dumpxml  > /some/path/.xml 

Once you have the VM running on a pure-KVM host, you can go to oVirt and try to 
wipe the VM from the UI. 


Usually that 10% reserve is there just in case something like this happens, 
but Gluster doesn't check it every second (or the overhead would be crazy). 

Maybe you can extend the Gluster volume temporarily, till you manage to move 
the VM away to bigger storage. Then you can reduce the volume back to its 
original size. 

Best Regards, 
Strahil Nikolov 



On Tuesday, 22 September 2020 at 14:53:53 GMT+3, supo...@logicworks.pt 
 wrote: 





Hello Strahil, 

I just set cluster.min-free-disk to 1%: 
# gluster volume info data 

Volume Name: data 
Type: Distribute 
Volume ID: 2d3ea533-aca3-41c4-8cb6-239fe4f82bc3 
Status: Started 
Snapshot Count: 0 
Number of Bricks: 1 
Transport-type: tcp 
Bricks: 
Brick1: node2.domain.com:/home/brick1 
Options Reconfigured: 
cluster.min-free-disk: 1% 
cluster.data-self-heal-algorithm: full 
performance.low-prio-threads: 32 
features.shard-block-size: 512MB 
features.shard: on 
storage.owner-gid: 36 
storage.owner-uid: 36 
transport.address-family: inet 
nfs.disable: on 

But I still get the same error: Error while executing action: Cannot move Virtual 
Disk. Low disk space on Storage Domain. 
I restarted the glusterfs volume. 
But I still cannot do anything with the VM disk. 


I know that filling the bricks is very bad; we lost access to the VM. I think 
there should be a mechanism to prevent the VM from being stopped: 
we should keep access to the VM so we can free some space. 

If you have a VM with a thin-provisioned disk and the VM fills the entire disk, 
you get the same problem. 

Any idea? 

Thanks 

José 



 
From: "Strahil Nikolov"  
To: "users" , supo...@logicworks.pt 
Sent: Monday, 21 September 2020 21:28:10 
Subject: Re: [ovirt-users] Gluster Domain Storage full 

Usually gluster has a 10% reserve defined in the 'cluster.min-free-disk' volume 
option. 
You can power off the VM, then set cluster.min-free-disk 
to 1% and immediately move any of the VM's disks to another storage domain. 
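A sketch of that sequence, assuming the volume is named "data" as later in this thread (these gluster commands must run on a cluster node, so treat them as illustrative only):

```shell
# Temporarily lower the reserve so the disk move can proceed:
gluster volume set data cluster.min-free-disk 1%

# ...move the VM's disks to another storage domain from the oVirt UI...

# Restore the option to its default afterwards:
gluster volume reset data cluster.min-free-disk
```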

Keep in mind that filling your bricks is bad, and if you eat into that reserve, the 
only option would be to try to export the VM as OVA, then wipe it from the current 
storage and import it into a bigger storage domain. 

Of course it would be more sensible to just expand the gluster volume (either 
scale-up the bricks -> add more disks, or scale-out -> add more servers with 
disks on them), but I guess that is not an option - right? 
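For completeness, a hypothetical scale-out of a 'replica 3' volume named "data" - host names and brick paths are placeholders:

```shell
# Add one new brick per host (bricks must be added in multiples
# of the replica count):
gluster volume add-brick data replica 3 \
    hostA:/new/brick hostB:/new/brick hostC:/new/brick

# Spread existing data onto the new bricks:
gluster volume rebalance data start
gluster volume rebalance data status
```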

Best Regards, 
Strahil Nikolov 








On Monday, 21 September 2020 at 15:58:01 GMT+3, supo...@logicworks.pt 
 wrote: 





Hello, 

I'm running oVirt Version 4.3.4.3-1.el7. 
I have a small GlusterFS Domain storage brick on a dedicated filesystem serving 
only one VM. 
The VM filled all the Domain storage. 
The Linux filesystem has 4.1G available and 100% used; the mounted brick has 
0GB available and 100% used. 

I cannot do anything with this disk; for example, if I try to move it to 
another Gluster Domain Storage I get the message: 

Error while executing action: Cannot move Virtual Disk. Low disk space on 
Storage Domain 

Any idea? 

Thanks 

-- 
 
Jose Ferradeira 
http://www.logicworks.pt 

[ovirt-users] Re: 20+ Fresh Installs Failing in 20+ days [newbie & frustrated]

2020-10-15 Thread Yedidyah Bar David
On Thu, Oct 15, 2020 at 2:06 PM  wrote:
>
> Sorry, I did not share it with the world; try now, it should work
>
> https://drive.google.com/drive/folders/1XpbBqogokvkRgX0INfXVd7FtPuoL5P1m?usp=sharing

This only includes the files directly in that directory, not those in sub-directories.
Can you please share all of it? To do this with zip instead of tar,
you can pass '-r'.
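A quick self-contained illustration of the difference (the directory tree here is a throwaway stand-in for /var/log/ovirt-hosted-engine-setup):

```shell
# Build a small tree with a sub-directory in it
demo=$(mktemp -d)
mkdir -p "$demo/logs/engine-logs-1"
echo "sample" > "$demo/logs/engine-logs-1/setup.log"

# tar archives recursively by default:
tar czf "$demo/logs.tar.gz" -C "$demo" logs
tar tzf "$demo/logs.tar.gz"        # listing includes logs/engine-logs-1/setup.log

# zip, by contrast, silently skips sub-directories unless you pass -r:
# zip -r "$demo/logs.zip" "$demo/logs"
```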

The current logs do not reveal more than you already shared before.

Also, if the newest engine-logs-* directory there is empty, you can try
connecting to the engine vm (you can find its temporary IP address in the logs
by searching for 'local_vm_ip') with ssh as root and the password you supplied,
and then check there /var/log/ovirt-engine. If possible, please share
it as well.

Thanks and best regards,

>
>
> Yours Sincerely,
>
> Henni
>
>
> -Original Message-
> From: Yedidyah Bar David 
> Sent: Thursday, 15 October 2020 18:37
> To: i...@worldhostess.com
> Cc: Edward Berger ; users 
> Subject: [ovirt-users] Re: 20+ Fresh Installs Failing in 20+ days [newbie & 
> frustrated]
>
> On Thu, Oct 15, 2020 at 1:34 PM  wrote:
> >
> > A zip file with my ovirt-hosted-engine-setup logs -- hope someone can tell 
> > me what I am doing wrong.
> >
> > https://drive.google.com/file/d/1Y6_3kV7L2W-37-sgyA5EQ_v7NHyQ5zR2/view
> > ?usp=sharing
>
> I get "Access Denied" for the above link.
>
> Best regards,
>
> >
> > Yours Sincerely,
> >
> > Henni
> >
> >
> > -Original Message-
> > From: Yedidyah Bar David 
> > Sent: Tuesday, 13 October 2020 16:42
> > To: i...@worldhostess.com
> > Cc: Edward Berger ; users 
> > Subject: [ovirt-users] Re: 20+ Fresh Installs Failing in 20+ days
> > [newbie & frustrated]
> >
> > Please share all of /var/log/ovirt-hosted-engine-setup:
> >
> > cd /var/log/
> > tar czf ovirt-hosted-engine-setup.tar.gz ovirt-hosted-engine-setup
> >
> > Then upload ovirt-hosted-engine-setup.tar.gz to some file sharing service 
> > (e.g. dropbox, google drive etc.) and share the link.
> >
> > Thanks!
> >
> > On Tue, Oct 13, 2020 at 10:56 AM  wrote:
> > >
> > > Hope this can help. It seems it crashes every time I install.
> > --
> > Didi
> > ___
> > Users mailing list -- users@ovirt.org
> > To unsubscribe send an email to users-le...@ovirt.org Privacy
> > Statement: https://www.ovirt.org/privacy-policy.html
> > oVirt Code of Conduct:
> > https://www.ovirt.org/community/about/community-guidelines/
> > List Archives:
> > https://lists.ovirt.org/archives/list/users@ovirt.org/message/6OQZM2KY
> > HQ622XUBYCTVZLQZ4AGLKT2R/
> >
>
>
> --
> Didi
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org Privacy Statement: 
> https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/TCVP6FE2MRENFGNAPHKE2CBZJHAKYEQ5/
>


-- 
Didi


[ovirt-users] Re: Number of data bricks per oVirt HCI host

2020-10-15 Thread Strahil Nikolov via Users
>Please clarify what are the disk groups that you are referring to? 
Either RAID5/6 or RAID10 with HW controller(s).


>Regarding your statement "In JBOD mode, Red Hat support only 'replica 3'
>volumes." - does this also mean "replica 3" variants, e.g.
>"distributed-replicate"?
Nope, as far as I know - only when you have 3 copies of the data ('replica 3' 
only).

Best Regards,
Strahil Nikolov


On Wed, Oct 14, 2020 at 7:34 AM C Williams  wrote:
> Thanks Strahil !
> 
> More questions may follow. 
> 
> Thanks Again For Your Help !
> 
> On Wed, Oct 14, 2020 at 12:29 AM Strahil Nikolov  
> wrote:
>> Imagine you have a host with 60 spinning disks -> I would recommend you to 
>> split it into 10/12 disk groups, and these groups will represent several bricks 
>> (6/5).
>> 
>> Keep in mind that when you start using many (some articles state hundreds , 
>> but no exact number was given) bricks , you should consider brick 
>> multiplexing (cluster.brick-multiplex).
>> 
>> So, you can use as many bricks as you want, but each brick requires CPU time 
>> (a separate thread), a TCP port number and memory.
>> 
>> In my setup I use multiple bricks in order to spread the load via LACP over 
>> several small (1GBE) NICs.
>> 
>> 
>> The only "limitation" is to have your data on separate hosts , so when you 
>> create the volume it is extremely advisable that you follow this model:
>> 
>> hostA:/path/to/brick
>> hostB:/path/to/brick
>> hostC:/path/to/brick
>> hostA:/path/to/brick2
>> hostB:/path/to/brick2
>> hostC:/path/to/brick2
>> 
>> In JBOD mode, Red Hat supports only 'replica 3' volumes - just to keep that 
>> in mind.
>> 
>> From my perspective , JBOD is suitable for NVMEs/SSDs while spinning disks 
>> should be in a raid of some type (maybe RAID10 for perf).
>> 
>> 
>> Best Regards,
>> Strahil Nikolov
>> 
>> 
>> 
>> 
>> 
>> 
>> On Wednesday, 14 October 2020 at 06:34:17 GMT+3, C Williams 
>>  wrote: 
>> 
>> 
>> 
>> 
>> 
>> Hello,
>> 
>> I am getting some questions from others on my team.
>> 
>> I have some hosts that could provide up to 6 JBOD disks for oVirt data (not 
>> arbiter) bricks 
>> 
>> Would this be workable / advisable? I'm under the impression there should 
>> not be more than 1 data brick per HCI host.
>> 
>> Please correct me if I'm wrong.
>> 
>> Thank You For Your Help !
>> 
>> 
>> ___
>> Users mailing list -- users@ovirt.org
>> To unsubscribe send an email to users-le...@ovirt.org
>> Privacy Statement: https://www.ovirt.org/privacy-policy.html
>> oVirt Code of Conduct: 
>> https://www.ovirt.org/community/about/community-guidelines/
>> List Archives: 
>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/OGZISNFLEG3GJPDQGWTT7TWRPPAMLPFQ/
>> 
> 


[ovirt-users] Re: Q: Hybrid GlusterFS / local storage setup?

2020-10-15 Thread Nir Soffer
On Thu, Oct 15, 2020 at 3:20 PM Gilboa Davara  wrote:
>
> On Thu, Oct 15, 2020 at 2:38 PM Nir Soffer  wrote:
>>
>> > I've got room to spare.
>> > Any documentation on how to achieve this (or some pointers where to look)?
>>
>> It should be documented in ovirt.org, and in RHV documentation:
>> https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.4/html/administration_guide/
>>
>> > I couldn't find LVM / block device under host devices / storage domain / 
>> > etc and Google search returned irrelevant results.
>>
>> I tested locally, LVM devices are not available in:
>> Compute > Hosts > {hostname} > Host Devices
>>
>> Looks like libvirt does not support device mapper devices. You can try:
>> # virsh -r nodedev-list
>>
>> To see supported devices. The list seems to match what oVirt displays
>> in the Host Devices tab.
>>
>> So your only option is to attach the entire local device to the VM, either
>> using PCI passthrough or as a SCSI disk.
>>
>> Nir
>
>
> Full SCSI passthrough per "desktop" VM is overkill for this use case. 
> (Plus, I don't see MD devices in the list, only pure SATA/SAS devices).
> Any idea if there are plans to add support for LVM devices (or any other 
> block device)?

I don't think there is such a plan, but it makes sense to support such usage.

Please file an ovirt-engine RFE explaining the use case, and we can consider
it for a future version.

Nir


[ovirt-users] Upgrade from 4.3.9 to 4.3.10 leads to sanlock errors

2020-10-15 Thread mschuler
After upgrading from 4.3.9 to 4.3.10, we started to have VMs freeze/pause 
during the nightly backup runs (done after hours). We suspect the increased 
load exposes the issue.

We reverted the hosts back to 4.3.9 and the problem went away. After some 
testing using a single host on .10, I am seeing the below error in sanlock.log:
2020-10-14 16:06:12 2939 [5724]: 95bd5893 aio timeout RD 
0x7f7f9c0008c0:0x7f7f9c0008d0:0x7f7fa9efb000 ioto 10 to_count 1
2020-10-14 16:06:12 2939 [5724]: s2 delta_renew read timeout 10 sec offset 0 
/dev/95bd5893-83d4-42f2-b333-1c65226f1d09/ids
2020-10-14 16:06:12 2939 [5724]: s2 renewal error -202 delta_length 10 
last_success 2908
2020-10-14 16:06:14 2941 [5724]: 95bd5893 aio collect RD 
0x7f7f9c0008c0:0x7f7f9c0008d0:0x7f7fa9efb000 result 1048576:0 match reap

So engine is still at 4.3.10.  We also see the error below in messages:
Oct 14 16:09:20 HOSTNAME kernel: perf: interrupt took too long (2509 > 2500), 
lowering kernel.perf_event_max_sample_rate to 79000

I guess my question is twofold: how do I go about troubleshooting this 
further? Otherwise, would it be better/possible to move to 4.4.2 (or 4.4.3 when 
released)? Do all hosts have to be on 4.3.10, or can the hosts be on 4.3.9 
while the engine is 4.3.10 to do the migration?

Thank you!


[ovirt-users] Re: Number of data bricks per oVirt HCI host

2020-10-15 Thread C Williams
Thank You Strahil !

On Thu, Oct 15, 2020 at 8:05 AM Strahil Nikolov 
wrote:

> >Please clarify what are the disk groups that you are referring to?
> Either Raid5/6 or Raid10 with a HW controller(s).
>
>
> >Regarding your statement  "In JBOD mode, Red Hat support only 'replica 3'
> >volumes." does this also mean "replica 3" variants ex.
> >"distributed-replicate"
> Nope, as far as I know - only when you have 3 copies of the data ('replica
> 3' only).
>
> Best Regards,
> Strahil Nikolov
>
>
> On Wed, Oct 14, 2020 at 7:34 AM C Williams 
> wrote:
> > Thanks Strahil !
> >
> > More questions may follow.
> >
> > Thanks Again For Your Help !
> >
> > On Wed, Oct 14, 2020 at 12:29 AM Strahil Nikolov 
> wrote:
> >> Imagine you got a host with 60 Spinning Disks -> I would recommend you
> to split it to 10/12 disk groups and these groups will represent several
> bricks (6/5).
> >>
> >> Keep in mind that when you start using many (some articles state
> hundreds , but no exact number was given) bricks , you should consider
> brick multiplexing (cluster.brick-multiplex).
> >>
> >> So, you can use as many bricks you want , but each brick requires cpu
> time (separate thread) , tcp port number and memory.
> >>
> >> In my setup I use multiple bricks in order to spread the load via LACP
> over several small (1GBE) NICs.
> >>
> >>
> >> The only "limitation" is to have your data on separate hosts , so when
> you create the volume it is extremely advisable that you follow this model:
> >>
> >> hostA:/path/to/brick
> >> hostB:/path/to/brick
> >> hostC:/path/to/brick
> >> hostA:/path/to/brick2
> >> hostB:/path/to/brick2
> >> hostC:/path/to/brick2
> >>
> >> In JBOD mode, Red Hat support only 'replica 3' volumes - just to keep
> that in mind.
> >>
> >> From my perspective , JBOD is suitable for NVMEs/SSDs while spinning
> disks should be in a raid of some type (maybe RAID10 for perf).
> >>
> >>
> >> Best Regards,
> >> Strahil Nikolov
> >>
> >>
> >>
> >>
> >>
> >>
> >> On Wednesday, 14 October 2020 at 06:34:17 GMT+3, C Williams <
> cwilliams3...@gmail.com> wrote:
> >>
> >>
> >>
> >>
> >>
> >> Hello,
> >>
> >> I am getting some questions from others on my team.
> >>
> >> I have some hosts that could provide up to 6 JBOD disks for oVirt data
> (not arbiter) bricks
> >>
> >> Would this be workable / advisable ?  I'm under the impression there
> should not be more than 1 data brick per HCI host .
> >>
> >> Please correct me if I'm wrong.
> >>
> >> Thank You For Your Help !
> >>
> >>
> >> ___
> >> Users mailing list -- users@ovirt.org
> >> To unsubscribe send an email to users-le...@ovirt.org
> >> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> >> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> >> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/OGZISNFLEG3GJPDQGWTT7TWRPPAMLPFQ/
> >>
> >
>


[ovirt-users] Re: Connection failed

2020-10-15 Thread Sandro Bonazzola
Il giorno ven 2 ott 2020 alle ore 04:40  ha scritto:

> Messages related to the failure might be found in the journal “journalctl
> -u cockpit”
>
>
>
> This is the output
>
>
>
> node01.xxx.co.za cockpit-tls[8249]: cockpit-tls: gnutls_handshake failed:
> A TLS fatal alert has been received.
>
>
>
> Any suggestion will be appreciated, as I have struggled for days to get oVirt
> to work, and I can see it is still a long way for me to get an operational
> solution.
>

Can you please provide a sos report from that host?
Thanks,
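(For reference, a sketch of generating one - requires the "sos" package and root; the command name varies by version:)

```shell
# On the affected host, as root:
sosreport --batch        # EL7 and early EL8
# sos report --batch     # newer EL8 releases
# The resulting tarball typically lands under /var/tmp/.
```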



>
>
> *Henni *
>
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/XGVNMZHJOXD6X3TDLNYHOIXN5X5UQPYU/
>


-- 

Sandro Bonazzola

MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV

Red Hat EMEA 

sbona...@redhat.com


Red Hat respects your work life balance. Therefore there is no need to
answer this email out of your office hours.




[ovirt-users] Re: problems installing standard Linux as nodes in 4.4

2020-10-15 Thread Gianluca Cecchi
On Thu, Oct 15, 2020 at 10:41 AM Gianluca Cecchi 
wrote:

>
>
> Any feedback on my latest comments?
> In the meantime here:
>
> https://drive.google.com/file/d/1iN37znRtCo2vgyGTH_ymLhBJfs-2pWDr/view?usp=sharing
> you can find the sosreport in tar.gz format, in which I have modified
> some file names and content with respect to hostnames.
> The only file I have not put inside is the dump of the database, but I can
> run any query you like if needed.
>
> Gianluca
>
>

I have also tried to enable debug logging in the engine.
The method used is based on this link:

https://www.ovirt.org/develop/developer-guide/engine/engine-development-environment.html

and I used engine.core as the package

[root@ovmgr1 ~]# diff ovirt-engine.xml.in ovirt-engine.xml.in.debug
118c118
< 
---
> 
197a198,200
>   
> 
>   
[root@ovmgr1 ~]#
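(The mailer stripped the XML tags from the diff above. For reference, a WildFly-style logger override of the kind that file accepts would look roughly like the following sketch - the category matches the engine.core package mentioned above, but the exact lines are reconstructed, not copied from my file:)

```xml
<logger category="org.ovirt.engine.core">
  <level name="DEBUG"/>
</logger>
```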

When the install fails I get this in engine.log now:

2020-10-15 12:16:15,394+02 INFO
 [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(EE-ManagedThreadFactory-engine-Thread-31)
[c439aded-ade3-4474-a5f1-2f074ed5d920] EVENT_ID:
VDS_ANSIBLE_INSTALL_STARTED(560), Ansible host-deploy playbook execution
has started on host ov200.
2020-10-15 12:16:15,412+02 ERROR
[org.ovirt.engine.core.common.utils.ansible.AnsibleExecutor]
(EE-ManagedThreadFactory-engine-Thread-31)
[c439aded-ade3-4474-a5f1-2f074ed5d920] Exception: Failed to execute call to
start playbook.
2020-10-15 12:16:15,412+02 DEBUG
[org.ovirt.engine.core.common.utils.ansible.AnsibleExecutor]
(EE-ManagedThreadFactory-engine-Thread-31)
[c439aded-ade3-4474-a5f1-2f074ed5d920] Exception: :
org.ovirt.engine.core.common.utils.ansible.AnsibleRunnerCallException:
Failed to execute call to start playbook.
at
deployment.engine.ear.bll.jar//org.ovirt.engine.core.common.utils.ansible.AnsibleRunnerHTTPClient.runPlaybook(AnsibleRunnerHTTPClient.java:153)
at
deployment.engine.ear.bll.jar//org.ovirt.engine.core.common.utils.ansible.AnsibleExecutor.runCommand(AnsibleExecutor.java:113)
at
deployment.engine.ear.bll.jar//org.ovirt.engine.core.common.utils.ansible.AnsibleExecutor.runCommand(AnsibleExecutor.java:78)
at
deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.hostdeploy.InstallVdsInternalCommand.runAnsibleHostDeployPlaybook(InstallVdsInternalCommand.java:281)
at
deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.hostdeploy.InstallVdsInternalCommand.executeCommand(InstallVdsInternalCommand.java:145)
at
deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.CommandBase.executeWithoutTransaction(CommandBase.java:1169)
at
deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.CommandBase.executeActionInTransactionScope(CommandBase.java:1327)
at
deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.CommandBase.runInTransaction(CommandBase.java:2003)
at
org.ovirt.engine.core.utils//org.ovirt.engine.core.utils.transaction.TransactionSupport.executeInSuppressed(TransactionSupport.java:140)
at
org.ovirt.engine.core.utils//org.ovirt.engine.core.utils.transaction.TransactionSupport.executeInScope(TransactionSupport.java:79)
at
deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.CommandBase.execute(CommandBase.java:1387)
at
deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.CommandBase.executeAction(CommandBase.java:419)
at
deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.PrevalidatingMultipleActionsRunner.executeValidatedCommand(PrevalidatingMultipleActionsRunner.java:204)
at
deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.PrevalidatingMultipleActionsRunner.runCommands(PrevalidatingMultipleActionsRunner.java:176)
at
deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.PrevalidatingMultipleActionsRunner.lambda$invokeCommands$3(PrevalidatingMultipleActionsRunner.java:182)
at
org.ovirt.engine.core.utils//org.ovirt.engine.core.utils.threadpool.ThreadPoolUtil$InternalWrapperRunnable.run(ThreadPoolUtil.java:96)
at
java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
at
java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at
java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at
java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:834)
at
org.glassfish.javax.enterprise.concurrent//org.glassfish.enterprise.concurrent.ManagedThreadFactoryImpl$ManagedThread.run(ManagedThreadFactoryImpl.java:227)

2020-10-15 12:16:15,412+02 ERROR
[org.ovirt.engine.core.bll.hostdeploy.InstallVdsInternalCommand]
(EE-ManagedThreadFactory-engine-Thread-31)
[c439aded-ade3-4474-a5f1-2f074ed5d920] Host installation failed for host
'79da834f-d03a-4abc-b89e-8ad0186c173c', 'ov200': Failed to execute Ansible
host-deploy role: Failed to execute call to start playbook. Please check
logs 

[ovirt-users] Re: oVirt 4.4.2.6-1.el8 (SHE). Grafana integration not configured. The link to the Monitoring portal is not displayed on the Manager home page.

2020-10-15 Thread Dmitry Kharlamov
I think it is worth waiting for the completion of the development of 
SSO+LDAP support for the engine.

Thank you very much for your help!
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/FB6JJTAWCIHLQFT3D3654JAOOX2QGX6P/


[ovirt-users] Re: 20+ Fresh Installs Failing in 20+ days [newbie & frustrated]

2020-10-15 Thread info
Here is a zip file with my ovirt-hosted-engine-setup logs -- I hope someone can 
tell me what I am doing wrong.

https://drive.google.com/file/d/1Y6_3kV7L2W-37-sgyA5EQ_v7NHyQ5zR2/view?usp=sharing

Yours Sincerely,
 
Henni 


-Original Message-
From: Yedidyah Bar David  
Sent: Tuesday, 13 October 2020 16:42
To: i...@worldhostess.com
Cc: Edward Berger ; users 
Subject: [ovirt-users] Re: 20+ Fresh Installs Failing in 20+ days [newbie & 
frustrated]

Please share all of /var/log/ovirt-hosted-engine-setup:

cd /var/log/
tar czf ovirt-hosted-engine-setup.tar.gz ovirt-hosted-engine-setup

Then upload ovirt-hosted-engine-setup.tar.gz to some file sharing service (e.g. 
dropbox, google drive etc.) and share the link.

Thanks!

On Tue, Oct 13, 2020 at 10:56 AM  wrote:
>
> Hope this can help. It seems it crashes every time I install.
--
Didi


[ovirt-users] New host: no network interfaces visible

2020-10-15 Thread Richard Chan
What could cause no network interfaces to be visible when installing a new host?

I have added a new host to oVirt 4.3.10; the initial SSH install deploys all the
packages (all with failed=0 in host-deploy/*log), yet the installation is shown
as failed without any meaningful message.

However, the Network Interfaces page is blank. What could cause this?

The host is on CentOS 7.8, tried with and without biosdevname.


-- 
Richard Chan


[ovirt-users] Re: Collectd version downgrade on oVirt engine

2020-10-15 Thread Sandro Bonazzola
On Tue, Oct 13, 2020 at 13:17,  wrote:

> I am trying to downgrade the collectd version on the oVirt engine from 5.10.0 to
> 5.8.4 using Ansible, but I am getting an error while doing so. Can someone help
> fix this issue?
>


Hi, can you please share the use case for this downgrade?
Looping in +Matthias Runge  from CentOS OpsTools, who may
be interested in reading about this.



>
> --
>
> Ansible yml file
>
> ---
>
> - name: Perform a yum clean
>   command: /usr/bin/yum clean all
>
> - name: downgrade collectd version to 5.8.1
>   yum:
> name:
>   - collectd-5.8.1-4.el7.x86_64
>   - collectd-disk-5.8.1-4.el7.x86_64
> state: present
> allow_downgrade: true
> update_cache: true
>   become: true
>
> - error
>
>  {"changed": false, "changes": {"installed":
> ["collectd-5.8.1-4.el7.x86_64"]}, "msg": "Error: Package:
> collectd-write_http-5.10.0-2.el7.x86_64 (@ovirt-4.3-centos-opstools)\n
>  Requires: collectd(x86-64) = 5.10.0-2.el7\n   Removing:
> collectd-5.10.0-2.el7.x86_64 (@ovirt-4.3-centos-opstools)\n
>  collectd(x86-64) = 5.10.0-2.el7\n   Downgraded By:
> collectd-5.8.1-4.el7.x86_64 (ovirt-4.3-centos-opstools)\n
>  collectd(x86-64) = 5.8.1-4.el7\n   Available:
> collectd-5.7.2-1.el7.x86_64 (ovirt-4.3-centos-opstools)\n
>  collectd(x86-64) = 5.7.2-1.el7\n   Available:
> collectd-5.7.2-3.el7.x86_64 (ovirt-4.3-centos-opstools)\n
>  collectd(x86-64) = 5.7.2-3.el7\n   Available:
> collectd-5.8.0-2.el7.x86_64 (ovirt-4.3-centos-opstools)\n
>  collectd(x86-64) = 5.8.0-2.el7\n   Available:
> collectd-5.8.0-3.el7.x86_64 (ovirt-4.3-centos-opstools)\n
>  collectd(x86-64) = 5.8.0-3.el7\n   Available:
> collectd-5.8.0-5.el7.x86_64 (ovirt-4.3-centos-opstools)\n
>  collectd(x86-64) = 5.8.0-5.el7\n   Available:
> collectd-5.8.0-6.1.el7.x86_64 (ovirt-4.3-centos-opstools)\n
>  collectd(x86-64) = 5.8.0-6.1.el7\n   Available:
> collectd-5.8.1-1.el7.x86_64 (epel)\n   collectd(x86-64) =
> 5.8.1-1.el7\n   Available: collectd-5.8.1-2.el7.x86_64
> (ovirt-4.3-centos-opstools)\n   collectd(x86-64) =
> 5.8.1-2.el7\n   Available: collectd-5.8.1-3.el7.x86_64
> (ovirt-4.3-centos-opstools)\n   collectd(x86-64) =
> 5.8.1-3.el7\n   Available: collectd-5.8.1-5.el7.x86_64
> (ovirt-4.3-centos-opstools)\n   collectd(x86-64) =
> 5.8.1-5.el7\nError: Package: collectd-disk-5.10.0-2.el7.x86_64
> (@ovirt-4.3-centos-opstools)\n   Requires: collectd(x86-64) =
> 5.10.0-2.el7\n   Removing: collectd-5.10.0-2.el7.x86_64
> (@ovirt-4.3-centos-opstools)\n   collectd(x86-64) =
> 5.10.0-2.el7\n   Downgraded By: collectd-5.8.1-4.el7.x86_64
> (ovirt-4.3-centos-opstools)\n   collectd(x86-64) =
> 5.8.1-4.el7\n   Available: collectd-5.7.2-1.el7.x86_64
> (ovirt-4.3-centos-opstools)\n   collectd(x86-64) =
> 5.7.2-1.el7\n   Available: collectd-5.7.2-3.el7.x86_64
> (ovirt-4.3-centos-opstools)\n   collectd(x86-64) =
> 5.7.2-3.el7\n   Available: collectd-5.8.0-2.el7.x86_64
> (ovirt-4.3-centos-opstools)\n   collectd(x86-64) =
> 5.8.0-2.el7\n   Available: collectd-5.8.0-3.el7.x86_64
> (ovirt-4.3-centos-opstools)\n   collectd(x86-64) =
> 5.8.0-3.el7\n   Available: collectd-5.8.0-5.el7.x86_64
> (ovirt-4.3-centos-opstools)\n   collectd(x86-64) =
> 5.8.0-5.el7\n   Available: collectd-5.8.0-6.1.el7.x86_64
> (ovirt-4.3-centos-opstools)\n   collectd(x86-64) =
> 5.8.0-6.1.el7\n   Available: collectd-5.8.1-1.el7.x86_64 (epel)\n
>  collectd(x86-64) = 5.8.1-1.el7\n   Available:
> collectd-5.8.1-2.el7.x86_64 (ovirt-4.3-centos-opstools)\n
>  collectd(x86-64) = 5.8.1-2.el7\n   Available:
> collectd-5.8.1-3.el7.x86_64 (ovirt-4.3-centos-opstools)\n
>collectd(x86-64) = 5.8.1-3.el7\n   Available:
> collectd-5.8.1-5.el7.x86_64 (ovirt-4.3-centos-opstools)\n
>  collectd(x86-64) = 5.8.1-5.el7\nError: Package:
> collectd-postgresql-5.10.0-2.el7.x86_64 (@ovirt-4.3-centos-opstools)\n
>  Requires: collectd(x86-64) = 5.10.0-2.el7\n   Removing:
> collectd-5.10.0-2.el7.x86_64 (@ovirt-4.3-centos-opstools)\n
>  collectd(x86-64) = 5.10.0-2.el7\n   Downgraded By:
> collectd-5.8.1-4.el7.x86_64 (ovirt-4.3-centos-opstools)\n
>  collectd(x86-64) = 5.8.1-4.el7\n   Available:
> collectd-5.7.2-1.el7.x86_64 (ovirt-4.3-centos-opstools)\n
>  collectd(x86-64) = 5.7.2-1.el7\n   Available:
> collectd-5.7.2-3.el7.x86_64 (ovirt-4.3-centos-opstools)\n
>  collectd(x86-64) = 5.7.2-3.el7\n   Available:
> collectd-5.8.0-2.el7.x86_64 (ovirt-4.3-centos-opstools)\n
>  collectd(x86-64) = 5.8.0-2.el7\n   Available:
> collectd-5.8.0-3.el7.x86_64 (ovirt-4.3-centos-opstools)\n
>  collectd(x86-64) = 5.8.0-3.el7\n   Available:
> collectd-5.8.0-5.el7.x86_64 

[ovirt-users] Re: Latest ManagedBlockDevice documentation

2020-10-15 Thread Michael Thomas

Getting closer...

I recreated the storage domain and added rbd_default_features=3 to 
ceph.conf.  Now I see the new disk being created with (what I think is) 
the correct set of features:


# rbd info rbd.ovirt.data/volume-f4ac68c6-e5f7-4b01-aed0-36a55b901fbf
rbd image 'volume-f4ac68c6-e5f7-4b01-aed0-36a55b901fbf':
size 100 GiB in 25600 objects
order 22 (4 MiB objects)
snapshot_count: 0
id: 70aab541cb331
block_name_prefix: rbd_data.70aab541cb331
format: 2
features: layering
op_features:
flags:
create_timestamp: Thu Oct 15 06:53:23 2020
access_timestamp: Thu Oct 15 06:53:23 2020
modify_timestamp: Thu Oct 15 06:53:23 2020
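
As an aside, `rbd default features = 3` is a bitmask over the rbd feature bits. A small sketch of decoding it (bit values assumed from the rbd documentation, not from this thread; the image above still reports only `layering` because the striping bit only takes effect for images created with custom striping parameters):

```python
# Hedged sketch: decode an rbd feature bitmask into feature names.
# Bit values are assumed from the rbd documentation, not from this thread.
RBD_FEATURES = {
    1: "layering",
    2: "striping",
    4: "exclusive-lock",
    8: "object-map",
    16: "fast-diff",
    32: "deep-flatten",
    64: "journaling",
}

def decode_features(mask: int) -> list:
    """Return the names of the feature bits set in `mask`."""
    return [name for bit, name in sorted(RBD_FEATURES.items()) if mask & bit]

print(decode_features(3))   # the two low bits: ['layering', 'striping']
print(decode_features(61))  # 1+4+8+16+32, the usual modern default set
```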

However, I'm still unable to attach the disk to a VM.  This time it's a 
permissions issue on the ovirt node where the VM is running.  It looks 
like it can't read the temporary ceph config file that is sent over from 
the engine:


https://pastebin.com/pGjMTvcn

The file '/tmp/brickrbd_nwc3kywk' on the ovirt node is only accessible 
by root:


[root@ovirt4 ~]# ls -l /tmp/brickrbd_nwc3kywk
-rw---. 1 root root 146 Oct 15 07:25 /tmp/brickrbd_nwc3kywk

...and I'm guessing that it's being accessed by the vdsm user?
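
That guess fits the mode bits in the listing above; a quick sketch of the check (the scratch file is a stand-in, and whether vdsm is really the reader is only the guess from this thread):

```python
import os
import stat
import tempfile

def readable_by_non_owner(path: str) -> bool:
    """True if the group/other read bits would let a non-owner, non-root
    process (such as one running as the vdsm user) read the file."""
    mode = os.stat(path).st_mode
    return bool(mode & (stat.S_IRGRP | stat.S_IROTH))

# Reproduce the 0600 situation from the listing above with a scratch file.
with tempfile.NamedTemporaryFile(delete=False) as f:
    scratch = f.name
os.chmod(scratch, 0o600)
print(readable_by_non_owner(scratch))  # False
os.chmod(scratch, 0o644)
print(readable_by_non_owner(scratch))  # True
os.unlink(scratch)
```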

--Mike

On 10/14/20 10:59 AM, Michael Thomas wrote:

Hi Benny,

You are correct, I tried attaching to a running VM (which failed), then
tried booting a new VM using this disk (which also failed).  I'll use
the workaround in the bug report going forward.

I'll just recreate the storage domain, since at this point I have
nothing in it to lose.

Regards,

--Mike

On 10/14/20 9:32 AM, Benny Zlotnik wrote:

Did you attempt to start a VM with this disk and it failed, or you
didn't try at all? If it's the latter then the error is strange...
If it's the former, there is a known issue with multipath at the
moment; see [1] for a workaround. You might have issues later with
detaching volumes, because multipath grabs the rbd devices, which
makes `rbd unmap` fail. This will be fixed soon by automatically
blacklisting rbd in the multipath configuration.

Regarding editing, you can submit an RFE for this, but it is currently
not possible. The options are indeed to either recreate the storage
domain or edit the database table


[1] https://bugzilla.redhat.com/show_bug.cgi?id=1881832#c8




On Wed, Oct 14, 2020 at 3:40 PM Michael Thomas  wrote:


On 10/14/20 3:30 AM, Benny Zlotnik wrote:

Jeff is right, it's a limitation of kernel rbd, the recommendation is
to add `rbd default features = 3` to the configuration. I think there
are plans to support rbd-nbd in cinderlib which would allow using
additional features, but I'm not aware of anything concrete.

Additionally, the path for the cinderlib log is
/var/log/ovirt-engine/cinderlib/cinderlib.log, the error in this case
would appear in the vdsm.log on the relevant host, and would look
something like "RBD image feature set mismatch. You can disable
features unsupported by the kernel with 'rbd feature disable'"


Thanks for the pointer!  Indeed,
/var/log/ovirt-engine/cinderlib/cinderlib.log has the errors that I was
looking for.  In this case, it was a user error entering the RBDDriver
options:


2020-10-13 15:15:25,640 - cinderlib.cinderlib - WARNING - Unknown config
option use_multipath_for_xfer

...it should have been 'use_multipath_for_image_xfer'.

Now my attempts to fix it are failing...  If I go to 'Storage -> Storage
Domains -> Manage Domain', all driver options are uneditable except for
'Name'.

Then I thought that maybe I can't edit the driver options while a disk
still exists, so I tried removing the one disk in this domain.  But even
after multiple attempts, it still fails with:

2020-10-14 07:26:31,340 - cinder.volume.drivers.rbd - INFO - volume
volume-5419640e-445f-4b3f-a29d-b316ad031b7a no longer exists in backend
2020-10-14 07:26:31,353 - cinderlib-client - ERROR - Failure occurred
when trying to run command 'delete_volume': (psycopg2.IntegrityError)
update or delete on table "volumes" violates foreign key constraint
"volume_attachment_volume_id_fkey" on table "volume_attachment"
DETAIL:  Key (id)=(5419640e-445f-4b3f-a29d-b316ad031b7a) is still
referenced from table "volume_attachment".

See https://pastebin.com/KwN1Vzsp for the full log entries related to
this removal.

It's not lying, the volume no longer exists in the rbd pool, but the
cinder database still thinks it's attached, even though I was never able
to get it to attach to a VM.
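
The foreign-key failure above can be reproduced in miniature with sqlite (the schema below is a two-column stand-in, not cinder's real one; any manual cleanup of the real database should follow the same order, attachment row first, and only after a backup):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("PRAGMA foreign_keys = ON")  # sqlite only enforces FKs when asked
con.execute("CREATE TABLE volumes (id TEXT PRIMARY KEY)")
con.execute(
    "CREATE TABLE volume_attachment ("
    " id TEXT PRIMARY KEY,"
    " volume_id TEXT REFERENCES volumes(id))"
)
vol = "5419640e-445f-4b3f-a29d-b316ad031b7a"
con.execute("INSERT INTO volumes VALUES (?)", (vol,))
con.execute("INSERT INTO volume_attachment VALUES ('stale-att', ?)", (vol,))

# Deleting the volume while an attachment row still points at it fails,
# the same failure mode as the psycopg2.IntegrityError above.
try:
    con.execute("DELETE FROM volumes WHERE id = ?", (vol,))
except sqlite3.IntegrityError as e:
    print("blocked:", e)

# Removing the stale attachment row first lets the volume row go.
con.execute("DELETE FROM volume_attachment WHERE volume_id = ?", (vol,))
con.execute("DELETE FROM volumes WHERE id = ?", (vol,))
print(con.execute("SELECT count(*) FROM volumes").fetchone()[0])  # 0
```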

What are my options for cleaning up this stale disk in the cinder database?

How can I update the driver options in my storage domain (deleting and
recreating the domain is acceptable, if possible)?

--Mike





[ovirt-users] Re: 20+ Fresh Installs Failing in 20+ days [newbie & frustrated]

2020-10-15 Thread Yedidyah Bar David
On Thu, Oct 15, 2020 at 1:34 PM  wrote:
>
> A zip file with my ovirt-hosted-engine-setup logs -- hope someone can tell me 
> what I am doing wrong.
>
> https://drive.google.com/file/d/1Y6_3kV7L2W-37-sgyA5EQ_v7NHyQ5zR2/view?usp=sharing

I get "Access Denied" for the above link.

Best regards,

>
> Yours Sincerely,
>
> Henni
>
>
> -Original Message-
> From: Yedidyah Bar David 
> Sent: Tuesday, 13 October 2020 16:42
> To: i...@worldhostess.com
> Cc: Edward Berger ; users 
> Subject: [ovirt-users] Re: 20+ Fresh Installs Failing in 20+ days [newbie & 
> frustrated]
>
> Please share all of /var/log/ovirt-hosted-engine-setup:
>
> cd /var/log/
> tar czf ovirt-hosted-engine-setup.tar.gz ovirt-hosted-engine-setup
>
> Then upload ovirt-hosted-engine-setup.tar.gz to some file sharing service 
> (e.g. dropbox, google drive etc.) and share the link.
>
> Thanks!
>
> On Tue, Oct 13, 2020 at 10:56 AM  wrote:
> >
> > Hope this can help. It seems it crashes every time I install.
> --
> Didi
>


-- 
Didi


[ovirt-users] Re: Q: Hybrid GlusterFS / local storage setup?

2020-10-15 Thread Nir Soffer
On Thu, Oct 15, 2020 at 8:45 AM Gilboa Davara  wrote:
>
> Hello Nir,
>
> Thanks for the prompt answer.
>
> On Wed, Oct 14, 2020 at 1:02 PM Nir Soffer  wrote:
>>
>>
>> GlusterFS?
>
>
> Yep, GlusterFS. Sorry, wrong abbreviation on my end..
>
>>
>>
>>
>> This will not be fast as local device passed-through to the vm
>>
>> It will also be problematic, since all hosts will mount, monitor, and 
>> maintain leases on this NFS storage, since it is considered shared 
>> storage.
>> If a host fails to access this NFS storage, that host will be 
>> deactivated and all its VMs will migrate to other hosts. This migration 
>> storm can cause a lot of trouble.
>> In the worst case, if no other host can access this NFS storage, all 
>> hosts will be deactivated.
>>
>> This is the same as NFS (internally this is the same code). It will work only if 
>> you can mount the same device/export on all hosts. This is even worse than 
>> NFS.
>
>
> OK. Understood. No NFS / POSIXFS storage than.
>>
>>
>>>
>>> A. Am I barking up the wrong tree here? Is this setup even possible?
>>
>>
>> This is possible using host device.
>> You can attach a host device to a VM. This will pin the VM to the host, and 
>> give best performance.
>> It may not be flexible enough since you need to attach the entire device. Maybe 
>> it can work with LVM logical volumes.
>
>
> I've got room to spare.
> Any documentation on how to achieve this (or some pointers where to look)?

It should be documented in ovirt.org, and in RHV documentation:
https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.4/html/administration_guide/

> I couldn't find LVM / block device under host devices / storage domain / etc 
> and Google search returned irrelevant results.

I tested locally, LVM devices are not available in:
Compute > Hosts > {hostname} > Host Devices

Looks like libvirt does not support device mapper devices. You can try:
# virsh -r nodedev-list

To see supported devices. The list seems to match what oVirt displays
in the Host Devices tab.

So your only option is to attach the entire local device to the VM, either using
PCI passthrough or as a SCSI disk.

Nir




> - Gilboa
>
>>
>> Nir
>>
>>> B. If it is even possible, any documentation / pointers on setting up
>>> per-host private storage?
>>>
>>> I should mention that these workstations are quite beefy (64-128GB
>>> RAM, large MDRAID, SSD, etc) so I can spare memory / storage space (I
>>> can even split the local storage and GFS to different arrays).
>>>
>>> - Gilboa


[ovirt-users] Re: Q: Hybrid GlusterFS / local storage setup?

2020-10-15 Thread Gilboa Davara
On Thu, Oct 15, 2020 at 2:38 PM Nir Soffer  wrote:

> > I've got room to spare.
> > Any documentation on how to achieve this (or some pointers where to
> look)?
>
> It should be documented in ovirt.org, and in RHV documentation:
>
> https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.4/html/administration_guide/
>
> > I couldn't find LVM / block device under host devices / storage domain /
> etc and Google search returned irrelevant results.
>
> I tested locally, LVM devices are not available in:
> Compute > Hosts > {hostname} > Host Devices
>
> Looks like libvirt does not support device mapper devices. You can try:
> # virsh -r nodedev-list
>
> To see supported devices. The list seems to match what oVirt displays
> in the Host Devices tab.
>
> So your only option is to attach the entire local device to the VM, either
> using PCI passthrough or as a SCSI disk.
>
> Nir
>

Full SCSI passthrough per "desktop" VM is overkill for this use case.
(Plus, I don't see MD devices in the list, only pure SATA/SAS devices).
Any idea if there are plans to add support for LVM devices (or any other
block device)?

- Gilboa


[ovirt-users] Re: Ovirt Node 4.4.2 install Odroid-H2 64GB eMMC

2020-10-15 Thread Sandro Bonazzola
On Wed, Oct 14, 2020 at 17:28,  wrote:

> Hi All,
>
> I'm trying to install node 4.4.2 on an eMMC card, but when I get to the
> storage configuration of the installer, it doesn't save the settings I have
> chosen (automatic configuration) and displays "failed to save storage
> configuration". I have deleted all partitions on the card before trying to
> install and I still get the same error. The only way I can get it to go is to
> select manual configuration with LVM thin provisioning and automatically
> create. Am I doing something wrong? I can install CentOS 8 with no issues on
> this, but not oVirt Node 4.4.2.
>

Can you please open a bug and attach anaconda logs?


-- 

Sandro Bonazzola

MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV

Red Hat EMEA 

sbona...@redhat.com


Red Hat respects your work life balance. Therefore there is no need to
answer this email out of your office hours.


[ovirt-users] Re: 20+ Fresh Installs Failing in 20+ days [newbie & frustrated]

2020-10-15 Thread info
Sorry, I did not share it with the world. Try now, it should work.

https://drive.google.com/drive/folders/1XpbBqogokvkRgX0INfXVd7FtPuoL5P1m?usp=sharing


Yours Sincerely,
 
Henni 


-Original Message-
From: Yedidyah Bar David  
Sent: Thursday, 15 October 2020 18:37
To: i...@worldhostess.com
Cc: Edward Berger ; users 
Subject: [ovirt-users] Re: 20+ Fresh Installs Failing in 20+ days [newbie & 
frustrated]

On Thu, Oct 15, 2020 at 1:34 PM  wrote:
>
> A zip file with my ovirt-hosted-engine-setup logs -- hope someone can tell me 
> what I am doing wrong.
>
> https://drive.google.com/file/d/1Y6_3kV7L2W-37-sgyA5EQ_v7NHyQ5zR2/view
> ?usp=sharing

I get "Access Denied" for the above link.

Best regards,

>
> Yours Sincerely,
>
> Henni
>
>
> -Original Message-
> From: Yedidyah Bar David 
> Sent: Tuesday, 13 October 2020 16:42
> To: i...@worldhostess.com
> Cc: Edward Berger ; users 
> Subject: [ovirt-users] Re: 20+ Fresh Installs Failing in 20+ days 
> [newbie & frustrated]
>
> Please share all of /var/log/ovirt-hosted-engine-setup:
>
> cd /var/log/
> tar czf ovirt-hosted-engine-setup.tar.gz ovirt-hosted-engine-setup
>
> Then upload ovirt-hosted-engine-setup.tar.gz to some file sharing service 
> (e.g. dropbox, google drive etc.) and share the link.
>
> Thanks!
>
> On Tue, Oct 13, 2020 at 10:56 AM  wrote:
> >
> > Hope this can help. It seems it crashes every time I install.
> --
> Didi
>


--
Didi


[ovirt-users] How can I Disable console Single Sign On by default

2020-10-15 Thread Riccardo Brunetti
Dear all.
I'm trying to setup an oVirt environment able to provide VMs to a small number 
of users.
I'm just using the "internal" domain and I defined some users and groups using 
ovirt-aaa-jdbc-tool.
Everything seems to work, the users cal log-in into the VM portal and they can 
create VM.
The problem comes out when I try to access the VM console: in order to be able 
to open the noVNC console, I need to set Disable Single Sign On on the console 
settings of the VM.
But this is only possible using the Administrator portal and I couldn't find a 
way to do it "as user".

How can I allow users to open the noVNC console from the VM portal?
Is there a way to set  Disable Single Sign On by default?

Thanks a lot
Riccardo


[ovirt-users] Re: Latest ManagedBlockDevice documentation

2020-10-15 Thread Jeff Bailey


On 10/15/2020 12:07 PM, Michael Thomas wrote:

On 10/15/20 10:19 AM, Jeff Bailey wrote:


On 10/15/2020 10:01 AM, Michael Thomas wrote:

Getting closer...

I recreated the storage domain and added rbd_default_features=3 to 
ceph.conf.  Now I see the new disk being created with (what I think 
is) the correct set of features:


# rbd info rbd.ovirt.data/volume-f4ac68c6-e5f7-4b01-aed0-36a55b901fbf
rbd image 'volume-f4ac68c6-e5f7-4b01-aed0-36a55b901fbf':
    size 100 GiB in 25600 objects
    order 22 (4 MiB objects)
    snapshot_count: 0
    id: 70aab541cb331
    block_name_prefix: rbd_data.70aab541cb331
    format: 2
    features: layering
    op_features:
    flags:
    create_timestamp: Thu Oct 15 06:53:23 2020
    access_timestamp: Thu Oct 15 06:53:23 2020
    modify_timestamp: Thu Oct 15 06:53:23 2020

However, I'm still unable to attach the disk to a VM.  This time 
it's a permissions issue on the ovirt node where the VM is running.  
It looks like it can't read the temporary ceph config file that is 
sent over from the engine:



Are you using octopus?  If so, the config file that's generated is 
missing the "[global]" at the top and octopus doesn't like that.  
It's been patched upstream.


Yes, I am using Octopus (15.2.4).  Do you have a pointer to the 
upstream patch or issue so that I can watch for a release with the fix?



https://bugs.launchpad.net/cinder/+bug/1865754


It's a simple fix.  I just changed line 100 of 
/usr/lib/python3.6/site-packages/os_brick/initiator/connectors/rbd.py to:


conf_file.writelines(["[global]", "\n", mon_hosts, "\n", keyring, "\n"])
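
The effect of that one-line change can be illustrated with Python's configparser as a stand-in for Ceph's INI-style config parser (the writer below imitates the os-brick temp-file write; the mon_host and keyring values are illustrative, not taken from this thread):

```python
import configparser
import os
import tempfile

def write_brick_conf(path, mon_hosts, keyring, with_global=True):
    # Sketch of the os-brick temporary ceph.conf write; with_global=True
    # mirrors the patched line 100 shown above.
    head = ["[global]", "\n"] if with_global else []
    with open(path, "w") as conf_file:
        conf_file.writelines(head + [mon_hosts, "\n", keyring, "\n"])

path = os.path.join(tempfile.mkdtemp(), "ceph.conf")
write_brick_conf(path, "mon_host = 192.0.2.10", "keyring = /tmp/brickrbd_keyring")

# An INI-style parser only accepts options that live under a section header.
parser = configparser.ConfigParser()
parser.read(path)
print(parser.get("global", "mon_host"))  # 192.0.2.10

# Without the [global] header the same content is rejected outright.
write_brick_conf(path, "mon_host = 192.0.2.10", "keyring = k", with_global=False)
try:
    configparser.ConfigParser().read(path)
except configparser.MissingSectionHeaderError:
    print("rejected without [global]")
```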




--Mike


[ovirt-users] Re: Latest ManagedBlockDevice documentation

2020-10-15 Thread Jeff Bailey


On 10/15/2020 10:01 AM, Michael Thomas wrote:

Getting closer...

I recreated the storage domain and added rbd_default_features=3 to 
ceph.conf.  Now I see the new disk being created with (what I think 
is) the correct set of features:


# rbd info rbd.ovirt.data/volume-f4ac68c6-e5f7-4b01-aed0-36a55b901fbf
rbd image 'volume-f4ac68c6-e5f7-4b01-aed0-36a55b901fbf':
    size 100 GiB in 25600 objects
    order 22 (4 MiB objects)
    snapshot_count: 0
    id: 70aab541cb331
    block_name_prefix: rbd_data.70aab541cb331
    format: 2
    features: layering
    op_features:
    flags:
    create_timestamp: Thu Oct 15 06:53:23 2020
    access_timestamp: Thu Oct 15 06:53:23 2020
    modify_timestamp: Thu Oct 15 06:53:23 2020

However, I'm still unable to attach the disk to a VM.  This time it's 
a permissions issue on the ovirt node where the VM is running.  It 
looks like it can't read the temporary ceph config file that is sent 
over from the engine:



Are you using octopus?  If so, the config file that's generated is 
missing the "[global]" at the top and octopus doesn't like that.  It's 
been patched upstream.





https://pastebin.com/pGjMTvcn

The file '/tmp/brickrbd_nwc3kywk' on the ovirt node is only accessible 
by root:


[root@ovirt4 ~]# ls -l /tmp/brickrbd_nwc3kywk
-rw---. 1 root root 146 Oct 15 07:25 /tmp/brickrbd_nwc3kywk

...and I'm guessing that it's being accessed by the vdsm user?

--Mike

On 10/14/20 10:59 AM, Michael Thomas wrote:

Hi Benny,

You are correct, I tried attaching to a running VM (which failed), then
tried booting a new VM using this disk (which also failed). I'll use
the workaround in the bug report going forward.

I'll just recreate the storage domain, since at this point I have
nothing in it to lose.

Regards,

--Mike

On 10/14/20 9:32 AM, Benny Zlotnik wrote:

Did you attempt to start a VM with this disk and it failed, or you
didn't try at all? If it's the latter then the error is strange...
If it's the former, there is a known issue with multipath at the
moment; see [1] for a workaround. You might have issues later with
detaching volumes, because multipath grabs the rbd devices, which
makes `rbd unmap` fail. This will be fixed soon by automatically
blacklisting rbd in the multipath configuration.

Regarding editing, you can submit an RFE for this, but it is currently
not possible. The options are indeed to either recreate the storage
domain or edit the database table


[1] https://bugzilla.redhat.com/show_bug.cgi?id=1881832#c8




On Wed, Oct 14, 2020 at 3:40 PM Michael Thomas  
wrote:


On 10/14/20 3:30 AM, Benny Zlotnik wrote:

Jeff is right, it's a limitation of kernel rbd, the recommendation is
to add `rbd default features = 3` to the configuration. I think there
are plans to support rbd-nbd in cinderlib which would allow using
additional features, but I'm not aware of anything concrete.

Additionally, the path for the cinderlib log is
/var/log/ovirt-engine/cinderlib/cinderlib.log, the error in this case
would appear in the vdsm.log on the relevant host, and would look
something like "RBD image feature set mismatch. You can disable
features unsupported by the kernel with 'rbd feature disable'"


Thanks for the pointer!  Indeed,
/var/log/ovirt-engine/cinderlib/cinderlib.log has the errors that I 
was

looking for.  In this case, it was a user error entering the RBDDriver
options:


2020-10-13 15:15:25,640 - cinderlib.cinderlib - WARNING - Unknown 
config

option use_multipath_for_xfer

...it should have been 'use_multipath_for_image_xfer'.

Now my attempts to fix it are failing...  If I go to 'Storage -> 
Storage
Domains -> Manage Domain', all driver options are uneditable 
except for

'Name'.

Then I thought that maybe I can't edit the driver options while a disk
still exists, so I tried removing the one disk in this domain.  But 
even

after multiple attempts, it still fails with:

2020-10-14 07:26:31,340 - cinder.volume.drivers.rbd - INFO - volume
volume-5419640e-445f-4b3f-a29d-b316ad031b7a no longer exists in 
backend

2020-10-14 07:26:31,353 - cinderlib-client - ERROR - Failure occurred
when trying to run command 'delete_volume': (psycopg2.IntegrityError)
update or delete on table "volumes" violates foreign key constraint
"volume_attachment_volume_id_fkey" on table "volume_attachment"
DETAIL:  Key (id)=(5419640e-445f-4b3f-a29d-b316ad031b7a) is still
referenced from table "volume_attachment".

See https://pastebin.com/KwN1Vzsp for the full log entries related to
this removal.

It's not lying, the volume no longer exists in the rbd pool, but the
cinder database still thinks it's attached, even though I was never 
able

to get it to attach to a VM.

What are my options for cleaning up this stale disk in the cinder 
database?


How can I update the driver options in my storage domain (deleting and
recreating the domain is acceptable, if possible)?

--Mike





[ovirt-users] Re: Latest ManagedBlockDevice documentation

2020-10-15 Thread Michael Thomas

On 10/15/20 10:19 AM, Jeff Bailey wrote:


On 10/15/2020 10:01 AM, Michael Thomas wrote:

Getting closer...

I recreated the storage domain and added rbd_default_features=3 to 
ceph.conf.  Now I see the new disk being created with (what I think 
is) the correct set of features:


# rbd info rbd.ovirt.data/volume-f4ac68c6-e5f7-4b01-aed0-36a55b901fbf
rbd image 'volume-f4ac68c6-e5f7-4b01-aed0-36a55b901fbf':
    size 100 GiB in 25600 objects
    order 22 (4 MiB objects)
    snapshot_count: 0
    id: 70aab541cb331
    block_name_prefix: rbd_data.70aab541cb331
    format: 2
    features: layering
    op_features:
    flags:
    create_timestamp: Thu Oct 15 06:53:23 2020
    access_timestamp: Thu Oct 15 06:53:23 2020
    modify_timestamp: Thu Oct 15 06:53:23 2020
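For reference, a sketch of the ceph.conf change described above (placement and section may vary with your deployment; 3 is the feature bitmask for layering (1) + striping (2), and only layering shows in the output above since default striping is in use):

```ini
# /etc/ceph/ceph.conf -- illustrative fragment, not a full config
[client]
rbd_default_features = 3
```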

However, I'm still unable to attach the disk to a VM.  This time it's 
a permissions issue on the ovirt node where the VM is running.  It 
looks like it can't read the temporary ceph config file that is sent 
over from the engine:



Are you using octopus?  If so, the config file that's generated is 
missing the "[global]" at the top and octopus doesn't like that.  It's 
been patched upstream.


Yes, I am using Octopus (15.2.4).  Do you have a pointer to the upstream 
patch or issue so that I can watch for a release with the fix?


--Mike
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/LBG4EEWFWDLSBTBLYD6NTBQWTBJRPQDK/


[ovirt-users] Re: How can I Disable console Single Sign On by default

2020-10-15 Thread Riccardo Brunetti
Hi.
Forgot to mention that I'm running oVirt 4.4.2.6-1.el8
Cheers
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/M5JRCEU5BJ2VWTYLOAVN72MUYIXCHT7E/


[ovirt-users] Enable Gluster Service

2020-10-15 Thread suporte
Hello, 

When I Enable Gluster Service on the cluster, the data center goes into an
invalid state. 

Any idea why? 


-- 

Jose Ferradeira 
http://www.logicworks.pt 
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/KBFEITWUQ5Q6G4KRDEBVDQIBK4TRFNQL/


[ovirt-users] Re: Latest ManagedBlockDevice documentation

2020-10-15 Thread Michael Thomas

On 10/15/20 11:27 AM, Jeff Bailey wrote:


On 10/15/2020 12:07 PM, Michael Thomas wrote:

On 10/15/20 10:19 AM, Jeff Bailey wrote:


On 10/15/2020 10:01 AM, Michael Thomas wrote:

Getting closer...

I recreated the storage domain and added rbd_default_features=3 to 
ceph.conf.  Now I see the new disk being created with (what I think 
is) the correct set of features:


# rbd info rbd.ovirt.data/volume-f4ac68c6-e5f7-4b01-aed0-36a55b901fbf
rbd image 'volume-f4ac68c6-e5f7-4b01-aed0-36a55b901fbf':
    size 100 GiB in 25600 objects
    order 22 (4 MiB objects)
    snapshot_count: 0
    id: 70aab541cb331
    block_name_prefix: rbd_data.70aab541cb331
    format: 2
    features: layering
    op_features:
    flags:
    create_timestamp: Thu Oct 15 06:53:23 2020
    access_timestamp: Thu Oct 15 06:53:23 2020
    modify_timestamp: Thu Oct 15 06:53:23 2020

However, I'm still unable to attach the disk to a VM.  This time 
it's a permissions issue on the ovirt node where the VM is running. 
It looks like it can't read the temporary ceph config file that is 
sent over from the engine:



Are you using octopus?  If so, the config file that's generated is 
missing the "[global]" at the top and octopus doesn't like that. It's 
been patched upstream.


Yes, I am using Octopus (15.2.4).  Do you have a pointer to the 
upstream patch or issue so that I can watch for a release with the fix?



https://bugs.launchpad.net/cinder/+bug/1865754


And for anyone playing along at home, I was able to map this back to the 
openstack ticket:


https://review.opendev.org/#/c/730376/

It's a simple fix.  I just changed line 100 of 
/usr/lib/python3.6/site-packages/os_brick/initiator/connectors/rbd.py to:


conf_file.writelines(["[global]", "\n", mon_hosts, "\n", keyring, "\n"])
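Why the missing "[global]" header matters can be shown with Python's stdlib configparser: bare key=value lines with no section header fail standard INI parsing, which is analogous to what Octopus trips over (Ceph uses its own parser, so this is only an analogy). The values below are illustrative stand-ins, not taken from the actual generated file.

```python
import configparser

# Illustrative stand-ins for what the engine writes into the temp conf:
mon_hosts = "mon_host = 192.168.1.10"
keyring = "keyring = /tmp/keyfile"

broken = mon_hosts + "\n" + keyring + "\n"                           # pre-patch layout
fixed = "".join(["[global]", "\n", mon_hosts, "\n", keyring, "\n"])  # patched layout

cp = configparser.ConfigParser()
raised = False
try:
    cp.read_string(broken)           # no section header -> parse error
except configparser.MissingSectionHeaderError:
    raised = True
print("headerless conf rejected:", raised)

cp.read_string(fixed)                # with [global] the file parses fine
print("mon_host =", cp["global"]["mon_host"])
```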


After applying this patch, I was finally able to attach my ceph block 
device to a running VM.  I've now got virtually unlimited data storage 
for my VMs.  Many thanks to you and Benny for the help!


--Mike
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/UET4Q7BDRBWPWSQ4FNZY5XW6S4LJV4KK/