[ovirt-users] Re: hyperconverged single node with SSD cache fails gluster creation

2019-09-04 Thread Sachidananda URS
On Wed, Sep 4, 2019 at 9:27 PM  wrote:

> I am seeing more success than failures at creating single and triple node
> hyperconverged setups after some weeks of experimentation, so I am branching
> out to additional features: in this case, the ability to use SSDs as cache
> media for hard disks.
>
> I tried first with a single node that combined caching and compression, and
> that failed during the creation of the LVs.
>
> I tried again without the VDO compression, but the results were identical,
> while VDO compression without the LV cache worked fine.
>
> I tried various combinations, using less space etc., but the results are
> always the same and unfortunately rather cryptic (substituted the physical
> disk label with {disklabel}):
>
> TASK [gluster.infra/roles/backend_setup : Extend volume group]
> *
> failed: [{hostname}] (item={u'vgname': u'gluster_vg_{disklabel}p1',
> u'cachethinpoolname': u'gluster_thinpool_gluster_vg_{disklabel}p1',
> u'cachelvname': u'cachelv_gluster_thinpool_gluster_vg_{disklabel}p1',
> u'cachedisk': u'/dev/sda4', u'cachemetalvname':
> u'cache_gluster_thinpool_gluster_vg_{disklabel}p1', u'cachemode':
> u'writeback', u'cachemetalvsize': u'70G', u'cachelvsize': u'630G'}) =>
> {"ansible_loop_var": "item", "changed": false, "err": "  Physical volume
> \"/dev/mapper/vdo_{disklabel}p1\" still in use\n", "item": {"cachedisk":
> "/dev/sda4", "cachelvname":
> "cachelv_gluster_thinpool_gluster_vg_{disklabel}p1", "cachelvsize": "630G",
> "cachemetalvname": "cache_gluster_thinpool_gluster_vg_{disklabel}p1",
> "cachemetalvsize": "70G", "cachemode": "writeback", "cachethinpoolname":
> "gluster_thinpool_gluster_vg_{disklabel}p1", "vgname":
> "gluster_vg_{disklabel}p1"}, "msg": "Unable to reduce
> gluster_vg_{disklabel}p1 by /dev/dm-15.", "rc": 5}
>
> Somewhere within that I see something that points to a race condition
> ("still in use").
>
> Unfortunately I have not been able to pinpoint the raw logs which are used
> at that stage and I wasn't able to obtain more info.
>
> At this point quite a bit of storage setup is already done, so rolling
> back for a clean new attempt can be a bit complicated, with reboots to
> reconcile the kernel with the data on disk.
>
> I don't actually believe it's related to single node and I'd be quite
> happy to move the creation of the SSD cache to a later stage, but in a VDO
> setup, this looks slightly complex to someone without intimate knowledge of
> LVs-with-cache-and-perhaps-thin/VDO/Gluster all thrown into one.
>
> Needless to say, the feature set (SSD caching & compression/dedup) sounds
> terribly attractive, but when things don't just work, it's all the more
> terrifying.
>

Hi Thomas,

The way we have to write the variables while setting up cache has changed
for Ansible 2.8. Currently we write something like this:

gluster_infra_cache_vars:
- vgname: vg_sdb2
  cachedisk: /dev/sdb3
  cachelvname: cachelv_thinpool_vg_sdb2
  cachethinpoolname: thinpool_vg_sdb2
  cachelvsize: '10G'
  cachemetalvsize: '2G'
  cachemetalvname: cache_thinpool_vg_sdb2
  cachemode: writethrough
===
Note that cachedisk is provided as /dev/sdb3, which will be used to extend
the volume group vg_sdb2 ... this works well.
The module will take care of extending the VG with /dev/sdb3.
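
For reference, the role is doing roughly the equivalent of this LVM sequence
under the hood (a sketch of the underlying commands, not the module's exact
calls):

vgextend vg_sdb2 /dev/sdb3      # add the cache SSD partition to the VG
lvcreate -L 10G -n cachelv_thinpool_vg_sdb2 vg_sdb2 /dev/sdb3
lvcreate -L 2G -n cache_thinpool_vg_sdb2 vg_sdb2 /dev/sdb3
# combine the data and metadata LVs into a cache pool
lvconvert -y --type cache-pool \
    --poolmetadata vg_sdb2/cache_thinpool_vg_sdb2 vg_sdb2/cachelv_thinpool_vg_sdb2
# attach the cache pool to the thin pool
lvconvert -y --type cache --cachemode writethrough \
    --cachepool vg_sdb2/cachelv_thinpool_vg_sdb2 vg_sdb2/thinpool_vg_sdb2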

However, with Ansible 2.8 we cannot provide it like this; we have to be more
explicit and mention the PV underlying this volume group vg_sdb2. So, with
respect to 2.8, we have to write that variable like:

>>>
gluster_infra_cache_vars:
- vgname: vg_sdb2
  cachedisk: '/dev/sdb2,/dev/sdb3'
  cachelvname: cachelv_thinpool_vg_sdb2
  cachethinpoolname: thinpool_vg_sdb2
  cachelvsize: '10G'
  cachemetalvsize: '2G'
  cachemetalvname: cache_thinpool_vg_sdb2
  cachemode: writethrough
===

Note that I have mentioned both /dev/sdb2 and /dev/sdb3.
This change is backward compatible, that is, it works with 2.7 as well. I
have also raised an issue with Ansible, which can be found here:
https://github.com/ansible/ansible/issues/56501

However, @olafbuitelaar has fixed this in gluster-ansible-infra, and the
patch is merged in master.
If you check out the master branch, you should be fine.
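
A minimal sketch of picking up the patched role (the GitHub URL is the
gluster-ansible-infra project; the target path is an assumption, so check
where the gluster.infra role resolves from on your deployment host first):

git clone https://github.com/gluster/gluster-ansible-infra.git
cd gluster-ansible-infra && git checkout master
# overwrite the packaged backend_setup role with the patched one (path assumed)
cp -r roles/backend_setup /etc/ansible/roles/gluster.infra/roles/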


[ovirt-users] Re: Any real experiences with nested virtualisation?

2019-09-04 Thread Strahil
I guess you want to do something like this one:
http://www.chrisj.cloud/?q=node/8

As oVirt is the upstream of RHV, I believe that RDO and oVirt have almost the
same integration as in the post.

Still, such a setup requires a lot of knowledge about KVM, oVirt and OpenStack.

Best Regards,
Strahil Nikolov

On Sep 4, 2019 21:36, tho...@hoberg.net wrote:
>
> First, it's a scenario the oVirt team seem to actually embrace themselves. 
>
> I believe I have seen that type of deployment mentioned either in one of the
> forward-looking blog posts or Red Hat summit videos.
>
> And then it simply makes a lot of sense to have those crucial management VMs 
> that control the swarm of resources in a cloud run under a fault-tolerant VM 
> management environment such as oVirt. 
>
> As for nested virtualization, I can only provide some negative experience 
> with oVirt. 
>
> I've done nested virtualization with VMware, running ESX on VMware
> Workstation and then some VMs underneath that ESX successfully.
>
> When I tried to do the same with oVirt, running an oVirt hyperconverged 
> cluster as VMs on VMware workstation, I got reasonably far, but then every VM 
> I tried to launch nested would just stop right at boot. 
>
> It first happened with the hosted-engine, as that tries to start as a VM, but 
> it happened more obviously when I tried to migrate VMs (just a plain Fedora 
> 30 in this case) from a physical compute node running oVirt nodeOS to a 
> virtualized compute node, running the same current oVirt nodeOS image on 
> VMware Workstation (Windows 2016 host, if that matters). 
>
> The machine would start the live-migration properly and then just freeze and 
> stay frozen until I migrated it back or indeed to any other non-virtualized 
> node. 
>
> I haven't as yet tried to do that with nested KVM, which I believe might
> actually work: but it's on my list of things to test, so I'll report back
> once I get around to it.
>
> Actually I am wondering, a) how deep this nesting would be allowed to go and 
> b) if the 'leaf' nodes at the last nesting level should actually have nesting 
> support activated in their kernel parameters... 
>
> Any insight here would be well appreciated.


[ovirt-users] Re: Any real experiences with nested virtualisation?

2019-09-04 Thread thomas
First, it's a scenario the oVirt team seem to actually embrace themselves.

I believe I have seen that type of deployment mentioned either in one of the
forward-looking blog posts or Red Hat summit videos.

And then it simply makes a lot of sense to have those crucial management VMs 
that control the swarm of resources in a cloud run under a fault-tolerant VM 
management environment such as oVirt.

As for nested virtualization, I can only provide some negative experience with 
oVirt.

I've done nested virtualization with VMware, running ESX on VMware Workstation
and then some VMs underneath that ESX successfully.

When I tried to do the same with oVirt, running an oVirt hyperconverged cluster 
as VMs on VMware workstation, I got reasonably far, but then every VM I tried 
to launch nested would just stop right at boot.

It first happened with the hosted-engine, as that tries to start as a VM, but 
it happened more obviously when I tried to migrate VMs (just a plain Fedora 30 
in this case) from a physical compute node running oVirt nodeOS to a 
virtualized compute node, running the same current oVirt nodeOS image on VMware 
Workstation (Windows 2016 host, if that matters).

The machine would start the live-migration properly and then just freeze and 
stay frozen until I migrated it back or indeed to any other non-virtualized 
node.

I haven't as yet tried to do that with nested KVM, which I believe might
actually work: but it's on my list of things to test, so I'll report back once
I get around to it.
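
What I plan to check first on the outer hosts is whether nesting is actually
switched on (a sketch for Intel CPUs; use kvm_amd instead of kvm_intel on AMD):

cat /sys/module/kvm_intel/parameters/nested    # Y or 1 means nesting is enabled
echo 'options kvm_intel nested=1' > /etc/modprobe.d/kvm-nested.conf
modprobe -r kvm_intel && modprobe kvm_intel    # reload the module with nesting on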

Actually I am wondering, a) how deep this nesting would be allowed to go and b) 
if the 'leaf' nodes at the last nesting level should actually have nesting 
support activated in their kernel parameters...

Any insight here would be well appreciated.


[ovirt-users] Re: Cannot Increase Hosted Engine VM Memory

2019-09-04 Thread Dionysis K
Now I noticed another strange thing: I deactivated the memory balloon option
on the hosted-engine VM and tried to update the values again with the same
memory. It still keeps the memory size value at 5120, while the physical
memory guaranteed value changed to what we desire, 8192, and then the error
appears again on the engine VM.

I checked the engine log file; it shows the same errors!

Are there other log file locations that can give us any clues?
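
So far I have only checked the standard locations (a sketch; these are the
stock log paths, and whether anything more verbose exists is exactly my
question):

# on the engine VM
grep -E 'HOT_SET_MEMORY|GUARANTEED' /var/log/ovirt-engine/engine.log
# on the host running the HostedEngine VM
grep -i 'memory' /var/log/vdsm/vdsm.log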


[ovirt-users] Re: Procedure to replace out out of three hyperconverged nodes

2019-09-04 Thread thomas
I was eventually able to solve that problem by using net.ifnames=0 on the
kernel boot command line during the installation of CentOS or the oVirt Node
image, to avoid injecting non-portable, machine-specific network names and
configurations into the overlay network.
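
For the record, persisting that choice after installation boils down to
something like this on CentOS 7 (a sketch; adjust the grub.cfg path on EFI
systems):

# append net.ifnames=0 to the default kernel command line
sed -i 's/^GRUB_CMDLINE_LINUX="/&net.ifnames=0 /' /etc/default/grub
grub2-mkconfig -o /boot/grub2/grub.cfg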

Unfortunately this won't work in more production-type environments with bonded
pairs of purpose-segregated NICs.

But even then I'd go as far as recommending going with ethX and hand-tuning the 
/etc/sysconfig/network-scripts in case of potential hardware replacements, 
rather than trying to correct the wiring in the overlay network while a gluster 
node is down. At least with the current documentation.


[ovirt-users] Re: Cannot Increase Hosted Engine VM Memory

2019-09-04 Thread Dionysis K
Yes, I have tried that as well: updating them both to the 8192 MB value. It
still reverts back to 5120 and transforms the guaranteed memory to 5461.


[ovirt-users] Re: Health Warning: Cinnamon toxic when adding EL7 nodes as oVirt hosts

2019-09-04 Thread thomas
> On Mon, Aug 26, 2019 at 2:13 PM  
> rpmUtils is part of yum itself, in the yum libraries. miniyum is part of 
> otopi.
> 
> 
> It was:
> 
> https://bugzilla.redhat.com/show_bug.cgi?id=1724056
> 
> But only for 4.4 (the next version).
> 
Otopi seems to generate a tarball on the host that runs the wizard, which is
then forwarded to the hosted-engine VM for execution. In that context a
version comparison function (something like "CompareEVR") fails to execute,
because it requires rpmUtils to be loaded. To my shame I am not a Python
programmer, but it seems to be late-binding, which may be why it's not
universally failing at the import of rpmUtils, but only when Compare..EVR
is called.
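
A quick way to check whether the comparison is even usable on the
hosted-engine VM (a sketch; I am assuming the function otopi wraps is yum's
rpmUtils.miscutils.compareEVR):

# prints -1 if rpmUtils loads and the comparison works; an ImportError otherwise
python -c 'from rpmUtils.miscutils import compareEVR; print(compareEVR(("0","1","1"), ("0","1","2")))'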

Tracing what's going on in the dynamic context of a Python program executing
from a tarball that's just been sent over an SSH wire to a VM that lies in
agony... is a bit steep, especially because it takes an hour to recreate the
circumstances.

Perhaps I'd volunteer to try 4.4 early instead ;-)
> 
> If you want a minimal installation, that already includes exactly what's
> needed, you can also use ovirt-node :-)

The oVirt Nodes work just fine and that's what I fell back on early, when I
wasn't sure about the cause.

But one usage scenario involves big machine learning compute hosts running 
Nvidia workloads on V100 GPUs in Docker containers (or basically bare metal), 
while some support workloads would be managed in an oVirt corner on those same 
hosts.

It basically anticipates where oVirt wants to go, mixing scale-in and scale-out 
workloads under one management framework.

It's also necessary, as many current ML-capable GPUs aren't actually supported
inside virtual machines, for market segmentation reasons.
> 
> If you still want to, you can open a bug, and attach all relevant
> logs, including yum log and something like:
> 
> lastid=$(yum history | awk '/\|/ && $1 ~ /[0-9]/ {print $1; exit}')
> for id in $(seq $lastid); do
>     echo "=== ID $id ==="
>     yum history info $id
> done
> 
> 
> Very well, you can still use some file upload service. I never used
> the web interface for the users@ mailing list, no idea how it looks
> like.
I believe the bug tracker has a native attachment facility: I was just slightly
confused that I might be seeing a different UI than everybody else.
> 
> Best regards,
Likewise and thanks for your encouraging help so far!


[ovirt-users] Re: [ANN] oVirt 4.3.6 Fourth Release Candidate is now available for testing

2019-09-04 Thread Strahil
Hi Sandro,

Can the update work with 7.6?
I'm asking because CentOS 7.7 is still building and unavailable.

Best Regards,
Strahil Nikolov

On Aug 29, 2019 10:47, Sandro Bonazzola wrote:
>
> The oVirt Project is pleased to announce the availability of the oVirt 4.3.6 
> Fourth Release Candidate for testing, as of August 29th, 2019.
>
> This update is a release candidate of the sixth in a series of stabilization 
> updates to the 4.3 series.
> This is pre-release software. This pre-release should not be used in
> production.
>
> This release is available now on x86_64 architecture for:
> * Red Hat Enterprise Linux 7.7 or later (but <8)
> * CentOS Linux (or similar) 7.7 or later (but <8)
>
> This release supports Hypervisor Hosts on x86_64 and ppc64le architectures 
> for:
> * Red Hat Enterprise Linux 7.7 or later (but <8)
> * CentOS Linux (or similar) 7.7 or later (but <8)
> * oVirt Node 4.3 (available for x86_64 only) will be made available when
> CentOS 7.7 is released.
>
> See the release notes [1] for known issues, new features and bugs fixed.
>
> Notes:
> - oVirt Appliance is already available
> - oVirt Node is not yet available, pending the CentOS 7.7 release
>
> Additional Resources:
> * Read more about the oVirt 4.3.6 release highlights: 
> http://www.ovirt.org/release/4.3.6/
> * Get more oVirt Project updates on Twitter: https://twitter.com/ovirt
> * Check out the latest project news on the oVirt
> blog: http://www.ovirt.org/blog/
>
> [1] http://www.ovirt.org/release/4.3.6/
> [2] http://resources.ovirt.org/pub/ovirt-4.3-pre/iso/
>
> -- 
>
> Sandro Bonazzola
>
> MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
>
> Red Hat EMEA
>
> sbona...@redhat.com   
>
> Red Hat respects your work life balance. Therefore there is no need to answer 
> this email out of your office hours.


[ovirt-users] Re: Cannot Increase Hosted Engine VM Memory

2019-09-04 Thread Simone Tiraboschi
On Wed, Sep 4, 2019 at 6:15 PM Dionysis K  wrote:

> If you look at the zipped engine log:
>
> I need to add that whatever value I enter in the memory size field does
> not stick; it still reverts back to 5120 for some reason.
>

And indeed in the logs I see that:
2019-09-04 18:04:56,535+03 INFO
 [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(default task-8) [564cc40c] EVENT_ID: HOT_SET_MEMORY(2,039), Hotset memory:
changed the amount of memory on VM HostedEngine from 5120 to 5120
2019-09-04 18:04:56,585+03 INFO
 [org.ovirt.engine.core.bll.HotSetAmountOfMemoryCommand] (default task-8)
[6426f726] Running command: HotSetAmountOfMemoryCommand internal: true.
Entities affected :  ID: f058c188-43c5-4685-88e0-c88b3c9abd01 Type:
VMAction group EDIT_VM_PROPERTIES with role type USER
2019-09-04 18:04:56,587+03 INFO
 [org.ovirt.engine.core.vdsbroker.SetAmountOfMemoryVDSCommand] (default
task-8) [6426f726] START, SetAmountOfMemoryVDSCommand(HostName = Deimos,
Params:{hostId='0fafebe7-14e8-4e4f-916c-d56b7b5150f8',
vmId='f058c188-43c5-4685-88e0-c88b3c9abd01',
memoryDevice='VmDevice:{id='VmDeviceId:{deviceId='0c7de7ff-e68e-42d1-a807-4e3ee92201e8',
vmId='f058c188-43c5-4685-88e0-c88b3c9abd01'}', device='memory',
type='MEMORY', specParams='[node=0, size=2944]', address='',
managed='true', plugged='true', readOnly='false',
deviceAlias='ua-0c7de7ff-e68e-42d1-a807-4e3ee92201e8',
customProperties='null', snapshotId='null', logicalName='null',
hostDevice='null'}', minAllocatedMem='5461'}), log id: 4d62125f
2019-09-04 18:04:56,641+03 INFO
 [org.ovirt.engine.core.vdsbroker.SetAmountOfMemoryVDSCommand] (default
task-8) [6426f726] FINISH, SetAmountOfMemoryVDSCommand, return: , log id:
4d62125f
2019-09-04 18:04:56,669+03 INFO
 [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(default task-8) [6426f726] EVENT_ID: HOT_SET_MEMORY(2,039), Hotset memory:
changed the amount of memory on VM HostedEngine from 5120 to 5120

On the other hand, I also see:
2019-09-04 18:04:17,549+03 ERROR
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(EE-ManagedThreadFactory-engineScheduled-Thread-66) [] EVENT_ID:
VM_MEMORY_UNDER_GUARANTEED_VALUE(148), VM HostedEngine on host Deimos was
guaranteed 5461 MB but currently has 5120 MB

Can you please try also updating the guaranteed memory value in the UI?





[ovirt-users] Re: ovirt-engine-extension-aaa-ldap-setup

2019-09-04 Thread Rick A
Thanks for the reply. That doesn't seem to work for me either. The strange
part is that if I apply the settings anyway and use a wildcard "*" in oVirt
when searching for users, it lists users from a specific OU only, even though
it's set to search DC=domain,DC=com.
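
To rule out the directory side, a plain search against that base should show
whether the DC=domain,DC=com scope itself returns users from all OUs (a
sketch; substitute your own server and bind credentials):

ldapsearch -x -H ldap://dc1.domain.com -D 'binduser@domain.com' -W \
    -b 'DC=domain,DC=com' '(sAMAccountName=*)' dn | head -n 40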


[ovirt-users] Re: Cannot Increase Hosted Engine VM Memory

2019-09-04 Thread Dionysis K
If you look at the zipped engine log:

I need to add that whatever value I enter in the memory size field does not
stick; it still reverts back to 5120 for some reason.


[ovirt-users] hyperconverged single node with SSD cache fails gluster creation

2019-09-04 Thread thomas
I am seeing more success than failures at creating single and triple node
hyperconverged setups after some weeks of experimentation, so I am branching out
to additional features: in this case, the ability to use SSDs as cache media for
hard disks.

I tried first with a single node that combined caching and compression, and that
failed during the creation of the LVs.

I tried again without the VDO compression, but the results were identical,
while VDO compression without the LV cache worked fine.

I tried various combinations, using less space etc., but the results are always 
the same and unfortunately rather cryptic (substituted the physical disk label 
with {disklabel}):

TASK [gluster.infra/roles/backend_setup : Extend volume group] *
failed: [{hostname}] (item={u'vgname': u'gluster_vg_{disklabel}p1', 
u'cachethinpoolname': u'gluster_thinpool_gluster_vg_{disklabel}p1', 
u'cachelvname': u'cachelv_gluster_thinpool_gluster_vg_{disklabel}p1', 
u'cachedisk': u'/dev/sda4', u'cachemetalvname': 
u'cache_gluster_thinpool_gluster_vg_{disklabel}p1', u'cachemode': u'writeback', 
u'cachemetalvsize': u'70G', u'cachelvsize': u'630G'}) => {"ansible_loop_var": 
"item", "changed": false, "err": "  Physical volume 
\"/dev/mapper/vdo_{disklabel}p1\" still in use\n", "item": {"cachedisk": 
"/dev/sda4", "cachelvname": 
"cachelv_gluster_thinpool_gluster_vg_{disklabel}p1", "cachelvsize": "630G", 
"cachemetalvname": "cache_gluster_thinpool_gluster_vg_{disklabel}p1", 
"cachemetalvsize": "70G", "cachemode": "writeback", "cachethinpoolname": 
"gluster_thinpool_gluster_vg_{disklabel}p1", "vgname": 
"gluster_vg_{disklabel}p1"}, "msg": "Unable to reduce gluster_vg_{disklabel}p1 
by /dev/dm-15.", "rc": 5}

Somewhere within that I see something that points to a race condition ("still
in use").

Unfortunately I have not been able to pinpoint the raw logs which are used at 
that stage and I wasn't able to obtain more info.
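
For anyone trying to dig deeper, the obvious commands to inspect the device
stack at the moment of failure would be (a sketch, using the names from the
error above):

pvs -o pv_name,vg_name /dev/mapper/vdo_{disklabel}p1   # is the PV already in a VG?
dmsetup ls --tree      # what sits on top of the VDO device
lsblk                  # overall view of the block stack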

At this point quite a bit of storage setup is already done, so rolling back
for a clean new attempt can be a bit complicated, with reboots to reconcile
the kernel with the data on disk.
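
In case it helps others, the rollback I would attempt looks roughly like this
(a sketch; verify the names with lvs/vgs and 'vdo list' first, and expect a
reboot if device-mapper refuses to let go):

lvremove -y gluster_vg_{disklabel}p1    # drop any LVs created so far
vgremove -y gluster_vg_{disklabel}p1
vdo remove --name=vdo_{disklabel}p1     # tear down the VDO volume
wipefs -a /dev/sda4                     # clear leftover signatures on the cache partition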

I don't actually believe it's related to single node and I'd be quite happy to
move the creation of the SSD cache to a later stage, but in a VDO setup, this
looks slightly complex to someone without intimate knowledge of
LVs-with-cache-and-perhaps-thin/VDO/Gluster all thrown into one.

Needless to say, the feature set (SSD caching & compression/dedup) sounds
terribly attractive, but when things don't just work, it's all the more terrifying.


[ovirt-users] Re: Cannot Increase Hosted Engine VM Memory

2019-09-04 Thread Simone Tiraboschi
On Wed, Sep 4, 2019 at 4:30 PM Dionysis K  wrote:

> Hello, I am having the same problem: I cannot update the engine memory
> configuration.
>
> I even updated the engine from 4.2.8 to 4.3.5 and the problem still
> persists!
>
> How can we find out what's going on?
>

Hi,
can you please attach engine.log ?




[ovirt-users] Re: Cannot Increase Hosted Engine VM Memory

2019-09-04 Thread Dionysis K
Hello, I am having the same problem: I cannot update the engine memory
configuration.

I even updated the engine from 4.2.8 to 4.3.5 and the problem still persists!

How can we find out what's going on?


[ovirt-users] Re: iSCSI Multipath/multiple gateways

2019-09-04 Thread Dan Poltawski
On Tue, 2019-09-03 at 17:27 +0200, Matthias Leopold wrote:
> - multipath configuration on hypervisor hosts is then automatically set up
> without further intervention. Be sure to add the multipath.conf snippets
> from https://docs.ceph.com/docs/master/rbd/iscsi-initiator-linux/ to your
> hypervisor hosts, so multipathd works correctly (I originally forgot to do
> this...)

Thanks for your response - adding those multipath snippets did help me set
up the multipath when I log in manually, but as far as I can tell I have
missed something with the login for the hosted-engine deployment, such that
on a reboot of a hypervisor it doesn't seem to discover the additional hosts
unless I do a manual login with:

 iscsiadm -m node -T  -l

I'm wondering if I need something additional in
/etc/ovirt-hosted-engine/hosted-engine.conf to trigger this.
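
One thing I will try is forcing the recorded nodes to log in automatically at
boot (a sketch; the iscsiadm syntax is standard open-iscsi, but whether
hosted-engine relies on it for this is my assumption):

# show the current startup setting for a recorded node
iscsiadm -m node -T <target-iqn> -p <portal> | grep node.startup
# switch the node record to automatic login at boot
iscsiadm -m node -T <target-iqn> -p <portal> -o update -n node.startup -v automatic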

cheers,

Dan


