[ovirt-users] Hugepages and running out of memory

2020-02-21 Thread klaasdemter

Hi,

this e-mail is meant to caution people who run hugepages VMs alongside 
non-hugepages VMs on the same hypervisors. I ran into major trouble: the 
hypervisors ran out of memory because VM scheduling disregards hugepages 
in its calculations. So if you mix hugepages and non-hugepages VMs, 
better check the memory committed on each hypervisor manually :)
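
A quick manual sanity check on a hypervisor could look like this (a 
sketch; assumes the default 2 MiB hugepage size, adjust for 1 GiB pages):

    # Memory reserved for hugepages is not usable for regular VM allocations:
    grep -E 'HugePages_(Total|Free)|Hugepagesize' /proc/meminfo
    free -m
    # Compare the memory committed to non-hugepages VMs on this host against
    # MemTotal minus (HugePages_Total * Hugepagesize); per the bugs below,
    # the scheduler effectively compares against MemTotal alone.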



https://bugzilla.redhat.com/show_bug.cgi?id=1804037

https://bugzilla.redhat.com/show_bug.cgi?id=1804046


As for workarounds: so far the only viable solution seems to be 
splitting hugepages and non-hugepages VMs with affinity groups, but at 
least for me that means wasting a lot of resources.



Greetings

Klaas


[ovirt-users] Re: oVirt behavior with thin provision/deduplicated block storage

2020-02-21 Thread Nir Soffer
On Fri, Feb 21, 2020, 17:14 Alan G wrote:

> Hi,
>
> I have an oVirt cluster with a storage domain hosted on a FC storage array
> that utilises block de-duplication technology. oVirt reports the capacity
> of the domain as though the de-duplication factor was 1:1, which of course
> is not the case. So what I would like to understand is the likely behavior
> of oVirt when the used space approaches the reported capacity, particularly
> around the critical space action blocker.
>

oVirt does not know about the underlying block storage thin provisioning
implementation, so it cannot help with this.

You will have to use the underlying storage separately to learn about the
actual allocation.
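
For reference, this is roughly what oVirt itself can report per storage
domain (a sketch; engine host, credentials and domain name are
placeholders), and these are all logical, pre-deduplication numbers:

    curl -s -k -u admin@internal:PASSWORD \
      'https://engine.example.com/ovirt-engine/api/storagedomains?search=name%3Dmydomain' \
      | grep -E '<available>|<used>|<committed>'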

This is unlikely to change for legacy storage, but for Managed Block
Storage (cinderlib) we may have a way to access such info.

Gorka, do we have any support in cinderlib for getting info about storage
allocation and deduplication?

Nir


[ovirt-users] Re: main channel: failed to connect HTTP proxy connection not allowed

2020-02-21 Thread Jorick Astrego
Duh, too long a day.

I disabled the spice proxy in the console options.

Still doesn't explain why it also fails when using the userportal console.
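
One way to see what the proxy actually answers to a CONNECT request (a
sketch; the proxy host, proxy port and hypervisor host:port are
placeholders taken from your console.vv):

    printf 'CONNECT hypervisor.example.com:5900 HTTP/1.0\r\n\r\n' | \
        nc userportal.example.com 3128
    # Anything other than an HTTP 200 response line matches the client-side
    # "HTTP proxy connection not allowed" error.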

Regards,

Jorick Astrego


On 2/21/20 4:33 PM, Jorick Astrego wrote:
>
> Hi,
>
> Having a spice console issue on our new 4.3.8 cluster. The console
> stays blank, and with the CLI and debug enabled I get the following error:
> "main channel: failed to connect HTTP proxy connection not allowed"
>
> We have a userportal behind haproxy, so this is configured in ovirt-engine:
>
> engine-config -g SpiceProxyDefault
> SpiceProxyDefault: http://userportal.*.*:/ version: general
>
> But the same setting works fine on our 4.2 cluster.
>
> I can telnet to the ports on the 4.3.8 hosts without issue.
>
> [...]

[ovirt-users] Re: [ANN] oVirt 4.3.9 Second Release Candidate is now available for testing

2020-02-21 Thread Sandro Bonazzola
On Fri, Feb 21, 2020 at 08:23, Paolo Margara <paolo.marg...@polito.it> wrote:

> Hi Lev,
>
> when is the final release planned?
>

No exact date set; hopefully in a couple of weeks.
You can get a hint about release readiness by looking at the bugzilla
backlog:
https://bugzilla.redhat.com/buglist.cgi?quicksearch=target_milestone%3Aovirt-4.3.9%20status%3Anew%2Cassigned%2Cpost%20-keyword%3Adocumentation
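
If you want to script the check, the same query should also work against
Bugzilla's REST interface (a sketch; the quicksearch value is just the
URL-encoded query above):

    curl -s 'https://bugzilla.redhat.com/rest/bug?quicksearch=target_milestone%3Aovirt-4.3.9%20status%3Anew%2Cassigned%2Cpost%20-keyword%3Adocumentation&include_fields=id,summary,status'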


>
> Greetings,
>
> Paolo
> On 20/02/20 18:39, Lev Veyde wrote:
>
> The oVirt Project is pleased to announce the availability of the oVirt
> 4.3.9 Second Release Candidate for testing, as of February 20th, 2020.
>
> This update is a release candidate of the ninth in a series of
> stabilization updates to the 4.3 series.
> This is pre-release software. This pre-release should not be used in
> production.
>
> This release is available now on x86_64 architecture for:
> * Red Hat Enterprise Linux 7.7 or later (but <8)
> * CentOS Linux (or similar) 7.7 or later (but <8)
>
> This release supports Hypervisor Hosts on x86_64 and ppc64le architectures
> for:
> * Red Hat Enterprise Linux 7.7 or later (but <8)
> * CentOS Linux (or similar) 7.7 or later (but <8)
> * oVirt Node 4.3 (available for x86_64 only) has been built consuming
> CentOS 7.7 Release
>
> See the release notes [1] for known issues, new features and bugs fixed.
>
> Notes:
> - oVirt Appliance is already available
> - oVirt Node is already available
>
> Additional Resources:
> * Read more about the oVirt 4.3.9 release highlights:
> http://www.ovirt.org/release/4.3.9/
> * Get more oVirt Project updates on Twitter: https://twitter.com/ovirt
> * Check out the latest project news on the oVirt blog:
> http://www.ovirt.org/blog/
>
> [1] http://www.ovirt.org/release/4.3.9/
> [2] http://resources.ovirt.org/pub/ovirt-4.3-pre/iso/
>
> --
>
> Lev Veyde
>
> Senior Software Engineer, RHCE | RHCVA | MCITP
>
> Red Hat Israel
>
> 
>
> l...@redhat.com | lve...@redhat.com
> 
> TRIED. TESTED. TRUSTED. 
>


-- 

Sandro Bonazzola

MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV

Red Hat EMEA 

sbona...@redhat.com
Red Hat respects your work life balance.
Therefore there is no need to answer this email out of your office hours.


[ovirt-users] main channel: failed to connect HTTP proxy connection not allowed

2020-02-21 Thread Jorick Astrego
Hi,

Having a spice console issue on our new 4.3.8 cluster. The console stays
blank, and with the CLI and debug enabled I get the following error:
"main channel: failed to connect HTTP proxy connection not allowed"

We have a userportal behind haproxy, so this is configured in ovirt-engine:

engine-config -g SpiceProxyDefault
SpiceProxyDefault: http://userportal.*.*:/ version: general

But the same setting works fine on our 4.2 cluster.

I can telnet to the ports on the 4.3.8 hosts without issue.
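
Since the .vv file is a plain INI file, it may be worth checking which
proxy the client is actually told to use (a sketch):

    grep -i proxy console.vv
    # remote-viewer issues an HTTP CONNECT through any proxy= value found in
    # the [virt-viewer] section; the error suggests that CONNECT was refused.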

Non working console on newer cluster:

remote-viewer -v --debug console.vv
(remote-viewer:29267): virt-viewer-DEBUG: 16:24:24.697: Opening
display to console.vv
(remote-viewer:29267): virt-viewer-DEBUG: 16:24:24.700: Guest (null)
has a spice display
Guest (null) has a spice display
(remote-viewer:29267): virt-viewer-DEBUG: 16:24:24.789: Spice
foreign menu updated
(remote-viewer:29267): virt-viewer-DEBUG: 16:24:24.790: After open
connection callback fd=-1
(remote-viewer:29267): virt-viewer-DEBUG: 16:24:24.790: Opening
connection to display at console.vv
Opening connection to display at console.vv
(remote-viewer:29267): virt-viewer-DEBUG: 16:24:24.791: fullscreen
display 0: 0
(remote-viewer:29267): virt-viewer-DEBUG: 16:24:24.791: app is not
in full screen
(remote-viewer:29267): virt-viewer-DEBUG: 16:24:24.792: New spice
channel 0x55704bf70d40 SpiceMainChannel 0
(remote-viewer:29267): virt-viewer-DEBUG: 16:24:24.793: notebook
show status 0x55704b820280
(remote-viewer:29267): virt-viewer-DEBUG: 16:24:24.969: main
channel: failed to connect HTTP proxy connection not allowed
(remote-viewer:29267): virt-viewer-DEBUG: 16:24:24.969: Destroy
SPICE channel SpiceMainChannel 0
(remote-viewer:29267): virt-viewer-DEBUG: 16:24:24.969: zap main channel

(remote-viewer:29267): virt-viewer-WARNING **: 16:24:24.969: Channel
error: HTTP proxy connection not allowed

Working console on older cluster:

remote-viewer -v --debug console.vv
(remote-viewer:29334): virt-viewer-DEBUG: 16:25:59.111: Opening
display to console.vv
(remote-viewer:29334): virt-viewer-DEBUG: 16:25:59.111: Guest (null)
has a spice display
Guest (null) has a spice display
(remote-viewer:29334): virt-viewer-DEBUG: 16:25:59.209: Spice
foreign menu updated
(remote-viewer:29334): virt-viewer-DEBUG: 16:25:59.209: After open
connection callback fd=-1
(remote-viewer:29334): virt-viewer-DEBUG: 16:25:59.209: Opening
connection to display at console.vv
Opening connection to display at console.vv
(remote-viewer:29334): virt-viewer-DEBUG: 16:25:59.211: fullscreen
display 0: 0
(remote-viewer:29334): virt-viewer-DEBUG: 16:25:59.211: app is not
in full screen
(remote-viewer:29334): virt-viewer-DEBUG: 16:25:59.212: New spice
channel 0x556a03f7e820 SpiceMainChannel 0
(remote-viewer:29334): virt-viewer-DEBUG: 16:25:59.212: notebook
show status 0x556a03c7a280
(remote-viewer:29334): virt-viewer-DEBUG: 16:25:59.397: main
channel: opened
(remote-viewer:29334): virt-viewer-DEBUG: 16:25:59.397: notebook
show status 0x556a03c7a280
(remote-viewer:29334): virt-viewer-DEBUG: 16:25:59.404:
virt_viewer_app_set_uuid_string: UUID changed to
a2d64ffe-7583-45b6-92f3-87661039388c
(remote-viewer:29334): virt-viewer-DEBUG: 16:25:59.404: app is not
in full screen
(remote-viewer:29334): virt-viewer-DEBUG: 16:25:59.502: app is not
in full screen
(remote-viewer:29334): virt-viewer-DEBUG: 16:25:59.514: New spice
channel 0x556a043bb660 SpiceDisplayChannel 0
(remote-viewer:29334): virt-viewer-DEBUG: 16:25:59.514: New spice
channel 0x556a0422b570 SpiceCursorChannel 0
(remote-viewer:29334): virt-viewer-DEBUG: 16:25:59.515: New spice
channel 0x556a0422c000 SpiceInputsChannel 0
(remote-viewer:29334): virt-viewer-DEBUG: 16:25:59.515: new inputs
channel
(remote-viewer:29334): virt-viewer-DEBUG: 16:25:59.661: creating
spice display (#:0)
(remote-viewer:29334): virt-viewer-DEBUG: 16:25:59.661: Insert
display 0 0x556a03ca29e0
(remote-viewer:29334): virt-viewer-DEBUG: 16:25:59.699: creating
spice display (#:1)
(remote-viewer:29334): virt-viewer-DEBUG: 16:25:59.699: Insert
display 1 0x556a03ca2830
(remote-viewer:29334): virt-viewer-DEBUG: 16:25:59.699: creating
spice display (#:2)
(remote-viewer:29334): virt-viewer-DEBUG: 16:25:59.700: Insert
display 2 0x556a03ca2680
(remote-viewer:29334): virt-viewer-DEBUG: 16:25:59.700: creating
spice display (#:3)
(remote-viewer:29334): virt-viewer-DEBUG: 16:25:59.700: Insert
display 3 0x556a03ca24d0
(remote-viewer:29334): virt-viewer-DEBUG: 16:25:59.701: Found a
window without a display, reusing for display #0
(remote-viewer:29334): virt-viewer-DEBUG: 16:25:59.701: notebook
show display 0x556a03c7a280
(remote-viewer:29334): 

[ovirt-users] Understanding dashboard memory data

2020-02-21 Thread nux

Hello,

I'm having to deal with an oVirt installation; once I log in, the 
dashboard says:


Memory: "1.2 Available of 4.5 TiB" and "Virtual resources - Committed: 
113%, Allocated: 114%".


So, clearly there is RAM available, but what's with the committed and 
allocated numbers?
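
A rough way to cross-check those percentages (a sketch; engine host and
credentials are placeholders). The committed/allocated figures presumably
count memory defined for VMs rather than memory actually in use, so they
can exceed 100% while plenty of RAM is still free:

    # Sum <memory> over running VMs and compare with physical <memory> of hosts:
    curl -s -k -u admin@internal:PASSWORD \
      'https://engine.example.com/ovirt-engine/api/vms?search=status%3Dup' | grep '<memory>'
    curl -s -k -u admin@internal:PASSWORD \
      'https://engine.example.com/ovirt-engine/api/hosts' | grep '<memory>'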


Regards


[ovirt-users] oVirt behavior with thin provision/deduplicated block storage

2020-02-21 Thread Alan G
Hi,



I have an oVirt cluster with a storage domain hosted on a FC storage array that 
utilises block de-duplication technology. oVirt reports the capacity of the 
domain as though the de-duplication factor was 1:1, which of course is not the 
case. So what I would like to understand is the likely behavior of oVirt when 
the used space approaches the reported capacity, particularly around the 
critical space action blocker.
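
The blocker threshold itself is a per-domain property; a sketch for
reading it via the REST API (engine host, credentials and the domain UUID
are placeholders):

    curl -s -k -u admin@internal:PASSWORD \
      'https://engine.example.com/ovirt-engine/api/storagedomains/UUID' \
      | grep -E '<critical_space_action_blocker>|<warning_low_space_indicator>'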



Thanks,



Alan


[ovirt-users] Re: terraform integration

2020-02-21 Thread Nathanaël Blanchet

Hello,

It seems that the work for including oVirt as a provider in the master 
branch of the openshift installer has been done. I compiled the master code 
and oVirt does appear in the survey.
I don't have much time to test it for now, but is it operational? If 
yes, I will prioritize having a look at it.


Thanks.

On 06/01/2020 at 21:30, Roy Golan wrote:



The merge window is now open for the master branches of the various 
origin components.
Post merge there should be an OKD release - this is not under my 
control, but when it is available I'll let you know.


On Mon, 6 Jan 2020 at 20:54, Nathanaël Blanchet wrote:


Hello Roy

On 21/11/2019 at 13:57, Roy Golan wrote:



On Thu, 21 Nov 2019 at 08:48, Roy Golan <rgo...@redhat.com> wrote:



On Wed, 20 Nov 2019 at 09:49, Nathanaël Blanchet <blanc...@abes.fr> wrote:


On 19/11/2019 at 19:23, Nathanaël Blanchet wrote:



On 19/11/2019 at 13:43, Roy Golan wrote:



On Tue, 19 Nov 2019 at 14:34, Nathanaël Blanchet <blanc...@abes.fr> wrote:

On 19/11/2019 at 08:55, Roy Golan wrote:

oc get -o json clusterversion


This is the output of the previous failed
deployment; I'll give a newer one a try when
I have a minute to test.


Without changing anything in the template, I gave it a new
try and... nothing works anymore; none of the provided
IPs can be pinged: "dial tcp 10.34.212.51:6443:
connect: no route to host", so none of the masters can be
provisioned by the bootstrap node.

I tried with the latest RHCOS and the latest oVirt 4.3.7; it
is the same. Obviously something changed since my first
attempt 12 days ago... is your docker image for
openshift-installer up to date?
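
A quick way to pin down which release payload an installer build uses (a
sketch; both are standard commands, and the pull spec is the one from the
logs below):

    openshift-install version
    oc adm release info registry.svc.ci.openshift.org/origin/release:4.3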

Are you still able, on your side, to deploy a valid cluster?


I investigated looking at bootstrap logs (attached) and
it seems that every containers die immediately after been
started.

Nov 20 07:02:33 localhost podman[2024]: 2019-11-20 07:02:33.60107571 +0000 UTC m=+0.794838407 container init 446dc9b7a04ff3ff4bbcfa6750e3946c084741b39707eb088c9d7ae648e35603 (image=registry.svc.ci.openshift.org/origin/release:4.3, name=eager_cannon)
Nov 20 07:02:33 localhost podman[2024]: 2019-11-20 07:02:33.623197173 +0000 UTC m=+0.816959853 container start 446dc9b7a04ff3ff4bbcfa6750e3946c084741b39707eb088c9d7ae648e35603 (image=registry.svc.ci.openshift.org/origin/release:4.3, name=eager_cannon)
Nov 20 07:02:33 localhost podman[2024]: 2019-11-20 07:02:33.623814258 +0000 UTC m=+0.817576965 container attach 446dc9b7a04ff3ff4bbcfa6750e3946c084741b39707eb088c9d7ae648e35603 (image=registry.svc.ci.openshift.org/origin/release:4.3, name=eager_cannon)
Nov 20 07:02:34 localhost systemd[1]: libpod-446dc9b7a04ff3ff4bbcfa6750e3946c084741b39707eb088c9d7ae648e35603.scope: Consumed 814ms CPU time
Nov 20 07:02:34 localhost podman[2024]: 2019-11-20 07:02:34.100569998 +0000 UTC m=+1.294332779 container died 446dc9b7a04ff3ff4bbcfa6750e3946c084741b39707eb088c9d7ae648e35603 (image=registry.svc.ci.openshift.org/origin/release:4.3, name=eager_cannon)
Nov 20 07:02:35 localhost podman[2024]: 2019-11-20 07:02:35.138523102 +0000 UTC m=+2.332285844 container remove 446dc9b7a04ff3ff4bbcfa6750e3946c084741b39707eb088c9d7ae648e35603 (image=registry.svc.ci.openshift.org/origin/release:4.3, name=eager_cannon)

and this:

Nov 20 07:04:16 localhost hyperkube[1909]: E1120 07:04:16.489527    1909 remote_runtime.go:200] CreateContainer in sandbox "58f2062aa7b6a5b2bdd6b9cf7b41a9f94ca2b30ad5a20e4fa4dec8a9b82f05e5" from runtime service failed: rpc error: code = Unknown desc = container create failed: container_linux.go:345: starting container process caused "exec: \"runtimecfg\": executable file not found in $PATH"
Nov 20 07:04:16 localhost hyperkube[1909]: E1120 07:04:16.489714    1909 kuberuntime_manager.go:783] init