[ovirt-users] Re: Non-storage nodes erroneously included in quorum calculations for HCI?

2020-05-19 Thread Strahil Nikolov via Users
On May 20, 2020 2:37:32 AM GMT+03:00, tho...@hoberg.net wrote:
>For my home lab I operate a 3-node HCI cluster on 100% passive Atoms,
>mostly to run light infrastructure services such as LDAP and NextCloud.
>
>I then add workstations or even laptops as pure compute hosts to the
>cluster for bigger but temporary things; these may actually run a
>different OS most of the time or just be shut off. From oVirt's point
>of view, they are simply put into maintenance first and then shut down
>until needed again. No fencing or power management, all manual.
>
>All nodes, even the HCI ones, run CentOS 7 with more of a workstation
>configuration, so updates pile up pretty quickly.
>
>After I recently upgraded one of these extra compute nodes, I found my
>three-node HCI cluster not just faltering, but very hard to reactivate
>at all.
>
>The faltering is a distinct issue: I have the impression that reboots
>of oVirt nodes cause broadcast storms on my rather simplistic 10Gbit L2
>switch, which a normal CentOS instance (or any other OS) doesn't cause,
>but that's for another post.
>
>Now what struck me was that the gluster daemons on the three HCI nodes
>kept complaining about a lack of quorum long after the network was back
>to normal, even though all three of them were there, saw each other
>perfectly in "gluster volume status all", ready and without any pending
>healing issues at all.
>Glusterd would complain on all three nodes that there was no quorum for
>the bricks and stop them.
>
>That went away as soon as I started one additional compute node, a node
>that was a gluster peer (because an oVirt host added to an HCI cluster
>always gets put into the Gluster pool, even if it contributes no
>storage) but had no bricks. Immediately the gluster daemons on the
>three nodes with contributing bricks reported quorum as restored and
>launched the volumes (and thus all the rest of oVirt), even though in
>terms of *storage bricks* nothing had changed.
>
>I am afraid that downing the extra compute-only oVirt node will bring
>down the HCI cluster: clearly not the type of redundancy it's designed
>to deliver.
>
>Evidently such compute-only hosts (and gluster peers) get included
>in some quorum deliberations even though they hold not a single brick,
>neither storage nor arbiter.
>
>That seems like a bug to me, if it is indeed what happens; this is
>where I need your advice and suggestions.
>
>AFAIK HCI is a late addition to oVirt/RHEV, as storage and compute were
>originally designed to be completely distinct. In fact there are still
>remnants of documentation that seem to prohibit using a node for both
>compute and storage... which is what HCI is all about.
>
>And I have seen compute nodes with "matching" storage (parts of a
>distinct HCI setup that was taken down but still had all its storage
>and Gluster elements operable) being happily absorbed into an HCI
>cluster, with all Gluster storage appearing in the GUI etc., without
>any manual creation or inclusion of bricks: fully automatic (and
>undocumented)!
>
>In that case it makes sense to widen the scope of quorum calculations,
>because the additional nodes are hyperconverged elements with
>contributing bricks. It also seems the only way to turn a 3-node HCI
>into a 6- or 9-node one.
>
>But if you really just want to add compute nodes without bricks, those
>shouldn't get "quorum votes": without storage they play no role in the
>redundancy.
>
>I can easily imagine the missing "if-then-else" in the code here, but I
>was actually very surprised to see those failure and success messages
>coming from glusterd itself, which to my understanding is pretty much
>unrelated to the oVirt stack on top. Not from the management engine
>(which wasn't running anyway), and not from VDSM.
>
>Re-creating the scenario is very scary, even though I have gone through
>it three times already just trying to bring my HCI back up. And there
>are such verbose logs all over the place that I'd like some advice on
>which ones I should post.
>
>But simply speaking: gluster peers should get no quorum voting rights
>on volumes unless they contribute bricks. That rule seems broken.
>
>Those in the know, please let me know if I am on a wild goose chase or
>if there is a real issue here that deserves a bug report.

I have skipped a huge part of your e-mail because it was too long (don't get
offended).

Can you summarize in one (or two) sentences what exactly the problem is?
Is the UI not detecting the Gluster status, is quorum preventing you from
starting VMs, or is it something else?

Best Regards,
Strahil Nikolov

[ovirt-users] Re: Upgrade Memory of oVirt Nodes

2020-05-19 Thread Strahil Nikolov via Users
On May 20, 2020 2:48:24 AM GMT+03:00, tho...@hoberg.net wrote:
>Just like Strahil, I would expect this to work just fine. RAM
>differences are actually the smallest concern, unless you run out of it
>in the meantime. Even so, you may want to be careful and perhaps move
>VMs around manually with such a small HCI setup.
>
>oVirt will properly optimize the VMs and the hosts to fit, but I don't
>know what it will do when there simply isn't enough RAM to run the live
>VMs. Under the best of circumstances it should refuse to shut down the
>node you want to upgrade. Under less advantageous circumstances some
>VMs might get paused or shut down (or killed?). I'd be interested to
>hear your experiences, somewhat less inclined to try myself ;-)
>
>I'd play it safe and reduce the number of running VMs to relieve the
>RAM pressure. I assume there is a reason you want more RAM, but the
>only way to get there is to reduce usage first, and that doesn't imply
>"unnoticeable".

Actually,
if you set a host into maintenance and there is no room for its VMs to move 
to, oVirt fails to migrate them and the node cannot be put into maintenance.
In such cases I usually power off the non-important VMs, and then the node 
goes into maintenance mode (I guess I was fast enough, as there is some 
timeout setting for entering that mode).

From there, you will be able to power off, upgrade and return the node to the 
cluster.
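
If you want to script that flow, the same actions are exposed over the 
engine's REST API (engine address, credentials and the host UUID below are 
placeholders):

    # put the host into maintenance (the engine migrates its VMs away first)
    curl -k -u 'admin@internal:PASSWORD' -X POST \
      -H 'Content-Type: application/xml' -d '<action/>' \
      'https://engine.example.com/ovirt-engine/api/hosts/HOST_UUID/deactivate'

    # after the upgrade, bring the host back into the cluster
    curl -k -u 'admin@internal:PASSWORD' -X POST \
      -H 'Content-Type: application/xml' -d '<action/>' \
      'https://engine.example.com/ovirt-engine/api/hosts/HOST_UUID/activate'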

If a node reaches 90% memory usage, the KSM service kicks in and starts 
merging identical memory pages in order to save memory. Of course this trades 
CPU cycles for extra memory - but I haven't seen a hypervisor running out of 
CPU (hosting HPC VMs could change that). Don't expect magic from KSM, but in 
my case (32 GB) it gained 5-6 GB from my Linux VMs; this depends on the 
software running in the VMs, though.
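
If you are curious what KSM actually saves on a host, the kernel exposes 
counters in sysfs (this is the standard ksm kernel interface, nothing 
oVirt-specific):

    # 0 = stopped, 1 = running, 2 = stop and unmerge all pages
    cat /sys/kernel/mm/ksm/run

    # pages currently deduplicated against shared pages
    cat /sys/kernel/mm/ksm/pages_sharing

    # rough saving in MiB, assuming 4 KiB pages
    echo $(( $(cat /sys/kernel/mm/ksm/pages_sharing) * 4 / 1024 ))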

Best Regards,
Strahil Nikolov


[ovirt-users] Re: Upgrade Memory of oVirt Nodes

2020-05-19 Thread thomas
Just like Strahil, I would expect this to work just fine. RAM differences are 
actually the smallest concern, unless you run out of it in the meantime. Even 
so, you may want to be careful and perhaps move VMs around manually with such 
a small HCI setup.

oVirt will properly optimize the VMs and the hosts to fit, but I don't know 
what it will do when there simply isn't enough RAM to run the live VMs. Under 
the best of circumstances it should refuse to shut down the node you want to 
upgrade. Under less advantageous circumstances some VMs might get paused or 
shut down (or killed?). I'd be interested to hear your experiences, somewhat 
less inclined to try myself ;-)

I'd play it safe and reduce the number of running VMs to relieve the RAM 
pressure. I assume there is a reason you want more RAM, but the only way to 
get there is to reduce usage first, and that doesn't imply "unnoticeable".


[ovirt-users] Non-storage nodes erroneously included in quorum calculations for HCI?

2020-05-19 Thread thomas
For my home lab I operate a 3-node HCI cluster on 100% passive Atoms, mostly 
to run light infrastructure services such as LDAP and NextCloud.

I then add workstations or even laptops as pure compute hosts to the cluster 
for bigger but temporary things; these may actually run a different OS most 
of the time or just be shut off. From oVirt's point of view, they are simply 
put into maintenance first and then shut down until needed again. No fencing 
or power management, all manual.

All nodes, even the HCI ones, run CentOS 7 with more of a workstation 
configuration, so updates pile up pretty quickly.

After I recently upgraded one of these extra compute nodes, I found my 
three-node HCI cluster not just faltering, but very hard to reactivate at all.

The faltering is a distinct issue: I have the impression that reboots of oVirt 
nodes cause broadcast storms on my rather simplistic 10Gbit L2 switch, which a 
normal CentOS instance (or any other OS) doesn't cause, but that's for another 
post.

Now what struck me was that the gluster daemons on the three HCI nodes kept 
complaining about a lack of quorum long after the network was back to normal, 
even though all three of them were there, saw each other perfectly in "gluster 
volume status all", ready and without any pending healing issues at all.
Glusterd would complain on all three nodes that there was no quorum for the 
bricks and stop them.

That went away as soon as I started one additional compute node, a node that 
was a gluster peer (because an oVirt host added to an HCI cluster always gets 
put into the Gluster pool, even if it contributes no storage) but had no 
bricks. Immediately the gluster daemons on the three nodes with contributing 
bricks reported quorum as restored and launched the volumes (and thus all the 
rest of oVirt), even though in terms of *storage bricks* nothing had changed.

I am afraid that downing the extra compute-only oVirt node will bring down the 
HCI cluster: clearly not the type of redundancy it's designed to deliver.

Evidently such compute-only hosts (and gluster peers) get included in some 
quorum deliberations even though they hold not a single brick, neither storage 
nor arbiter.
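
A sketch of how this can be inspected on any of the nodes, in case someone 
wants to check my reading (the volume name "engine" is just the usual oVirt 
HCI example, and "volume get all" for global options needs a reasonably 
recent Gluster):

    # every peer in the trusted pool shows up here, bricks or not
    gluster peer status

    # is server-side quorum enforced for this volume?
    gluster volume get engine cluster.server-quorum-type

    # pool-wide percentage of peers that must be up (global option)
    gluster volume get all cluster.server-quorum-ratio

If I read the Gluster documentation correctly, this server-side quorum is 
computed over all peers in the trusted storage pool rather than over 
brick-hosting nodes, which would explain exactly what I saw.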

That seems like a bug to me, if it is indeed what happens; this is where I 
need your advice and suggestions.

AFAIK HCI is a late addition to oVirt/RHEV, as storage and compute were 
originally designed to be completely distinct. In fact there are still 
remnants of documentation that seem to prohibit using a node for both compute 
and storage... which is what HCI is all about.

And I have seen compute nodes with "matching" storage (parts of a distinct HCI 
setup that was taken down but still had all its storage and Gluster elements 
operable) being happily absorbed into an HCI cluster, with all Gluster storage 
appearing in the GUI etc., without any manual creation or inclusion of bricks: 
fully automatic (and undocumented)!

In that case it makes sense to widen the scope of quorum calculations, because 
the additional nodes are hyperconverged elements with contributing bricks. It 
also seems the only way to turn a 3-node HCI into a 6- or 9-node one.

But if you really just want to add compute nodes without bricks, those 
shouldn't get "quorum votes": without storage they play no role in the 
redundancy.

I can easily imagine the missing "if-then-else" in the code here, but I was 
actually very surprised to see those failure and success messages coming from 
glusterd itself, which to my understanding is pretty much unrelated to the 
oVirt stack on top. Not from the management engine (which wasn't running 
anyway), and not from VDSM.

Re-creating the scenario is very scary, even though I have gone through it 
three times already just trying to bring my HCI back up. And there are such 
verbose logs all over the place that I'd like some advice on which ones I 
should post.

But simply speaking: gluster peers should get no quorum voting rights on 
volumes unless they contribute bricks. That rule seems broken.
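
If that is the mechanism, a possible workaround - untested on my side, and it 
weakens split-brain protection, so treat it as a sketch rather than a 
recommendation - would be to relax or disable server-side quorum:

    # disable server-side quorum enforcement for a single volume
    gluster volume set engine cluster.server-quorum-type none

    # or lower the pool-wide ratio of peers that must be up
    gluster volume set all cluster.server-quorum-ratio 51%

I'd rather see the root cause confirmed before touching these, though.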

Those in the know, please let me know if I am on a wild goose chase or if 
there is a real issue here that deserves a bug report.


[ovirt-users] Re: Host "Unassigned Logical Networks" list is wrong

2020-05-19 Thread Dominik Holler
On Tue, May 19, 2020 at 1:25 PM Matthias Leopold
<matthias.leop...@meduniwien.ac.at> wrote:

> Hi,
>
> I'm having a special issue with assigning logical networks to host
> interfaces via label.
> I'm used to giving logical networks (tagged VLANs, VM network flag) a
> "Network label", which I can choose from a drop-down menu in the UI
> dialog. These networks are automatically flagged "assign" and "require"
> in my cluster, and I expect them to be synced to the host's physical
> interface to which I "drag and dropped" the label. This always seemed
> to work and I never worried.
>
> Now I noticed, when I look at the host's "Setup Host Networks" dialog
> again, that the last(?) logical networks I provisioned as explained
> above show up as "Unassigned Logical Networks" with a mouse-over text of
> "Network should be assigned to 'foo' via label 'bar'. However for some
> reason it isn't". This has to be some presentation error, because the
> networks are in fact assigned, which is also visible in the host's
> "Network Interfaces" tab.
>
> The "Sync All Network" buttons in "Host" - "Network Interfaces" and
> "Cluster" - "Logical Networks" tabs are inactive.
> When hosts are put to maintenance and activated again the error disappears.
>
> My oVirt version is 4.3.7.
> Cluster switch type is "Linux Bridge".
>
> This may seem a minor error, but since it affects my production clusters
> and a couple of VLANs, I can't afford to play around with host network
> configuration. Can anybody explain this? Any help would be appreciated.
>
>

Does "Refresh Capabilities" resolve the issue?


> thx
> Matthias


[ovirt-users] Re: [ovirt-devel] oVirt and Fedora

2020-05-19 Thread Neal Gompa
On Mon, May 11, 2020 at 11:45 AM Michal Skrivanek wrote:
>
>
>
> > On 11 May 2020, at 14:49, Neal Gompa wrote:
> >
> > On Mon, May 11, 2020 at 8:32 AM Nir Soffer wrote:
> >>
> >> On Mon, May 11, 2020 at 2:24 PM Neal Gompa wrote:
> >>>
> >>> As for the oVirt software keeping up with Fedora, the main problem
> >>> here has always been that people aren't integrating their software
> >>> into the distribution itself.
>
> it was never a good fit for oVirt to be part of other distributions. We had
> individual packages in Fedora in the past, but there are things which are
> hard to accept (like automatically enabling installed services, UIDs), and
> overall it's just too complex; we're rather a distribution than a simple app
> on top of a base OS.
>

None of those things are hard to do in Fedora. They're incredibly easy
to do. I know this because I've gone through this process before.

But fine, let's assume I consider this argument valid. Then there's
still no reason not to keep providing support for Fedora as an add-on,
as you have before.

> >>> That's how everything can get tested together. And this comes back to the 
> >>> old bug about fixing vdsm so that it doesn't use /rhev, but instead 
> >>> something FHS-compliant (RHBZ#1369102). Once that is resolved, pretty 
> >>> much the entire stack can go into Fedora. And then you benefit from the 
> >>> Fedora community being able to use, test, and contribute to the oVirt 
> >>> project. As it stands, why would anyone do this for you when you don't 
> >>> even run on the cutting edge platform that feeds into Red Hat Enterprise 
> >>> Linux?
> >>
> >> This was actually fixed a long time ago, with this commit:
> >> https://github.com/oVirt/vdsm/commit/67ba9c4bc860840d6e103fe604b16f494f60a09d
> >>
> >> You can configure a compatible vdsm that does not use /rhev.
> >>
> >> Of course it is not backward compatible; for this we need much more
> >> work to support live migration between old and new vdsm using
> >> different data-center configurations.
> >>
> >
> > It'd probably be simpler to just *change* it to an FHS-compatible path
> > going forward with EL8 and Fedora and set up a migration path there,
> > but it's a bit late for that... :(
>
> It wouldn’t. We always support live migration across several versions (now
> it's 4.2-4.4) and it needs to stay the same, or you have to go with arcane
> code to mangle it back and forth, which gets a bit ugly when you consider
> suspend/resume, snapshots, etc.
>

Erk. At some point you need to bite the bullet though...



--
真実はいつも一つ!/ Always, there's only one truth!


[ovirt-users] Host "Unassigned Logical Networks" list is wrong

2020-05-19 Thread Matthias Leopold

Hi,

I'm having a special issue with assigning logical networks to host 
interfaces via label.
I'm used to giving logical networks (tagged VLANs, VM network flag) a 
"Network label", which I can choose from a drop-down menu in the UI 
dialog. These networks are automatically flagged "assign" and "require" 
in my cluster, and I expect them to be synced to the host's physical 
interface to which I "drag and dropped" the label. This always seemed 
to work and I never worried.


Now I noticed, when I look at the host's "Setup Host Networks" dialog 
again, that the last(?) logical networks I provisioned as explained 
above show up as "Unassigned Logical Networks" with a mouse-over text of 
"Network should be assigned to 'foo' via label 'bar'. However for some 
reason it isn't". This has to be some presentation error, because the 
networks are in fact assigned, which is also visible in the host's 
"Network Interfaces" tab.


The "Sync All Network" buttons in "Host" - "Network Interfaces" and 
"Cluster" - "Logical Networks" tabs are inactive.

When hosts are put to maintenance and activated again the error disappears.

My oVirt version is 4.3.7.
Cluster switch type is "Linux Bridge".

This may seem a minor error, but since it affects my production clusters 
and a couple of VLANs, I can't afford to play around with host network 
configuration. Can anybody explain this? Any help would be appreciated.


thx
Matthias




[ovirt-users] Re: Upgrade Memory of oVirt Nodes

2020-05-19 Thread Strahil Nikolov via Users
On May 19, 2020 11:16:35 AM GMT+03:00, souvaliotima...@mail.com wrote:
>Hello everyone, 
>
>I have an oVirt 4.3.2.5 hyperconverged 3-node production environment,
>and we want to add some RAM to it.
>
>Can I upgrade the RAM without my users noticing any disruptions and
>keep the VMs running?
>
>The way I thought I should do it was to migrate any running VMs to the
>other nodes, then set one node in maintenance mode, shut it down,
>install the new memory, bring it back up, remove it from maintenance
>mode, see how the installation reacts, and repeat for the other two
>nodes. Is this correct or should I follow another way?
>
>Will there be a problem during the time when the nodes are not
>identical in their resources?
>
>Thank you for your time,
>Souvalioti Maria 

There is no requirement that your nodes have the same amount of RAM.
At least my setup doesn't have the same RAM on each node.

The problem with unequal nodes is the scheduling policy, which reminds me to 
ask: are there any guides for setting a scheduling policy based on memory 
usage?

Best Regards,
Strahil Nikolov


[ovirt-users] Re: [Feedback needed] oVirt 4.4.0 Test week

2020-05-19 Thread Jiří Sléžka
Hi,

On 5/18/20 6:26 PM, Sandro Bonazzola wrote:
> 
> 
> On Mon, May 18, 2020 at 18:12 Jiří Sléžka wrote:
> 
> Hi,
> 
> I am a bit late, but today I tried to install a single-host HCI with
> gluster.
> 
> I am not sure if it is currently supported, but:
> 
> * I installed
> 
> dnf install
> https://resources.ovirt.org/pub/yum-repo/ovirt-release44-pre.rpm
> dnf module enable -y javapackages-tools pki-deps postgresql:12 389-ds
> 
> 
> these modules are not needed on the host; they are needed only on the
> standalone engine / within the engine appliance

ok, thanks for the info

> dnf install ovirt-engine-appliance vdsm-gluster
> 
> 
> May I ask which installation guide you are following? I'm pretty sure
> it's outdated and needs a refresh.

I have followed mostly

https://ovirt.org/documentation/gluster-hyperconverged/Gluster_Hyperconverged_Guide.html

but also my own notes...

I know this guide is for 4.3, but I didn't find a relevant guide for 4.4.

> * I prepared the host with gdeploy (from an external Fedora workstation
> because I didn't find gdeploy in any CentOS 8 repo)
> 
> 
> gdeploy was deprecated in favor of the gluster-ansible roles about 2
> years ago

ok, I told you I am a bit late ;-)

If there is more relevant documentation, please share a link. I would
like to help write a guide/blogpost about the 4.4 install, but probably
in the Czech language...

Thanks in advance,

Jiri

> * I tried to install oVirt via
> 
> ovirt-hosted-engine-setup
> 
> ...but process failed with
> 
> [ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg":
> "Unable to start service libvirtd: Job for libvirtd.service failed
> because the control process exited with error code.\nSee \"systemctl
> status libvirtd.service\" and \"journalctl -xe\" for details.\n"}
> [ ERROR ] Failed to execute stage 'Closing up': Failed executing
> ansible-playbook
> 
> In the messages log the following is mentioned:
> 
> ...
> May 18 17:04:40 ovirt-hci01 journal[8062]: Cannot read CA certificate
> '/etc/pki/CA/cacert.pem': No such file or directory
> May 18 17:04:40 ovirt-hci01 systemd[1]: libvirtd.service: Main process
> exited, code=exited, status=6/NOTCONFIGURED
> ...
> 
> There is no /etc/pki/CA directory. I think the CA should be installed
> during ovirt-hosted-engine-setup, shouldn't it? Should I do that
> manually? How?
> 
> Cheers,
> 
> Jiri
> 
> 
> On 5/8/20 1:23 PM, Sandro Bonazzola wrote:
> > Hi,
> > the oVirt team is planning to release oVirt 4.4.0 GA in the next couple of
> > weeks.
> > oVirt 4.4.0 release candidate was released yesterday and we'd like to
> > gather as much feedback as possible.
> > Please join us testing this release candidate next week, starting Sunday
> > May 10th 2020 till Friday May 15th 2020!
> > We are going to coordinate the testing effort with a public Trello board
> > at https://trello.com/b/5ZNJgPC3/ovirt-440-test-day
> > You'll find instructions on how to use the board there.
> > For joining the board you can use this link:
> > https://trello.com/invite/b/5ZNJgPC3/f1b1826ee4902f348c44607765a15099/ovirt-440-test-day
> >
> > If you do not have an environment dedicated to testing, remember you can
> > set up a few VMs and test the deployment with nested virtualization,
> > using your production environment to create a virtual test environment.
> > In this case please be careful to avoid touching the production
> > environment from the testing one.
> >
> > The oVirt team will monitor the Trello board, the #ovirt IRC channel on
> > the irc.oftc.net server and the users@ovirt.org mailing list to assist
> > with the testing and debugging issues.
> > Basic instructions for setting up a minimal system are available in the
> > release candidate announcement at:
> > https://lists.ovirt.org/archives/list/annou...@ovirt.org/message/3QORBKVKTALNJ5SMJHEDO4QJ5YUCULTT/attachment/3/attachment.html
> > Release notes for this release candidate are available
> > here: https://ovirt.org/release/4.4.0/
> >
> > Thanks
> > --
> >
> > Sandro Bonazzola
> >
> > MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
> >
> > Red Hat EMEA
> >
> > sbona...@redhat.com
> >
> > Red Hat respects your work life balance. Therefore there is no need to
> > answer this email out of your office hours.

[ovirt-users] Upgrade Memory of oVirt Nodes

2020-05-19 Thread souvaliotimaria
Hello everyone, 

I have an oVirt 4.3.2.5 hyperconverged 3-node production environment, and we 
want to add some RAM to it.

Can I upgrade the RAM without my users noticing any disruptions and keep the 
VMs running?

The way I thought I should do it was to migrate any running VMs to the other 
nodes, then set one node in maintenance mode, shut it down, install the new 
memory, bring it back up, remove it from maintenance mode, see how the 
installation reacts, and repeat for the other two nodes. Is this correct or 
should I follow another way?

Will there be a problem during the time when the nodes are not identical in 
their resources?

Thank you for your time,
Souvalioti Maria 