Re: [ovirt-users] Can't move/copy VM disks between Data Centers

2018-03-04 Thread Andrei V
Hi,

On 02/27/2018 04:29 PM, Fred Rolland wrote:
> Hi,
>
> Just to make clear what you want to achieve:
> - DC1 - local storage - host1 - VMs
> - DC2 - local storage - host2
>
> You want to move the VMs from DC1 to DC2.

Yes, thanks, this is exactly what I want to accomplish.
BTW, are export domains from different data centers visible to each other?

If not, wouldn't it be simpler to export VM #1 in DC #1 to Export domain #1,
copy it over ssh to DC #2's Export domain, and finally import it into DC #2?
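If it comes to that, the ssh copy step could look roughly like this. This is an untested sketch: the mount point, hostnames and target path are placeholders, and the layout assumption is that an export domain keeps VM OVF metadata under master/vms and disk images under images.

```shell
# On host1: export the VM to export domain #1 via the UI, then copy the
# domain contents to DC #2's export domain over ssh (placeholder paths):
rsync -avP /rhev/data-center/mnt/<export1-mount>/<domain-uuid>/ \
      host2:/exports/export2/<domain-uuid>/
# In DC #2: attach and activate its export domain, then use "Import VM".
```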

PS. I can't test this right now myself; I'm at home on sick leave.
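Fred's attach/detach flow quoted below can also be scripted. A rough, untested sketch with the Python SDK (ovirtsdk4); the engine URL, credentials, and domain/DC names are assumptions, and real code should poll for the domain to actually reach maintenance before detaching:

```python
import ovirtsdk4 as sdk
import ovirtsdk4.types as types

connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',  # placeholder
    username='admin@internal',
    password='secret',                                  # placeholder
    insecure=True,
)
system = connection.system_service()

# Locate the shared storage domain and both data centers by name.
sd = system.storage_domains_service().list(search='name=shared_sd')[0]
dc1 = system.data_centers_service().list(search='name=DC1')[0]
dc2 = system.data_centers_service().list(search='name=DC2')[0]

# Maintenance + detach from DC1 (should wait for maintenance in real code).
dc1_sds = system.data_centers_service().data_center_service(dc1.id).storage_domains_service()
dc1_sds.storage_domain_service(sd.id).deactivate()
dc1_sds.storage_domain_service(sd.id).remove()

# Attach + activate in DC2; then VMs can be registered from the domain.
dc2_sds = system.data_centers_service().data_center_service(dc2.id).storage_domains_service()
dc2_sds.add(types.StorageDomain(id=sd.id))
dc2_sds.storage_domain_service(sd.id).activate()
connection.close()
```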

>
> What you can do:
> - Add a shared storage domain to the DC#1
> - Move VM disk from local SD to shared storage domain
> - Put shared storage domain to maintenance
> - Detach shared storage from DC1
> - Attach shared storage to DC2
> - Activate shared storage
> - You should be able to register the VM from the shared storage into
> the DC2
> - If you want/need, move the disks from the shared storage to local storage in DC2
>
> Please test this flow with a dummy VM before doing on important VMs.
>
> Regards,
>
> Freddy
>
> On Mon, Feb 26, 2018 at 1:46 PM, Andrei Verovski <andre...@starlett.lv> wrote:
>
> Hi,
>
> Thanks for the clarification. I’m using 4.2.
> Anyway, I have to define another data center with a shared storage
> domain (since a data center with a local storage domain can have only
> 1 host), and then do what you have described.
>
> Is it possible to copy VM disks from data center #1's local
> storage domain to data center #2's NFS storage domain, or do I
> need to use an export storage domain?
>
>
>
>> On 26 Feb 2018, at 13:30, Fred Rolland <froll...@redhat.com> wrote:
>>
>> Hi,
>> Which version are you using?
>>
>> In 4.1, support for adding shared storage to a local DC was
>> added [1].
>> You can copy/move disks to the shared storage domain, then detach
>> the SD and attach to another DC.
>>
>> In any case, you won't be able to live migrate VMs from the local
>> DC; it is not supported.
>>
>> Regards,
>> Fred
>>
>> [1]
>> 
>> https://ovirt.org/develop/release-management/features/storage/sharedStorageDomainsAttachedToLocalDC/
>> 
>>
>> On Fri, Feb 23, 2018 at 1:35 PM, Andrei V <andre...@starlett.lv> wrote:
>>
>> Hi,
>>
>> I have an oVirt setup: a separate PC hosting the engine + 2 nodes (#10
>> and #11) with local storage domains (internal RAIDs).
>> The 1st node, #10, is currently active and can’t be turned off.
>>
>> Since oVirt doesn’t support more than one host in a data center
>> with a local storage domain, as described here:
>> http://lists.ovirt.org/pipermail/users/2018-January/086118.html
>> I defined another data center with one node, #11.
>>
>> Problem:
>> 1) I can’t copy or move VM disks from node #10 (even those of
>> inactive VMs) to node #11; that node is NOT shown as a
>> possible destination.
>> 2) I can’t migrate active VMs to node #11.
>> 3) I added NFS shares to data center #1 -> node #10, but can’t
>> change data center #1 -> storage type to Shared, because this
>> operation requires detaching the local storage domains, which
>> is not possible while several VMs are active and can’t be stopped.
>>
>> VM disks are placed on local storage domains because of the
>> performance limitations of our 1Gbit network.
>> The 2 VMs running our accounting/inventory control system are too
>> performance-critical for NFS storage.
>>
>> How can I solve this problem?
>> Thanks in advance.
>>
>> Andrei
>>
>>
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>>
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Can't move/copy VM disks between Data Centers

2018-02-23 Thread Andrei V
Hi,

I have an oVirt setup: a separate PC hosting the engine + 2 nodes (#10 and
#11) with local storage domains (internal RAIDs).
The 1st node, #10, is currently active and can’t be turned off.

Since oVirt doesn’t support more than one host in a data center with a local
storage domain, as described here:
http://lists.ovirt.org/pipermail/users/2018-January/086118.html

I defined another data center with one node, #11.

Problem:
1) I can’t copy or move VM disks from node #10 (even those of inactive VMs) to
node #11; that node is NOT shown as a possible destination.
2) I can’t migrate active VMs to node #11.
3) I added NFS shares to data center #1 -> node #10, but can’t change data
center #1 -> storage type to Shared, because this operation requires detaching
the local storage domains, which is not possible while several VMs are active
and can’t be stopped.

VM disks are placed on local storage domains because of the performance
limitations of our 1Gbit network.
The 2 VMs running our accounting/inventory control system are too
performance-critical for NFS storage.

How can I solve this problem?
Thanks in advance.

Andrei

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Requirements for basic host

2018-02-16 Thread Andrei V

Nope, you don’t need an additional engine.


> On 16 Feb 2018, at 15:45, Mark Steele  wrote:
> 
> Hello again,
> 
> I'm building a new host for my cluster and have a quick question about 
> required software for joining the host to my cluster.
> 
> In my notes from a previous colleague, I am instructed to do the following:
> 
> yum install http://resources.ovirt.org/pub/yum-repo/ovirt-release35.rpm 
> 
> yum install ovirt-hosted-engine-setup
> hosted-engine --deploy
> We already have a HostedEngine running on another server in the cluster - so 
> do I need to install ovirt-hosted-engine-setup and then deploy it for this 
> server to join the cluster and operate properly?
> 
> As always - thank you for your time.
> 
> ***
> Mark Steele
> CIO / VP Technical Operations | TelVue Corporation
> TelVue - We Share Your Vision
> 16000 Horizon Way, Suite 100 | Mt. Laurel, NJ 08054
> 800.885.8886 x128 | mste...@telvue.com  | 
> http://www.telvue.com 
> twitter: http://twitter.com/telvue  | facebook: 
> https://www.facebook.com/telvue 
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Sparsify in 4.2 - where it moved ?

2018-02-15 Thread Andrei V
Hi !


I can’t locate the “Sparsify” disk-image command anywhere in oVirt 4.2.
Where has it been moved?


Thanks
Andrei


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Q: Upgrade 4.2 -> 4.2.1 Dependency Problem

2018-02-14 Thread Andrei V
Hi !

I ran into an unexpected problem upgrading an oVirt node (installed manually
on CentOS). This problem has to be fixed manually, otherwise the upgrade
command from the host engine also fails.

-> glusterfs-rdma = 3.12.5-2.el7
was installed manually to resolve a dependency of
ovirt-host-4.2.1-1.el7.centos.x86_64

Q: How can I get around this problem? Thanks in advance.


Error: Package: ovirt-host-4.2.1-1.el7.centos.x86_64 (ovirt-4.2)
   Requires: glusterfs-rdma
   Removing: glusterfs-rdma-3.12.5-2.el7.x86_64 
(@ovirt-4.2-centos-gluster312)
   glusterfs-rdma = 3.12.5-2.el7
   Obsoleted By: 
mlnx-ofa_kernel-3.4-OFED.3.4.2.1.5.1.ged26eb5.1.rhel7u3.x86_64 (HP-spp)
   Not found
   Available: glusterfs-rdma-3.8.4-18.4.el7.centos.x86_64 (base)
   glusterfs-rdma = 3.8.4-18.4.el7.centos
   Available: glusterfs-rdma-3.12.0-1.el7.x86_64 
(ovirt-4.2-centos-gluster312)
   glusterfs-rdma = 3.12.0-1.el7
   Available: glusterfs-rdma-3.12.1-1.el7.x86_64 
(ovirt-4.2-centos-gluster312)
   glusterfs-rdma = 3.12.1-1.el7
   Available: glusterfs-rdma-3.12.1-2.el7.x86_64 
(ovirt-4.2-centos-gluster312)
   glusterfs-rdma = 3.12.1-2.el7
   Available: glusterfs-rdma-3.12.3-1.el7.x86_64 
(ovirt-4.2-centos-gluster312)
   glusterfs-rdma = 3.12.3-1.el7
   Available: glusterfs-rdma-3.12.4-1.el7.x86_64 
(ovirt-4.2-centos-gluster312)
   glusterfs-rdma = 3.12.4-1.el7



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Maximum time node can be offline.

2018-02-11 Thread Andrei V
On 02/10/2018 05:47 PM, Thomas Letherby wrote:
> That's exactly what I needed to know, thanks all.
>
> I'll schedule a script for the nodes to reboot and patch once every week or
> two and then I can let it run without me needing to worry about it.

Is this a shell or Python script that connects to the oVirt engine? My 4.2
node doesn't shut down properly until I shut down all VMs and take it into
maintenance mode manually. The shutdown process freezes at a certain point.

>
> Thomas
>
> On Fri, Feb 9, 2018, 2:26 AM Martin Sivak  wrote:
>
>> Hi,
>>
>> the hosts are almost stateless and we set up most of what is needed
>> during activation. Hosted engine has some configuration stored
>> locally, but that is just the path to the storage domain.
>>
>> I think you should be fine unless you change the network topology
>> significantly. I would also install security updates once in while.
>>
>> We can even shut down the hosts for you when you configure two cluster
>> scheduling properties: EnableAutomaticPM and HostsInReserve.
>> HostsInReserve should be at least 1 though. It behaves like this, as
>> long as the reserve host is empty, we shut down all the other empty
>> hosts. And we boot another host once a VM does not fit on other used
>> hosts and is placed on the running reserve host. That would save you
>> the power of just one host, but it would still be highly available (if
>> hosted engine and storage allows that too).
>>
>> Bear in mind that single host cluster is not highly available at all.
>>
>> Best regards
>>
>> Martin Sivak
>>
>> On Fri, Feb 9, 2018 at 8:25 AM, Gianluca Cecchi
>>  wrote:
>>> On Fri, Feb 9, 2018 at 2:30 AM, Thomas Letherby 
>> wrote:
 Thanks, that answers my follow up question! :)

 My concern is that I could have a host off-line for a month say, is that
 going to cause any issues?

 Thanks,

 Thomas

>>> I think that if in the mean time you don't make any configuration changes
>>> and you don't update anything, there is no reason to have problems.
>>> In case of changes done, it could depend on what they are: are you
>> thinking
>>> about any particular scenario?
>>>
>>>
>>> ___
>>> Users mailing list
>>> Users@ovirt.org
>>> http://lists.ovirt.org/mailman/listinfo/users
>>>
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] oVirt CLI Question

2018-02-07 Thread Andrei V
Hi,

How can I force power off, and then launch (after a timeout, e.g. 20 sec), a
particular VM from a bash or Python script?

Is 20 sec enough for the oVirt engine to be updated after a forced power off?
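Whether 20 sec is enough depends on how quickly the engine notices the state change; instead of a fixed sleep, a script can poll until a condition holds. A self-contained sketch of just the polling helper (`vm_is_down` is a stand-in; a real script would query the engine API there):

```python
import time

def wait_until(predicate, timeout=60.0, interval=2.0):
    """Poll predicate() until it returns True, or give up after timeout seconds."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if predicate():
            return True
        time.sleep(interval)
    return False

# Simulate "the engine reports the VM as down" with a stand-in predicate;
# a real script would ask the engine REST API here instead.
state = {"polls": 0}
def vm_is_down():
    state["polls"] += 1
    return state["polls"] >= 3

assert wait_until(vm_is_down, timeout=5.0, interval=0.01)
```

Once `wait_until` returns True, the script can safely start the VM again instead of hoping a fixed delay was long enough.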

What happened to this wiki? It seems to have been deleted or moved.
http://wiki.ovirt.org/wiki/CLI#Usage

Is this project part of the oVirt distribution? It looks to be in active
development, with the last updates 2 months ago.
https://github.com/fbacchella/ovirtcmd

Thanks !
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Problem - Ubuntu 16.04.3 Guest Weekly Freezes

2018-02-06 Thread Andrei V
Hi !

I have a strange and annoying problem with one VM on oVirt node 4.2: weekly
freezes of an Ubuntu 16.04.3 guest with ISPConfig 3.1 active.
ISPConfig is a web GUI frontend (written in PHP) to Apache, Postfix, Dovecot,
Amavis, Clam and ProFTPd.
Separate engine PC, not a hosted engine.

Ubuntu 16.04.3 LTS (Xenial Xerus), 2 cores allocated, 8 GB RAM (only a
fraction is being used).
kernel 4.13.0-32-generic
6300ESB Watchdog Timer

Memory ballooning is disabled, and there are always about 7 GB of free RAM left.
4 VMs are active; CPU load on the node is low.
I tried several kernel versions, no change.

I can’t trace any problem in the logs on the Ubuntu guest. Even the 6300ESB
watchdog timer configured to reset does nothing (which is really strange).
The VM stops responding even to pings, and the VM screen is also frozen.
The oVirt engine no longer displays the IP address, which means
ovirt-guest-agent is dead.

The VM is in a DMZ and is not connected to ovirtmgmt, but rather to a bridged
Ethernet interface.
In oVirt I have defined a network named "DMZ Node10-NIC2”.
On node:
cd /etc/sysconfig/network-scripts/
tail ifcfg-enp3s4f1

DEVICE=enp3s4f1
BRIDGE=ond04ad91e59c14
ONBOOT=yes
MTU=1500
DEFROUTE=no
NM_CONTROLLED=no
IPV6INIT=no

Googling doesn’t show anything useful except an attempt to change the kernel
version, which I already did.

1) Any idea how to fix this freeze?

2) While the problem is not fixed, I can create a cron script on the oVirt
engine PC to handle the stubborn VM.
Q: How can I force power off, and then launch (after a timeout, e.g. 20 sec),
this VM from a bash or Python script?
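For the cron-script part, a rough, untested sketch with the oVirt Python SDK (ovirtsdk4); the engine URL, credentials and VM name are placeholders, and a real script would poll VM status rather than sleep a fixed time:

```python
import time
import ovirtsdk4 as sdk

connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',  # placeholder
    username='admin@internal',
    password='secret',                                  # placeholder
    insecure=True,                                      # or ca_file='...'
)
vms_service = connection.system_service().vms_service()
vm = vms_service.list(search='name=stubborn-vm')[0]     # placeholder VM name
vm_service = vms_service.vm_service(vm.id)

vm_service.stop()   # force power off, unlike a guest-cooperative shutdown
time.sleep(20)      # crude; better to poll until the engine reports "down"
vm_service.start()
connection.close()
```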

Thanks in advance for any help.
Andrei

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Re 2: q: lldpad.service needed for oVirt node (bug # 1401409) ?

2018-01-22 Thread Andrei V
On 01/22/2018 07:15 PM, Dominik Holler wrote:
> Yes, the service is required to provide LLDP information and is a
> dependency of the vdsm-network package.
> Is this uncomfortable for you?

It crashes on my CentOS 7.

The node has 2 bridged interfaces, and it seems this bug is recorded here:
https://bugzilla.redhat.com/show_bug.cgi?id=1401409

[root@node10 ~]# journalctl -u lldpad.service
-- Logs begin at P  2018-01-22 18:28:47 EET, end at P  2018-01-22
21:24:00 EET. --
jan 22 18:29:40 node10.domain.com systemd[1]: Started Link Layer
Discovery Protocol Agent Daemon..
jan 22 18:29:40 node10.domain.com systemd[1]: Starting Link Layer
Discovery Protocol Agent Daemon
jan 22 18:32:34 node10.domain.com lldpad[1821]: recvfrom(Event
interface): No buffer space available
jan 22 18:32:35 node10.domain.com lldpad[1821]: recvfrom(Event
interface): No buffer space available
jan 22 18:32:50 node10.domain.com lldpad[1821]: recvfrom(Event
interface): No buffer space available
jan 22 18:32:55 node10.domain.com lldpad[1821]: recvfrom(Event
interface): No buffer space available
jan 22 19:08:34 node10.domain.com lldpad[1821]: lldpad: lldp/rx.c:142:
rxProcessFrame: Assertion `agent->rx.framein && agent->rx
jan 22 19:08:34 node10.domain.com systemd[1]: lldpad.service: main
process exited, code=dumped, status=6/ABRT
jan 22 19:08:34 node10.domain.com systemd[1]: Unit lldpad.service
entered failed state.
jan 22 19:08:34 node10.domain.com systemd[1]: lldpad.service failed.


abrt-cli list --since 1516639049
id dfc13676b38d3173315ac7c1c4c71a9d82acd9c7
reason: lldpad killed by SIGABRT
time:   piektdiena, 2018. gada 12. janvāris, plkst. 00 un 28
cmdline:    /usr/sbin/lldpad -t
package:    lldpad-1.0.1-3.git036e314.el7
uid:    0 (root)
count:  4
Directory:  /var/tmp/abrt/ccpp-2018-01-12-00:28:02-1779


> On Mon, 22 Jan 2018 11:31:39 +0200
> Andrei V <andre...@starlett.lv> wrote:
>
>> Hi,
>>
>> Is lldpad.service (Link Layer Discovery Protocol) needed for an oVirt
>> node?
>>
>> Thanks
>>
>>
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] q: lldpad.service needed for oVirt node ?

2018-01-22 Thread Andrei V
On 01/22/2018 07:15 PM, Dominik Holler wrote:
> Yes, the service is required to provide LLDP information and is a
> dependency of the vdsm-network package.
> Is this uncomfortable for you?

It crashes on my CentOS 7.

abrt-cli list --since 1516639049
id dfc13676b38d3173315ac7c1c4c71a9d82acd9c7
reason: lldpad killed by SIGABRT
time:   piektdiena, 2018. gada 12. janvāris, plkst. 00 un 28
cmdline:    /usr/sbin/lldpad -t
package:    lldpad-1.0.1-3.git036e314.el7
uid:    0 (root)
count:  4
Directory:  /var/tmp/abrt/ccpp-2018-01-12-00:28:02-1779


> On Mon, 22 Jan 2018 11:31:39 +0200
> Andrei V <andre...@starlett.lv> wrote:
>
>> Hi,
>>
>> Is lldpad.service (Link Layer Discovery Protocol) needed for an oVirt
>> node?
>>
>> Thanks
>>
>>
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] New post on oVirt blog: Build oVirt Reports Using Grafana

2018-01-22 Thread Andrei V
On 01/22/2018 06:37 PM, Roy Golan wrote:
> We ship postgres 9.5 through the Software Collection. Its missing from your
> path by default, so in order to run psql you need to:
>
> su - postgres
> scl enable rh-postgresql95 -- psql engine engine

scl enable rh-postgresql95 -- psql engine engine
psql: FATAL:  Peer authentication failed for user "engine"

Sorry, should it be:
scl enable rh-postgresql95 -- psql engine

Do I need to re-enter everything for the read-only oVirt DWH user?

>
> On Mon, 22 Jan 2018 at 18:19 <andre...@starlett.lv> wrote:
>
>> On 22 Jan 2018, at 18:09, Mikhail Krasnobaev <mi...@ya.ru> wrote:
>>
>> Good day,
>>
>> i had the same issue, you can try running the commands directly from
>> postgres user:
>> su - postgres
>> And the run the commands from blog post.
>>
>>
>> I did everything as user postgres, otherwise it doesn’t work at all.
>>
>>
>> Best regards,
>>
>> Mikhail.
>>
>> 17:58, 22 января 2018 г., Andrei V <andre...@starlett.lv>:
>>
>> Hi !
>>
>> Thanks a lot for excellent addition !
>>
>> I’ve got a problem with connecting Grafana to oVirt 4.2
>> Followed this instruction and created read-only user usrmonovirt
>> #
>> https://www.ovirt.org/documentation/data-warehouse/Allowing_Read_Only_Access_to_the_History_Database/
>>
>> However, from Grafana the login always fails:
>> pq: Ident authentication failed for user “usrmonovirt"
>> host = localhost:5432, database = ovirt_engine_history, SSL = disable
>>
>> Resetting the password to another value or even NULL doesn’t help; I still
>> can’t connect.
>> psql -U postgres -c "ALTER ROLE usrmonovirt WITH PASSWORD ‘mypassword';"
>> -d ovirt_engine_history
>> psql -U postgres -c "ALTER ROLE usrmonovirt WITH PASSWORD NULL;”  -d
>> ovirt_engine_history
>>
>> What went wrong? I can’t understand.
>>
>> PS. It looks like the oVirt engine spawns PostgreSQL with its own data dir;
>> a standard restart via systemctl returns an error (data dir is empty). Yet
>> "ps -aux | grep postgres" shows the process is running.
>>
>> Thanks in advance for any suggestion(s).
>>
>>
>> On 21 Jan 2018, at 14:21, Shirly Radco <sra...@redhat.com> wrote:
>>
>> Hello everyone,
>>
>> A new oVirt blog post has been published, on how to build oVirt reports
>> using Grafana.
>> This allows connecting to oVirt DWH and create dashboards for System,
>> Hosts, VMs, Storage domains etc.
>>
>> See https://ovirt.org/blog/2018/01/ovirt-report-using-grafana/ for the
>> full post.
>>
>> For more information you can contact me.
>>
>> Best regards,
>>
>> --
>>
>> SHIRLY RADCO
>>
>> BI SENIOR SOFTWARE ENGINEER
>> Red Hat Israel <https://www.redhat.com/>
>> <https://red.ht/sig>
>> TRIED. TESTED. TRUSTED. <https://redhat.com/trusted>
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>>
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>>
>>
>> Sent from Yandex.Mail for mobile: http://m.ya.ru/ymail
>>
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] New post on oVirt blog: Build oVirt Reports Using Grafana

2018-01-22 Thread Andrei V
Hi !

Thanks a lot for excellent addition !

I’ve got a problem with connecting Grafana to oVirt 4.2
Followed this instruction and created read-only user usrmonovirt
# 
https://www.ovirt.org/documentation/data-warehouse/Allowing_Read_Only_Access_to_the_History_Database/

However, from Grafana the login always fails:
pq: Ident authentication failed for user “usrmonovirt"
host = localhost:5432, database = ovirt_engine_history, SSL = disable

Resetting the password to another value or even NULL doesn’t help; I still
can’t connect.
psql -U postgres -c "ALTER ROLE usrmonovirt WITH PASSWORD ‘mypassword';" -d 
ovirt_engine_history
psql -U postgres -c "ALTER ROLE usrmonovirt WITH PASSWORD NULL;”  -d 
ovirt_engine_history

I can’t understand what went wrong.

PS. It looks like the oVirt engine spawns PostgreSQL with its own data dir; a
standard restart via systemctl returns an error (data dir is empty). Yet
"ps -aux | grep postgres" shows the process is running.

Thanks in advance for any suggestion(s).


> On 21 Jan 2018, at 14:21, Shirly Radco  wrote:
> 
> Hello everyone, 
> 
> A new oVirt blog post has been published, on how to build oVirt reports using 
> Grafana.
> This allows connecting to oVirt DWH and create dashboards for System, Hosts, 
> VMs, Storage domains etc.
> 
> See https://ovirt.org/blog/2018/01/ovirt-report-using-grafana/ 
>  for the full 
> post.
> 
> For more information you can contact me.
> 
> Best regards,
> --
> SHIRLY RADCO
> BI SENIOR SOFTWARE ENGINEER
> Red Hat Israel 
>   
> TRIED. TESTED. TRUSTED. 
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Q: Core Reservation on Hyperthreaded CPU

2018-01-22 Thread Andrei V
Hi,


I’m currently running a node on an HP ProLiant with 2 x Xeon CPUs, 4 cores each.
Because of hyperthreading, each physical core is seen as 2.
How does KVM/oVirt reserve cores if, for example, I allocate 4 CPU cores to a VM?
Does it allocate 4 real CPU cores, or 2 cores with 2 threads each?

Each VM consumes very little CPU.
What happens if the number of cores consumed by VMs exceeds the real count of
physical CPU cores?
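As a rough model (hedged; exact behavior depends on cluster settings such as "Count Threads As Cores"): oVirt describes a VM's vCPUs as sockets x cores per socket x threads per core, and vCPUs are scheduled as ordinary host threads, so the total across VMs may exceed the physical core count (CPU overcommit). A tiny sketch of the arithmetic:

```python
def total_vcpus(sockets: int, cores_per_socket: int, threads_per_core: int = 1) -> int:
    """vCPUs a VM gets = sockets * cores per socket * threads per core."""
    return sockets * cores_per_socket * threads_per_core

# "4 CPU cores" can be modeled either way in the VM's CPU topology:
assert total_vcpus(4, 1) == 4        # 4 sockets x 1 core
assert total_vcpus(1, 2, 2) == 4     # 1 socket x 2 cores x 2 threads

# The host in question: 2 x Xeon, 4 physical cores each, hyperthreaded:
physical_cores = 2 * 4
logical_cpus = physical_cores * 2
assert logical_cpus == 16
```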

Thanks.
Andrei
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] q: lldpad.service needed for oVirt node ?

2018-01-22 Thread Andrei V
Hi,

Is lldpad.service (Link Layer Discovery Protocol) needed for an oVirt node?

Thanks


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Zombie kworker process on node took 100% CPU load

2018-01-18 Thread Andrei V
Hi,

I've got a strange problem with oVirt Node 4.2 installed on CentOS 7 with a
mainline 4.14 kernel on an HP ProLiant ML350.

A kworker process loaded one CPU core at 100%, and since it didn't finish even
after several hours, it was clearly a zombie.
I logged into each VM and powered them off with "shutdown now". From the
SPICE console the VMs could be seen to halt.

However, in the oVirt engine web UI they were marked with a triple green
down arrow. An attempt to shut them down completely with the oVirt button
resulted in an error message, something like "Can't shutdown VM because it is
highly available and can't be moved". Sorry, I forgot to take a screenshot.
It was also impossible to shut down the node correctly; the process froze.
After a restart, everything seems to work correctly.

Anyone have any idea how to cope with this kind of problem?

Thanks in advance
Andrei
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] [CORRECTED] Can't create 2nd network for VMs in DMZ zone

2018-01-07 Thread Andrei V
Hi !

Made a typo in previous message, sorry.

I'm having difficulty utilizing the 2nd NIC on an HP ProLiant server in order
to connect a guest VM to the DMZ (the engine, node and all other guest VMs are
on the internal zone).

I tried to use PCI passthrough: downed ifcfg-enp3s4f1 on the node host,
created a vNIC profile "PCI NIC Passthrough" and a network named "Node10-NIC2",
then set Virtual Machines -> Network Interfaces -> nic 2 to type PCI
Passthrough with the profile above.
Unfortunately, the guest OS (SuSE Leap 42.3) still doesn't recognize this NIC.

This 2nd NIC is visible under Compute -> Hosts -> Network Interfaces.
I can assign it the network named "Node10-NIC2". However, right now I'm
working remotely; if the node network setup gets screwed up, it will be a big
problem.
In the archive I found that this 2nd NIC must be up with no IP and no default
route. Is this a correct definition of ifcfg-enp3s4f1?
NetworkManager is disabled on the node.

TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=none
DEFROUTE=no
IPV4_FAILURE_FATAL=no
IPV6INIT=no
NM_CONTROLLED=no
NAME=enp3s4f1
UUID=09f96366-e4a9-4e6e-a090-7267ed102d36
DEVICE=enp3s4f1
ONBOOT=yes

Thanks in advance
Andrei V
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Can't create 2nd network for VMs in DMZ zone

2018-01-07 Thread Andrei V
Hi !

I'm having difficulty utilizing the 2nd NIC on an HP ProLiant server in order
to connect a guest VM to the DMZ (the engine, node and all other guest VMs are
on the internal zone).

I tried to use PCI passthrough: downed ifcfg-enp3s4f1 on the node host,
created a vNIC profile "PCI NIC Passthrough" and a network named "Node10-NIC2",
then set Virtual Machines -> Network Interfaces -> nic 2 to type PCI
Passthrough with the profile above.
Unfortunately, the guest OS (SuSE Leap 42.3) still doesn't recognize this NIC.

This 2nd NIC is visible under Compute -> Hosts -> Network Interfaces.
I can assign it the network named "Node10-NIC2". However, right now I'm
working remotely; if the node network setup gets screwed up, it will be a big
problem.
In the archive I found that this 2nd NIC must be up with no IP and no default
route. Is this a correct definition of ifcfg-enp3s4f1?
NetworkManager is disabled on the node.

TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=none
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=no
NM_CONTROLLED=no
NAME=enp3s4f1
UUID=09f96366-e4a9-4e6e-a090-7267ed102d36
DEVICE=enp3s4f1
ONBOOT=yes

Thanks in advance
Andrei V
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Q: GlusterFS and libgfapi

2018-01-01 Thread Andrei V
Hi,

Is it correct that starting from oVirt 4.2 a GlusterFS storage domain is used
with libgfapi instead of FUSE (based on the info linked below)?

https://www.ovirt.org/develop/release-management/features/storage/glusterfs-storage-domain/

https://gerrit.ovirt.org/#/c/44061/

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Q: 2-Node Failover Setup - NFS or GlusterFS ?

2018-01-01 Thread Andrei V
On 01/01/2018 10:10 AM, Yaniv Kaul wrote:
>
> On Mon, Jan 1, 2018 at 12:50 AM, Andrei V <andre...@starlett.lv
> <mailto:andre...@starlett.lv>> wrote:
>
> Hi !
>
> I'm installing 2-node failover cluster (2 x Xeon servers with
> local RAID
> 5 / ext4 for oVirt storage domains).
> Now I have a dilemma - use either GlusterFS replica 2 or stick
> with NFS?
>
>
> Replica 2 is not good enough, as it can leave you with split brain.
> It's been discussed in the mailing list several times.
> How do you plan to achieve HA with NFS? With drbd?
Hi, Yaniv,
Thanks a lot for detailed explanation!

I know replica 2 is not an optimal solution.
Right now I have only 2 servers with internal RAIDs for the nodes, and by the
end of this week the system has to be running in whatever condition.
Maybe it's better to use a local storage domain on each node, set up an export
domain on the backup node, and back up VMs to the 2nd (backup) node at timed
intervals?
It's not a highly available solution, yet a workable one.

> 4.2 Engine is running on separate hardware.
>
>
> Is the Engine also highly available?

It's a KVM appliance; it could be launched on 2 SuSE servers.

> Each node have its own storage domain (on internal RAID).
>
>
> So some sort of replica 1 with geo-replication between them?

Could it be the following?
1) A local storage domain on each node
2) GlusterFS geo-replication over these directories? Not sure this will work.

>
> All VMs must be highly available.
>
>
> Without shared storage, it may be tricky.

It seems a timely VM backup to the 2nd node is enough for now.
With the current hardware, anything beyond that is too cumbersome to set up.

>
> One of the VMs is an accounting/stock control system with FireBird SQL
> server on CentOS is speed-critical.
>
>
> But is IO the bottleneck? Are you using SSDs / NVMe drives? 
> I'm not familiar enough with FireBird SQL server - does it have an
> application layer replication you might opt to use?
> In such case, you could pass-through a NVM disk and have the
> application layer perform the replication between the nodes.
>  
>
> No load balancing between nodes necessary. 2nd is just for backup
> if 1st
> for whatever reason goes up in smoke. All VM disks must be
> replicated to
> backup node in near real-time or in worst case each 1 - 2 hour.
> GlusterFS solves this issue yet at high performance penalty.
>
>
> The problem with a passive backup is that you never know it'll really
> work when needed. This is why active-active is many time preferred.
> It's also more cost effective usually - instead of some HW lying around.
>  
>
>
> From what I read here
> http://lists.ovirt.org/pipermail/users/2017-July/083144.html
> GlusterFS performance with oVirt is not very good right now
> because QEMU
> uses FUSE instead of libgfapi.
>
> What is optimal way to go on ?
>
>
> It's hard to answer without additional details.
> Y.
>  
>
> Thanks in advance.
> Andrei
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Q: 2-Node Failover Setup - NFS or GlusterFS ?

2017-12-31 Thread Andrei V
Hi !

I'm installing a 2-node failover cluster (2 x Xeon servers with local RAID 5
/ ext4 for the oVirt storage domains).
Now I have a dilemma: use GlusterFS replica 2, or stick with NFS?

The 4.2 engine is running on separate hardware.
Each node has its own storage domain (on an internal RAID).

All VMs must be highly available.
One of the VMs, an accounting/stock control system with a FireBird SQL server
on CentOS, is speed-critical.
No load balancing between the nodes is necessary; the 2nd is just a backup in
case the 1st for whatever reason goes up in smoke. All VM disks must be
replicated to the backup node in near real time, or in the worst case every
1-2 hours.
GlusterFS solves this issue, yet at a high performance penalty.

From what I read here:
http://lists.ovirt.org/pipermail/users/2017-July/083144.html
GlusterFS performance with oVirt is not very good right now, because QEMU
uses FUSE instead of libgfapi.

What is optimal way to go on ?
Thanks in advance.
Andrei

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Q: oVirt and NFS / GlusterFS High-Availability File Sync Algorithm

2017-12-21 Thread Andrei V
Hi,

I would like to ask how the (almost) high-availability file sync algorithm
works with a very simple setup:
2 nodes (1 main and 1 backup) + the engine on a separate PC;
each node is a Xeon server with RAID level 5 for the data domain.

With GlusterFS, a copy of each VM's disk(s) is stored in each node's data
domain and kept up-to-date automatically by the GlusterFS sync process.
How does this work if the data domains are NFS data domains / volumes?

My current setup is really simple and doesn't need to be complicated any
further. The 1st (main) node runs all the time; the 2nd (backup) is activated
only if the 1st suffers, for example, a hardware failure. In any case, both
data domains linked to the 2 nodes must be kept fully synced.
Right now I have finished setting up only the 1st node (with CentOS 7, not the
oVirt node appliance, which wipes out all custom software changes upon
upgrade). The node data domain runs on GlusterFS.
Upon finishing setup of the 2nd node (with GlusterFS), they will be joined as
replica 2, and all VMs will be marked as highly available.

Since during normal operation all VM disks are active on only one node and
synced to the 2nd (backup), I hope to avoid the GlusterFS split-brain
issue.

Please correct me if I'm wrong here. It's quite possible I don't entirely
understand all the underlying algorithms.
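Gluster's own tooling can confirm whether a replica is in sync or has drifted into split-brain. Assuming a volume named `vmstore` (a hypothetical name), the usual checks are:

```shell
# List files still pending self-heal (empty output means in sync).
gluster volume heal vmstore info

# List files currently in split-brain, if any.
gluster volume heal vmstore info split-brain

# Overall per-brick status of the volume.
gluster volume status vmstore
```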

Thanks in advance
Andrei


[ovirt-users] Q: oVirt Node Update Wiped my Updates

2017-12-18 Thread Andrei V
Hi!

Today I updated oVirt Node 4.1 via yum; it was installed from the oVirt
DVD. The update was a minor version change.
Now all the software I had installed, including mc, Samba, etc., has been
lost. It looks like oVirt Node is updated with a whole ~600 MB system
image, not just with RPMs (please correct me if I'm wrong here).
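That is indeed roughly how oVirt Node works: updates arrive as whole images managed by imgbased, and each update becomes a fresh LVM-backed layer, so RPMs installed into the previous layer do not automatically carry over. A sketch of how to inspect this on the node (assuming oVirt Node 4.1; command output format may differ between versions):

```shell
# Show the imgbased layout: available image layers and which one is booted.
nodectl info

# List the LVM thin volumes backing each image layer.
lvs -o lv_name,pool_lv,origin
```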

Keeping my manually installed software is crucial, since it includes UPS
and hardware RAID monitoring tools.

Can anyone say whether this is normal behavior or a one-off glitch?

So far I have found only this instruction for node installation:
https://www.ovirt.org/documentation/install-guide/chap-oVirt_Nodes/

And it uses a pre-made DVD ISO, not a manual install from the RHEL/CentOS
DVD plus RPMs from the yum repository.

Thanks in advance for any suggestion(s).
Andrei


Re: [ovirt-users] Q: Partitioning - oVirt 4.1 & GlusterFS 2-node System

2017-12-13 Thread Andrei V
Hi, Donny,

Thanks for the link.

Do I understand correctly that I need at least a 3-node system to run in
failover mode? So far I plan to deploy only 2 nodes, with either a hosted
or a bare-metal engine.

/The key thing to keep in mind regarding host maintenance and downtime
is that this *converged three node system relies on having at least two
of the nodes up at all times*. If you bring down two machines at once,
you'll run afoul of the Gluster quorum rules that guard us from
split-brain states in our storage, the volumes served by your remaining
host will go read-only, and the VMs stored on those volumes will pause
and require a shutdown and restart in order to run again./

What happens if, in a 2-node GlusterFS system (with a hosted engine), one
node goes down?
A bare-metal engine can manage this situation, but I'm not sure about a
hosted engine.
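The quorum rules quoted above come from Gluster itself. With only 2 data nodes, the common compromise is a third small machine carrying arbiter bricks, which store metadata only and need almost no disk capacity. A hedged sketch, with hypothetical hostnames and the volume name `vmstore` chosen for illustration:

```shell
# Replica 3 with 1 arbiter: two full data bricks plus a metadata-only
# arbiter brick on a third, small machine.
gluster volume create vmstore replica 3 arbiter 1 \
    node1.example.com:/gluster/brick1/vmstore \
    node2.example.com:/gluster/brick1/vmstore \
    arbiter.example.com:/gluster/arbiter/vmstore

# Server-side quorum: bricks stop serving writes if the cluster loses
# its majority, preventing split-brain writes.
gluster volume set vmstore cluster.server-quorum-type server
gluster volume set vmstore cluster.quorum-type auto
```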


On 12/13/2017 11:17 PM, Donny Davis wrote:
> I would start here
> https://ovirt.org/blog/2017/04/up-and-running-with-ovirt-4.1-and-gluster-storage/
>
> Pretty good basic guidance.
>
> Also with software-defined storage it's recommended there are at least
> two "storage" nodes and one arbiter node to maintain quorum.
>
> On Wed, Dec 13, 2017 at 3:45 PM, Andrei V <andre...@starlett.lv
> <mailto:andre...@starlett.lv>> wrote:
>
> Hi,
>
> I'm going to set up a relatively simple 2-node system with oVirt 4.1,
> GlusterFS, and several VMs running.
> Each node is going to be installed on a dual-Xeon system with a single
> RAID 5.
>
> The oVirt Node installer uses a relatively simple default partitioning
> scheme.
> Should I leave it as is, or are there better options?
> I never used GlusterFS before, so any expert opinion is very welcome.
>
> Thanks in advance.
> Andrei
> ___
> Users mailing list
> Users@ovirt.org <mailto:Users@ovirt.org>
> http://lists.ovirt.org/mailman/listinfo/users
> <http://lists.ovirt.org/mailman/listinfo/users>
>
>



[ovirt-users] Q: Partitioning - oVirt 4.1 & GlusterFS 2-node System

2017-12-13 Thread Andrei V
Hi,

I'm going to set up a relatively simple 2-node system with oVirt 4.1,
GlusterFS, and several VMs running.
Each node is going to be installed on a dual-Xeon system with a single
RAID 5.

The oVirt Node installer uses a relatively simple default partitioning
scheme. Should I leave it as is, or are there better options?
I never used GlusterFS before, so any expert opinion is very welcome.
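The default oVirt Node partitioning does not reserve space for Gluster bricks; common practice (which tools like gdeploy automate) is a separate thin-provisioned LV with XFS per brick. A rough sketch, with device names and sizes as placeholders:

```shell
# Carve a thin pool for Gluster bricks out of the RAID 5 device
# (device names and sizes are placeholders).
pvcreate /dev/sdb
vgcreate vg_gluster /dev/sdb
lvcreate -L 1T -T vg_gluster/thinpool
lvcreate -V 900G -T vg_gluster/thinpool -n lv_brick1

# Gluster-recommended XFS inode size, then mount the brick persistently.
mkfs.xfs -i size=512 /dev/vg_gluster/lv_brick1
mkdir -p /gluster/brick1
echo '/dev/vg_gluster/lv_brick1 /gluster/brick1 xfs defaults 0 0' >> /etc/fstab
mount /gluster/brick1
```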

Thanks in advance.
Andrei