>> https://ovirt.org/develop/release-management/features/storage/sharedStorageDomainsAttachedToLocalDC/
Hi,
I have an oVirt setup: a separate PC hosting the engine + 2 nodes (#10 + #11) with local
storage domains (internal RAIDs).
The 1st node, #10, is currently active and can't be turned off.
Since oVirt doesn't support more than 1 host in a data center with a local storage
domain, as described here:
Nope, you don't need an additional engine.
> On 16 Feb 2018, at 15:45, Mark Steele wrote:
>
> Hello again,
>
> I'm building a new host for my cluster and have a quick question about
> required software for joining the host to my cluster.
>
> In my notes from a previous
Hi !
I can't locate the "Sparsify" disk image command anywhere in oVirt 4.2.
Where has it been moved?
Thanks
Andrei
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
Hi !
I ran into an unexpected problem upgrading an oVirt node (installed manually on
CentOS):
This problem has to be fixed manually, otherwise the upgrade command from the
engine also fails.
-> glusterfs-rdma = 3.12.5-2.el7
was installed manually as a dependency resolution for
On 02/10/2018 05:47 PM, Thomas Letherby wrote:
> That's exactly what I needed to know, thanks all.
>
> I'll schedule a script for the nodes to reboot and patch once every week or
> two and then I can let it run without me needing to worry about it.
Is this a shell or Python script with a connection
Hi,
How can I force power off, and then launch (after a timeout, e.g. 20 sec), a
particular VM from a bash or Python script?
Is 20 sec enough for the oVirt engine to be updated after a forced power off?
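Rather than sleeping a fixed 20 seconds and hoping the engine has caught up, a script can poll until the engine reports the VM as down. A minimal sketch of such a poll loop; only the `wait_for` helper below is concrete — the commented SDK calls are assumptions about the oVirt Python SDK, not verified API:

```python
import time

def wait_for(predicate, timeout=60.0, interval=2.0,
             clock=time.monotonic, sleep=time.sleep):
    """Poll predicate() until it returns True or the timeout expires.

    Returns True once the condition holds, False on timeout.
    """
    deadline = clock() + timeout
    while True:
        if predicate():
            return True
        if clock() >= deadline:
            return False
        sleep(interval)

# Hypothetical usage with the oVirt Python SDK (names are assumptions):
#   vm_service.stop(force=True)                      # forced power off
#   if wait_for(lambda: vm_service.get().status == types.VmStatus.DOWN,
#               timeout=60):
#       vm_service.start()
```

This way the 20-second guess becomes an upper bound instead of a hard-coded delay.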
What happened to this wiki? It seems it has been deleted or moved.
http://wiki.ovirt.org/wiki/CLI#Usage
Is this
Hi !
I have a strange and annoying problem with one VM on oVirt node 4.2: weekly
freezes of Ubuntu 16.04.3 with ISPConfig 3.1 active.
ISPConfig is a web GUI frontend (written in PHP) for Apache, Postfix, Dovecot,
Amavis, Clam and ProFTPd.
Separate engine PC, not a hosted engine.
Ubuntu 16.04.3 LTS
Directory: /var/tmp/abrt/ccpp-2018-01-12-00:28:02-1779
> On Mon, 22 Jan 2018 11:31:39 +0200
> Andrei V <andre...@starlett.lv> wrote:
>
>> Hi,
>>
>> Is lldpad.service (Link Layer Discovery Protocol) needed for an oVirt
>> node ?
>>
>>
>>
>> I did everything as user postgres, otherwise it doesn’t work at all.
>>
>>
>> Best regards,
>>
>> Mikhail.
>>
>> 17:58, 22 января 2018 г., Andrei V <andre...@starlett.lv>:
>>
>> Hi !
>>
>> Thanks a lot for excellent
Hi !
Thanks a lot for the excellent addition !
I've got a problem connecting Grafana to oVirt 4.2.
I followed this instruction and created the read-only user usrmonovirt:
https://www.ovirt.org/documentation/data-warehouse/Allowing_Read_Only_Access_to_the_History_Database/
However, from Grafana
Hi,
I'm currently running a node on an HP ProLiant with 2 x Xeon, 4 cores each.
Because of hyperthreading, each physical core is seen as 2.
How does KVM/oVirt reserve cores if, for example, I allocate 4 CPU cores to a VM?
Does it allocate 4 real CPU cores, or 2 cores with 2 threads each?
Each VM
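For what it's worth, the arithmetic behind the allocation question can be sketched: oVirt-style VM CPU settings multiply three knobs (virtual sockets x cores per socket x threads per core), so the same vCPU count can map to different guest topologies. A minimal illustration — the knob names mirror the oVirt edit-VM dialog, but whether the host schedules those vCPUs onto real cores or SMT siblings is a separate question:

```python
def vcpu_count(sockets: int, cores_per_socket: int, threads_per_core: int) -> int:
    """Total vCPUs a guest sees for a given virtual topology."""
    return sockets * cores_per_socket * threads_per_core

# Two ways to present "4 CPUs" to a guest:
assert vcpu_count(1, 4, 1) == 4   # 4 cores, no SMT visible to the guest
assert vcpu_count(1, 2, 2) == 4   # 2 cores x 2 threads, guest sees SMT
```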
Hi,
Is lldpad.service (Link Layer Discovery Protocol) needed for an oVirt node ?
Thanks
Hi,
I've got a strange problem with oVirt Node 4.2 installed on CentOS 7 with a
mainline 4.14 kernel on an HP ProLiant ML350.
A kworker process loaded 1 CPU core at 100%, and since it didn't
finish even after several hours, it was clearly a zombie.
I logged into each VM, and powered them off with
p3s4f1?
NetworkManager is disabled on the node.
TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=none
DEFROUTE=no
IPV4_FAILURE_FATAL=no
IPV6INIT=no
NM_CONTROLLED=no
NAME=enp3s4f1
UUID=09f96366-e4a9-4e6e-a090-7267ed102d36
DEVICE=enp3s4f1
ONBOOT=yes
Thanks in advance
Andrei V
Hi,
Is it correct that starting from oVirt 4.2, a GlusterFS storage domain is
used with libgfapi instead of FUSE (based on the info linked below)?
https://www.ovirt.org/develop/release-management/features/storage/glusterfs-storage-domain/
https://gerrit.ovirt.org/#/c/44061/
On 01/01/2018 10:10 AM, Yaniv Kaul wrote:
>
> On Mon, Jan 1, 2018 at 12:50 AM, Andrei V <andre...@starlett.lv
> <mailto:andre...@starlett.lv>> wrote:
>
> Hi !
>
> I'm installing 2-node failover cluster (2 x Xeon servers with
> local RAID
>
Hi !
I'm installing a 2-node failover cluster (2 x Xeon servers with local RAID
5 / ext4 for oVirt storage domains).
Now I have a dilemma: use GlusterFS replica 2, or stick with NFS?
The 4.2 engine is running on separate hardware.
Each node has its own storage domain (on internal RAID).
All
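One consideration in the replica 2 vs NFS dilemma is write quorum: with a strict majority over only two replicas, neither node can be lost without halting writes (or risking split-brain), which is why replica 3 or an arbiter is commonly recommended for GlusterFS. The arithmetic, as a sketch:

```python
def write_quorum(n_replicas: int) -> int:
    """Minimum live replicas for a strict-majority write quorum."""
    return n_replicas // 2 + 1

# replica 2: quorum is 2 -> losing either node stops safe writes
assert write_quorum(2) == 2
# replica 3 (or 2 + arbiter): quorum is 2 -> one node may fail
assert write_quorum(3) == 2
```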
Hi,
I would like to ask how the (almost) high-availability file sync algorithm
works with a very simple setup:
2 nodes (1 main and another backup) + engine on a separate PC;
each node is a Xeon server with RAID 5 for the data domain.
With GlusterFS, a copy of each VM's disk(s) is stored on each node's data
Hi !
I updated an oVirt node 4.1 today via yum, installed from the oVirt DVD.
The update was a minor version change.
Now all the software I had installed, including mc, Samba, etc., has been
lost. It looks like the oVirt node is updated with the whole 600 MB system
image, not just with rpms (please correct me if
to maintain quorum.
>
> On Wed, Dec 13, 2017 at 3:45 PM, Andrei V <andre...@starlett.lv
> <mailto:andre...@starlett.lv>> wrote:
>
> Hi,
>
> I'm going to set up a relatively simple 2-node system with oVirt 4.1,
> GlusterFS, and several VMs running.
Hi,
I'm going to set up a relatively simple 2-node system with oVirt 4.1,
GlusterFS, and several VMs running.
Each node is going to be installed on a dual Xeon system with a single RAID 5.
The oVirt node installer uses a relatively simple default partitioning scheme.
Should I leave it as is, or are there better