[ovirt-users] oVirt nodes with local storage

2024-01-15 Thread Wild Star
On the weekend I upgraded my self-hosted oVirt engine to 4.5.6.  All went well 
with that!

I also spotted that there was an update for all my 4.5.4 nodes, but something 
has changed with the repo overnight, because none of my nodes see updates 
anymore, though the update remains available here … 
https://resources.ovirt.org/pub/ovirt-4.5/iso/ovirt-node-ng-installer/.

Yesterday, I attempted to update just one of my nodes and ran into this snag… 
“Local storage domains were found on the same filesystem as / ! Please migrate 
the data to a new LV before upgrading, or you will lose the VMs”.

I’ve always stored my VMs in a separate /DATA directory off the root 
filesystem, and shared that local storage using NFS.  I know it’s not ideal, but 
with frequent, reliable backups it has served me well for many years.
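
For anyone hitting the same check, confirming what the installer is complaining 
about is quick, and the fix it asks for boils down to giving the local storage 
its own logical volume. A rough sketch only (the VG name, size and filesystem 
are placeholders; the upgrade guide linked further down describes the supported 
procedure):

df -h /DATA                   # shows /DATA sitting on the root filesystem
findmnt -T /DATA              # same check, also shows the backing device
lvcreate -L 500G -n data myvg # carve out a dedicated LV (placeholder VG/size)
mkfs.xfs /dev/myvg/data
mount /dev/myvg/data /mnt
cp -a /DATA/. /mnt/           # copy the domain data across
# then add an fstab entry mounting the new LV at /DATA and re-export it over NFS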

A Google search revealed that others have had similar local storage issues, 
along with suggestions on ways to mitigate the problem, including the oVirt 
documentation found here… 
https://www.ovirt.org/documentation/upgrade_guide/index.html#Upgrading_hypervisor_preserving_local_storage_4-3_local_db, 
but that approach is not my preferred fix.

In the past, node updates with my local storage were not a problem and were easy 
peasy!  From some of the discussions I saw at Red Hat, I have deduced (maybe 
wrongly) that there was an issue that necessitated a fix, which introduced a 
required check for local storage during the upgrade process.

I probably should move all the local storage off the oVirt nodes, but at this 
time, that is easier said than done.

I’m posting here only to see if there are perhaps other ideas or perspectives I 
may not have thought of and should consider with my local storage.

Thank you all in advance, much appreciated, and of course, thank you all for 
supporting oVirt!
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/YR23KNJSUXH4DSAKTC3KEUKEC7V54VD6/


[ovirt-users] oVirt Nodes 'Setting Host state to Non-Operational' - looking for the cause.

2022-03-16 Thread simon
Two days ago I found that 2 of the 3 oVirt nodes had been set to 
'Non-Operational'. GlusterFS seemed to be OK from the command line, but the 
oVirt engine WebUI was reporting 2 out of 3 bricks per volume as down, and the 
event logs were filling up with the following types of messages.


Failed to connect Host ddmovirtprod03 to the Storage Domains data03.
The error message for connection ddmovirtprod03-strg:/data03 returned by VDSM 
was: Problem while trying to mount target
Failed to connect Host ddmovirtprod03 to Storage Server
Host ddmovirtprod03 cannot access the Storage Domain(s) data03 attached to the 
Data Center DDM_Production_DC. Setting Host state to Non-Operational.
Failed to connect Host ddmovirtprod03 to Storage Pool 


Host ddmovirtprod01 reports about one of the Active Storage Domains as 
Problematic.
Host ddmovirtprod01 cannot access the Storage Domain(s) data03 attached to the 
Data Center DDM_Production_DC. Setting Host state to Non-Operational.
Failed to connect Host ddmovirtprod01 to Storage Pool DDM_Production_DC


The following is from the vdsm.log on host01:

[root@ddmovirtprod01 vdsm]# tail -f /var/log/vdsm/vdsm.log | grep "WARN"
2022-03-15 11:37:14,299+ WARN (ioprocess/232748) [IOProcess] 
(6bf1ef03-77e1-423b-850e-9bb6030b590d) Failed to create a probe file: 
'/rhev/data-center/mnt/glusterSD/ddmovirtprod03-strg:_data03/.prob-6c101766-4e5d-40c6-8fa8-0f7e3b3e931e',
 error: 'Stale file handle' (init:461)
2022-03-15 11:37:24,313+ WARN (ioprocess/232748) [IOProcess] 
(6bf1ef03-77e1-423b-850e-9bb6030b590d) Failed to create a probe file: 
'/rhev/data-center/mnt/glusterSD/ddmovirtprod03-strg:_data03/.prob-c3fa017b-94dc-47d1-89a4-8ee046509a32',
 error: 'Stale file handle' (init:461)
2022-03-15 11:37:34,325+ WARN (ioprocess/232748) [IOProcess] 
(6bf1ef03-77e1-423b-850e-9bb6030b590d) Failed to create a probe file: 
'/rhev/data-center/mnt/glusterSD/ddmovirtprod03-strg:_data03/.prob-e173ecac-4d4d-4b59-a437-61eb5d0beb83',
 error: 'Stale file handle' (init:461)
2022-03-15 11:37:44,337+ WARN (ioprocess/232748) [IOProcess] 
(6bf1ef03-77e1-423b-850e-9bb6030b590d) Failed to create a probe file: 
'/rhev/data-center/mnt/glusterSD/ddmovirtprod03-strg:_data03/.prob-baf13698-0f43-4672-90a4-86cecdf9f8d0',
 error: 'Stale file handle' (init:461)
2022-03-15 11:37:54,350+ WARN (ioprocess/232748) [IOProcess] 
(6bf1ef03-77e1-423b-850e-9bb6030b590d) Failed to create a probe file: 
'/rhev/data-center/mnt/glusterSD/ddmovirtprod03-strg:_data03/.prob-1e92fdfd-d8e9-48b4-84a9-a2b84fc0d14c',
 error: 'Stale file handle' (init:461)


After trying different methods to resolve it without success, I did the following 
(the rough commands behind the Gluster steps are sketched after the list).

1. Moved any VM disks using Storage Domain data03 onto other Storage Domains.
2. Placed the data03 Storage Domain into Maintenance mode.
3. Placed host03 into Maintenance mode, stopping Gluster services and rebooting.
4. Ensured all bricks were up, the peers were connected, and healing had started.
5. Once the Gluster volumes were healed, I activated host03, at which point host01 
also activated.
6. Host01 was showing as disconnected on most bricks, so I rebooted it, which 
resolved this.
7. Activated Storage Domain data03 without issue.
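
Roughly, the host-side commands behind steps 3 to 5 were along these lines (a 
sketch only, using the data03 volume name from above):

gluster peer status                   # all peers connected?
gluster volume status data03          # bricks and self-heal daemons up?
gluster volume heal data03 info       # entries still pending heal
systemctl stop glusterd               # on host03, before the reboot
systemctl status glusterd             # after the reboot, confirm glusterd is back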

The system has been left for 24hrs with no further issues.

The issue is now resolved, but it would be helpful to know what happened to cause 
the issues with the Storage Domain data03, and where I should look to confirm it.

Regards

Simon...
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/55XNGNKOGS3ONWTWDGGJSBORZ2D2MZUT/


[ovirt-users] oVirt nodes keeps updating and rebooting

2022-01-17 Thread Harry O
Hi,

After HE deployment my oVirt nodes keep updating and rebooting, even though I 
disabled node updates at the beginning of the deployment wizard.
Why is this, and how can I stop it from happening?
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/LDFICZ2R7WYN6UDD2SBC5PWTMLD7LSNU/


[ovirt-users] oVirt nodes not responding

2019-11-27 Thread Tim Herrmann
Hi everyone,

we have an oVirt cluster with 5 nodes, and 3 of them provide the storage
with GlusterFS and replica 3.
The cluster is running 87 VMs and has 9 TB of storage, of which 4 TB is in use.
The version of the oVirt engine is 4.1.8.2 and GlusterFS is 3.8.15.
The servers are running in an HP blade center and are connected to each other
with 10 Gbit.

We currently have a problem where all oVirt nodes periodically stop responding
in the cluster, with the following error messages in the oVirt
web interface:

VDSM glustervirt05 command GetGlusterVolumeHealInfoVDS failed: Message
timeout which can be caused by communication issues
Host glustervirt05 is not responding. It will stay in Connecting state for
a grace period of 68 seconds and after that an attempt to fence the host
will be issued.
Host glustervirt05 does not enforce SELinux. Current status: PERMISSIVE
Executing power management status on Host glustervirt05 using Proxy Host
glustervirt02 and Fence Agent ilo4:xxx.xxx.xxx.xxx.
Manually synced the storage devices from host glustervirt05
Status of host glustervirt05 was set to Up.


In the vdsm logfile I can find the following message:
2019-11-26 11:18:22,909+0100 WARN  (vdsm.Scheduler) [Executor] Worker
blocked:  timeout=60, duration=180
at 0x316a6d0> task#=2859802 at 0x1b70dd0> (executor:351)


And I found out that the gluster heal info command takes very long:
[root@glustervirt01 ~]# time gluster volume heal data info
Brick glustervirt01:/gluster/data/brick1
Status: Connected
Number of entries: 0

Brick glustervirt02:/gluster/data/brick1
Status: Connected
Number of entries: 0

Brick glustervirt03:/gluster/data/brick2
Status: Connected
Number of entries: 0


real 3m3.626s
user 0m0.593s
sys 0m0.559s
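
To see where those three minutes go, a short profiling run on the volume might
help (a sketch only; profiling adds some overhead while it is enabled):

gluster volume profile data start
time gluster volume heal data info     # reproduce the slow call
gluster volume profile data info       # per-brick latency of the file operations
gluster volume profile data stop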


Another strange behavior is that there is one virtual machine (a PostgreSQL
database) which stops running unexpectedly every one or two days ...
The only thing that has been changed on the VM in the recent past was a
resize of the disk.
VM replication-zabbix is down with error. Exit message: Lost connection
with qemu process.

And when we add or delete a larger disk of approximately 100 GB in
GlusterFS, the GlusterFS cluster freaks out and won't respond anymore.
This also results in paused VMs ...


Has anyone an idea what could cause such problems?
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/PT2LS4YLJMPY22IAVFA723HGPJ5PXPAG/


Re: [ovirt-users] Ovirt nodes NFS connection

2018-04-02 Thread Tal Bar-Or
Thanks all for your answers, it's clearer now

On Thu, Mar 22, 2018 at 7:24 PM, FERNANDO FREDIANI <
fernando.fredi...@upx.com> wrote:

> Hello Tal
>
> It seems you have a very big overkill on your environment. I would say
> that normally 2 x 10Gb interfaces can do A LOT for nodes with proper
> redundancy. Just creating Vlans you can separate traffic and apply, if
> necessary, QoS per Vlan to guarantee which one is more priority.
>
> If you have 2 x 10Gb in a LACP 802.3ad Aggregation in theory you can do
> 20Gbps of aggregated traffic. If you have 10Gb of constant storage traffic
> it is already huge, so I normally consider that Storage will not go over a
> few Gbps and VMs another few Gb which fit perfectly within even 10Gb
>
> The only exception I would make is if you have a very intensive (and I am
> not talking about IOPS, but throughput) from your storage then may be worth
> to have 2 x 10Gb for Storage and 2 x 10Gb for all other networks
> (Managment, VMs Traffic, Migration(with cap on traffic), etc).
>
> Regards
> Fernando
>
> 2018-03-21 16:41 GMT-03:00 Yaniv Kaul :
>
>>
>>
>> On Wed, Mar 21, 2018 at 12:41 PM, Tal Bar-Or  wrote:
>>
>>> Hello All,
>>>
>>> I am about to deploy a new Ovirt platform, the platform  will consist 4
>>> Ovirt nodes including management, all servers nodes and storage will have
>>> the following config:
>>>
>>> *nodes server*
>>> 4x10G ports network cards
>>> 2x10G will be used for VM network.
>>> 2x10G will be used for storage connection
>>> 2x1Ge 1xGe for nodes management
>>>
>>>
>>> *Storage *4x10G ports network cards
>>> 3 x10G for NFS storage mount Ovirt nodes
>>>
>>> Now given above network configuration layout, what is best practices in
>>> terms of nodes for storage NFS connection, throughput and path resilience
>>> suggested to use
>>> First option each node 2x 10G lacp and on storage side 3x10G lacp?
>>>
>>
>> I'm not sure how you'd get more throughout than you can get in a single
>> physical link. You will get redundancy.
>>
>> Of course, on the storage side you might benefit from multiple bonded
>> interfaces.
>>
>>
>>> The second option creates 3 VLAN's assign each node on that 3 VLAN's
>>> across 2 nic, and on storage, side assigns 3 nice across 3 VLANs?
>>>
>>
>> Interesting - but I assume it'll still stick to a single physical link.
>> Y.
>>
>> Thanks
>>>
>>>
>>>
>>>
>>>
>>> --
>>> Tal Bar-or
>>>
>>> ___
>>> Users mailing list
>>> Users@ovirt.org
>>> http://lists.ovirt.org/mailman/listinfo/users
>>>
>>>
>>
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>>
>


-- 
Tal Bar-or
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Ovirt nodes NFS connection

2018-03-22 Thread FERNANDO FREDIANI
Hello Tal

It seems like a very big overkill for your environment. I would say that
normally 2 x 10Gb interfaces can do A LOT for nodes, with proper redundancy.
Just by creating VLANs you can separate traffic and, if necessary, apply QoS
per VLAN to control which traffic gets priority.

If you have 2 x 10Gb in a LACP 802.3ad aggregation, in theory you can do
20Gbps of aggregated traffic. 10Gb of constant storage traffic is already
huge, so I normally assume that storage will not go over a few Gbps and VMs
another few Gbps, which fits comfortably within even a single 10Gb link.

The only exception I would make is if you have very intensive traffic (and I am
not talking about IOPS, but throughput) from your storage; then it may be worth
having 2 x 10Gb for storage and 2 x 10Gb for all other networks
(management, VM traffic, migration (with a cap on traffic), etc.).
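
On the host side the oVirt engine's "Setup Host Networks" dialog normally builds
the bond and VLANs for you, but conceptually it comes down to something like the
following (an nmcli sketch only; interface names, VLAN IDs and addresses are
placeholders):

nmcli con add type bond con-name bond0 ifname bond0 mode 802.3ad
nmcli con add type bond-slave ifname ens1f0 master bond0
nmcli con add type bond-slave ifname ens1f1 master bond0
nmcli con add type vlan con-name bond0.100 dev bond0 id 100 ip4 10.0.100.11/24   # e.g. storage
nmcli con add type vlan con-name bond0.200 dev bond0 id 200                      # e.g. VM traffic

Keep in mind that 802.3ad needs a matching LACP configuration on the switch side.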

Regards
Fernando

2018-03-21 16:41 GMT-03:00 Yaniv Kaul :

>
>
> On Wed, Mar 21, 2018 at 12:41 PM, Tal Bar-Or  wrote:
>
>> Hello All,
>>
>> I am about to deploy a new Ovirt platform, the platform  will consist 4
>> Ovirt nodes including management, all servers nodes and storage will have
>> the following config:
>>
>> *nodes server*
>> 4x10G ports network cards
>> 2x10G will be used for VM network.
>> 2x10G will be used for storage connection
>> 2x1Ge 1xGe for nodes management
>>
>>
>> *Storage *4x10G ports network cards
>> 3 x10G for NFS storage mount Ovirt nodes
>>
>> Now given above network configuration layout, what is best practices in
>> terms of nodes for storage NFS connection, throughput and path resilience
>> suggested to use
>> First option each node 2x 10G lacp and on storage side 3x10G lacp?
>>
>
> I'm not sure how you'd get more throughout than you can get in a single
> physical link. You will get redundancy.
>
> Of course, on the storage side you might benefit from multiple bonded
> interfaces.
>
>
>> The second option creates 3 VLAN's assign each node on that 3 VLAN's
>> across 2 nic, and on storage, side assigns 3 nice across 3 VLANs?
>>
>
> Interesting - but I assume it'll still stick to a single physical link.
> Y.
>
> Thanks
>>
>>
>>
>>
>>
>> --
>> Tal Bar-or
>>
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Ovirt nodes NFS connection

2018-03-21 Thread Yaniv Kaul
On Wed, Mar 21, 2018 at 12:41 PM, Tal Bar-Or  wrote:

> Hello All,
>
> I am about to deploy a new Ovirt platform, the platform  will consist 4
> Ovirt nodes including management, all servers nodes and storage will have
> the following config:
>
> *nodes server*
> 4x10G ports network cards
> 2x10G will be used for VM network.
> 2x10G will be used for storage connection
> 2x1Ge 1xGe for nodes management
>
>
> *Storage *4x10G ports network cards
> 3 x10G for NFS storage mount Ovirt nodes
>
> Now given above network configuration layout, what is best practices in
> terms of nodes for storage NFS connection, throughput and path resilience
> suggested to use
> First option each node 2x 10G lacp and on storage side 3x10G lacp?
>

I'm not sure how you'd get more throughput than you can get in a single
physical link. You will get redundancy.

Of course, on the storage side you might benefit from multiple bonded
interfaces.


> The second option creates 3 VLAN's assign each node on that 3 VLAN's
> across 2 nic, and on storage, side assigns 3 nice across 3 VLANs?
>

Interesting - but I assume it'll still stick to a single physical link.
Y.

Thanks
>
>
>
>
>
> --
> Tal Bar-or
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Ovirt nodes NFS connection

2018-03-21 Thread Tal Bar-Or
Hello All,

I am about to deploy a new oVirt platform. The platform will consist of 4
oVirt nodes including management; all server nodes and the storage will have
the following config:

Node servers:
4x 10G network ports
2x 10G will be used for the VM network.
2x 10G will be used for the storage connection.
2x 1GbE ports, 1x GbE used for node management.


Storage:
4x 10G network ports
3x 10G for the NFS storage mounts to the oVirt nodes

Now, given the above network configuration layout, what is the best practice,
in terms of throughput and path resilience, for the nodes' NFS storage
connection?
First option: each node with 2x 10G in LACP, and 3x 10G in LACP on the storage side?

Second option: create 3 VLANs, assign each node to those 3 VLANs across 2 NICs,
and on the storage side assign 3 NICs across the 3 VLANs?
Thanks





-- 
Tal Bar-or
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] oVirt Nodes Monitoring

2017-03-08 Thread Anantha Raghava

Hi,

You can also monitor by enabling SNMP on the hosts. For this purpose, using 
the MIB, you need to build a Zabbix template as detailed in the link 
here: https://github.com/jensdepuydt/zabbix-ovirt


We are monitoring hosts running CentOS 7 and oVirt 4.0.x using this 
method.
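
Enabling and testing SNMP on a host is only a couple of commands (a sketch; the
community string and host name are placeholders, and snmpd.conf still has to
expose the OIDs the template expects):

systemctl enable snmpd
systemctl start snmpd
snmpwalk -v2c -c public ovirt-host1 system    # quick check from the Zabbix server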


I will share the Zabbix template we have built tomorrow, as at the 
moment I do not have access to my resources.


--

Thanks & Regards,


Anantha Raghava

eXzaTech Consulting And Services Pvt. Ltd.



Thanks Arsène

Just for others benefit just adding the repository and installing the 
agent won't work. Need to add it to SELINUX. For that I used the 
following command:


semodule -i zabbix_agent_setrlimit.pp

Fernando


On 07/03/2017 05:42, Arsène Gschwind wrote:

Hi Fernando,

We do monitor our oVirt hosts using Zabbix and we add the zabbix repo 
to the host so we keep it up to date.


rgds,
Arsène


On 03/06/2017 03:00 PM, FERNANDO FREDIANI wrote:

Hi.

How do you guys monitor your hosts with Zabbix ?
I see the oVirt Nodes have snmpd service installed and could be used 
for basic things but ideally, for Zabbix is good to use its agent.


What would be the best way to install its zabbix-agent package and 
make it persistent ? Add its repository to the oVirt-Node in 
/etc/yum.repos.d/ and install it with yum or install using a .rpm 
directly ?


Thanks
Fernando

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] oVirt Nodes Monitoring

2017-03-08 Thread FERNANDO FREDIANI

Thanks Arsène

Just for others' benefit: just adding the repository and installing the 
agent won't work. You also need to load a policy module into SELinux. For that 
I used the following command:


semodule -i zabbix_agent_setrlimit.pp
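
Putting it together, the rough sequence on a host looks like this (a sketch;
the Zabbix repository setup depends on your Zabbix and EL version, and the
policy module is the one mentioned above):

yum install -y zabbix-agent              # after adding the Zabbix repository
semodule -i zabbix_agent_setrlimit.pp    # load the SELinux policy module
systemctl enable zabbix-agent
systemctl start zabbix-agent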

Fernando


On 07/03/2017 05:42, Arsène Gschwind wrote:

Hi Fernando,

We do monitor our oVirt hosts using Zabbix and we add the zabbix repo 
to the host so we keep it up to date.


rgds,
Arsène


On 03/06/2017 03:00 PM, FERNANDO FREDIANI wrote:

Hi.

How do you guys monitor your hosts with Zabbix ?
I see the oVirt Nodes have snmpd service installed and could be used 
for basic things but ideally, for Zabbix is good to use its agent.


What would be the best way to install its zabbix-agent package and 
make it persistent ? Add its repository to the oVirt-Node in 
/etc/yum.repos.d/ and install it with yum or install using a .rpm 
directly ?


Thanks
Fernando

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] oVirt Nodes Monitoring

2017-03-07 Thread Arsène Gschwind

Hi Fernando,

We do monitor our oVirt hosts using Zabbix, and we add the Zabbix repo to 
the hosts so we can keep it up to date.


rgds,
Arsène


On 03/06/2017 03:00 PM, FERNANDO FREDIANI wrote:

Hi.

How do you guys monitor your hosts with Zabbix ?
I see the oVirt Nodes have snmpd service installed and could be used 
for basic things but ideally, for Zabbix is good to use its agent.


What would be the best way to install its zabbix-agent package and 
make it persistent ? Add its repository to the oVirt-Node in 
/etc/yum.repos.d/ and install it with yum or install using a .rpm 
directly ?


Thanks
Fernando

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] oVirt Nodes Monitoring

2017-03-06 Thread FERNANDO FREDIANI

Hi.

How do you guys monitor your hosts with Zabbix?
I see the oVirt Nodes have the snmpd service installed, which could be used for 
basic things, but ideally for Zabbix it is better to use its agent.


What would be the best way to install the zabbix-agent package and make 
it persistent? Add its repository to the oVirt Node in 
/etc/yum.repos.d/ and install it with yum, or install it from a .rpm directly?


Thanks
Fernando

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Ovirt Nodes - Which OS

2016-09-26 Thread Fabian Deutsch
+1 for CentOS

It's currently probably the best tested and most stable platform when
looking at CentOS and Fedora.

- fabian

On Fri, Sep 23, 2016 at 12:06 PM, Maton, Brett  wrote:
> RHEL / CentOS 7 Are currently recommended, amongst others for Ovirt4.
>
>   You're getting the packages from ovirt's repository not the distro base so
> no need to worry that one distro or another is behind which would most
> likely be the case if you were getting base packages.
>
> On 23 September 2016 at 10:50, Carlos García Gómez
>  wrote:
>>
>> Hello,
>>
>> I was testing  Ovirt inside a VDI proyect.
>> Now, I know there is new version of Ovrit (4.X) and VDI Software so I have
>> decided to remake the installation of all.
>>
>> I wonder which is the best OS platafform for the Host (Ovirt - Nodes).
>> (Fedora / Red Hat Enterprise Linux / CentOS) Now I have CentOS 7 64Bit
>> but... I am not sure.
>>
>> What do you think about it? Fedora or CentOS?
>> Which version? Fedora 19? Fedora 24? CentOS 6? CentOS 7?
>>
>> I always had the feeling that Fedora is a little ahead against CentOS for
>> example oVirt packages.
>>
>> Thank  you
>>
>> Regards,
>>
>> Carlos
>>
>>
>>
>>
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>



-- 
Fabian Deutsch 
RHEV Hypervisor
Red Hat
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Ovirt Nodes - Which OS

2016-09-23 Thread Maton, Brett
RHEL / CentOS 7 are currently recommended, amongst others, for oVirt 4.

  You're getting the packages from oVirt's repository, not the distro base,
so there is no need to worry that one distro or another is behind, which would
most likely be the case if you were getting base packages.
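
You can confirm where an installed package came from with yum, for example (a
quick sketch):

yum repolist enabled                     # lists the ovirt-* repositories
yum info vdsm | grep -i "from repo"      # which repo an installed package came from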

On 23 September 2016 at 10:50, Carlos García Gómez <
carlos.gar...@f-integra.org> wrote:

> Hello,
>
> I was testing  Ovirt inside a VDI proyect.
> Now, I know there is new version of Ovrit (4.X) and VDI Software so I have
> decided to remake the installation of all.
>
> I wonder which is the best OS platafform for the *Host (Ovirt - Nodes)*.
> (Fedora / Red Hat Enterprise Linux / CentOS) Now I have CentOS 7 64Bit
> but... I am not sure.
>
> What do you think about it? Fedora or CentOS?
> Which version? Fedora 19? Fedora 24? CentOS 6? CentOS 7?
>
> I always had the feeling that Fedora is a little ahead against CentOS for
> example oVirt packages.
>
> Thank  you
>
> Regards,
>
> Carlos
>
>
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Ovirt Nodes - Which OS

2016-09-23 Thread Carlos García Gómez
Hello,

I was testing oVirt inside a VDI project.
Now I know there is a new version of oVirt (4.x) and of the VDI software, so I have 
decided to redo the whole installation.

I wonder which is the best OS platform for the hosts (oVirt nodes): Fedora, 
Red Hat Enterprise Linux, or CentOS. Right now I have CentOS 7 64-bit, but... I am 
not sure.

What do you think about it? Fedora or CentOS?
Which version? Fedora 19? Fedora 24? CentOS 6? CentOS 7?

I always had the feeling that Fedora is a little ahead of CentOS, for 
example for oVirt packages.

Thank  you

Regards,

Carlos


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] oVirt nodes

2015-12-22 Thread Dominique Taffin
Hello,

 Official numbers would also be interesting to me.
I can tell you that we do have some larger infrastructures running
on several engine hosts; one of them, for example, has 3 datacenters,
7 clusters, and 200 hosts. We do have noticeable performance issues on
that engine host. The engine is already running on rather performant
hardware, including 12 cores and SSDs, but nevertheless we have load
averages >20 on it and very slow response times from the engine
itself.

best,
 Dominique Taffin

Systemadministrator
IT Operations Platform Core Services

1&1 Internet SE | Brauerstraße 48 | 76135 Karlsruhe | Germany
Phone: +49 721 91374-3912
E-Mail: dominique.taf...@1und1.de | Web: www.1und1.de

Headquarters Montabaur, District Court (Amtsgericht) Montabaur, HRB 24498

Management Board: Christian Bigatà Joseph, Robert Hoffmann, Hans-Henning
Kettler, Uwe Lamnek
Chairman of the Supervisory Board: Michael Scheeren


Member of United Internet

This e-mail may contain confidential and/or privileged information.
If you are not the intended recipient of this e-mail, you are hereby
notified that saving, distribution or use of the content of this e-
mail in any way is prohibited. If you have received this e-mail in
error, please notify the sender and delete the e-mail.

-Original Message-
From: Budur Nagaraju 
To: users 
Subject: [ovirt-users] oVirt nodes
Date: Mon, 21 Dec 2015 13:33:21 +0530

HI 


Need info on how many nodes and vms are supported in one single
oVirt engine ?

Thanks,
Nagaraju

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] oVirt nodes

2015-12-21 Thread Budur Nagaraju
HI


Need info on how many nodes and VMs are supported by one single oVirt
engine?

Thanks,
Nagaraju
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Ovirt nodes lose up status

2014-07-23 Thread Fabien CARRE
The versions are:
- oVirt Engine Version: 3.4.2-1.el6
- oVirt Node Hypervisor release 3.0.4 (1.0.201401291204.el6)


On 23 July 2014 18:26, Federico Alberto Sayd  wrote:

>  On 23/07/14 12:19, Fabien CARRE wrote:
>
>   Hello,
> I am experiencing an issue with on my current ovirt DC.
> The setup is 2 machines running Ovit Nodes connected to a LaCie 12big Rack
> Fibre 8, the nodes are connected to both controllers.. This configuration
> has worked fine for a couple of weeks.
>
>  Recently the nodes go from Up status to Non Operational. When I activate
> them back they don't last more than 2 minutes.
>
>
> /var/logl/messages on both nodes shows the same errors :
> Jul 23 15:14:58 localhost kernel: device-mapper: table: 253:0: adding
> target device sdc caused an alignment inconsistency:
> physical_block_size=4096, logical_block_size=512, alignment_offset=170496,
> start=0
> Jul 23 15:14:58 localhost kernel: device-mapper: table: 253:0: adding
> target device sdc caused an alignment inconsistency:
> physical_block_size=4096, logical_block_size=512, alignment_offset=170496,
> start=0
> Jul 23 15:14:58 localhost kernel: device-mapper: table: 253:1: adding
> target device sdd caused an alignment inconsistency:
> physical_block_size=262144, logical_block_size=512, alignment_offset=16384,
> start=0
> Jul 23 15:14:58 localhost kernel: device-mapper: table: 253:1: adding
> target device sdd caused an alignment inconsistency:
> physical_block_size=262144, logical_block_size=512, alignment_offset=16384,
> start=0
>
> Jul 23 15:15:58 localhost vdsm scanDomains WARNING Metadata collection for
> domain path /rhev/data-center/mnt/rhevm:_opt_exports_export
> timedout#012Traceback (most recent call last):#012  File
> "/usr/share/vdsm/storage/fileSD.py", line 662, in collectMetaFiles#012
> sd.DOMAIN_META_DATA))#012  File
> "/usr/share/vdsm/storage/remoteFileHandler.py", line 297, in
> callCrabRPCFunction#012*args, **kwargs)#012  File
> "/usr/share/vdsm/storage/remoteFileHandler.py", line 184, in
> callCrabRPCFunction#012rawLength = self._recvAll(LENGTH_STRUCT_LENGTH,
> timeout)#012  File "/usr/share/vdsm/storage/remoteFileHandler.py", line
> 150, in _recvAll#012raise Timeout()#012Timeout
> Jul 23 15:16:00 localhost vdsm scanDomains WARNING Metadata collection for
> domain path /rhev/data-center/mnt/rhevm:_opt_exports_export
> timedout#012Traceback (most recent call last):#012  File
> "/usr/share/vdsm/storage/fileSD.py", line 662, in collectMetaFiles#012
> sd.DOMAIN_META_DATA))#012  File
> "/usr/share/vdsm/storage/remoteFileHandler.py", line 297, in
> callCrabRPCFunction#012*args, **kwargs)#012  File
> "/usr/share/vdsm/storage/remoteFileHandler.py", line 184, in
> callCrabRPCFunction#012rawLength = self._recvAll(LENGTH_STRUCT_LENGTH,
> timeout)#012  File "/usr/share/vdsm/storage/remoteFileHandler.py", line
> 150, in _recvAll#012raise Timeout()#012Timeout
> Jul 23 15:16:09 localhost vdsm scanDomains WARNING Metadata collection for
> domain path /rhev/data-center/mnt/rhevm:_opt_exports_export
> timedout#012Traceback (most recent call last):#012  File
> "/usr/share/vdsm/storage/fileSD.py", line 662, in collectMetaFiles#012
> sd.DOMAIN_META_DATA))#012  File
> "/usr/share/vdsm/storage/remoteFileHandler.py", line 297, in
> callCrabRPCFunction#012*args, **kwargs)#012  File
> "/usr/share/vdsm/storage/remoteFileHandler.py", line 184, in
> callCrabRPCFunction#012rawLength = self._recvAll(LENGTH_STRUCT_LENGTH,
> timeout)#012  File "/usr/share/vdsm/storage/remoteFileHandler.py", line
> 150, in _recvAll#012raise Timeout()#012Timeout
>
>
>  Has anyone met such a problem ?
>
>  Thank you
>
>  Fabien Carré
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>  Ovirt version?
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Ovirt nodes lose up status

2014-07-23 Thread Federico Alberto Sayd

On 23/07/14 12:19, Fabien CARRE wrote:

Hello,
I am experiencing an issue with on my current ovirt DC.
The setup is 2 machines running Ovit Nodes connected to a LaCie 12big 
Rack Fibre 8, the nodes are connected to both controllers.. This 
configuration has worked fine for a couple of weeks.


Recently the nodes go from Up status to Non Operational. When I 
activate them back they don't last more than 2 minutes.



/var/logl/messages on both nodes shows the same errors :
Jul 23 15:14:58 localhost kernel: device-mapper: table: 253:0: adding 
target device sdc caused an alignment inconsistency: 
physical_block_size=4096, logical_block_size=512, 
alignment_offset=170496, start=0
Jul 23 15:14:58 localhost kernel: device-mapper: table: 253:0: adding 
target device sdc caused an alignment inconsistency: 
physical_block_size=4096, logical_block_size=512, 
alignment_offset=170496, start=0
Jul 23 15:14:58 localhost kernel: device-mapper: table: 253:1: adding 
target device sdd caused an alignment inconsistency: 
physical_block_size=262144, logical_block_size=512, 
alignment_offset=16384, start=0
Jul 23 15:14:58 localhost kernel: device-mapper: table: 253:1: adding 
target device sdd caused an alignment inconsistency: 
physical_block_size=262144, logical_block_size=512, 
alignment_offset=16384, start=0


Jul 23 15:15:58 localhost vdsm scanDomains WARNING Metadata collection 
for domain path /rhev/data-center/mnt/rhevm:_opt_exports_export 
timedout#012Traceback (most recent call last):#012  File 
"/usr/share/vdsm/storage/fileSD.py", line 662, in 
collectMetaFiles#012sd.DOMAIN_META_DATA))#012  File 
"/usr/share/vdsm/storage/remoteFileHandler.py", line 297, in 
callCrabRPCFunction#012*args, **kwargs)#012  File 
"/usr/share/vdsm/storage/remoteFileHandler.py", line 184, in 
callCrabRPCFunction#012rawLength = 
self._recvAll(LENGTH_STRUCT_LENGTH, timeout)#012  File 
"/usr/share/vdsm/storage/remoteFileHandler.py", line 150, in 
_recvAll#012raise Timeout()#012Timeout
Jul 23 15:16:00 localhost vdsm scanDomains WARNING Metadata collection 
for domain path /rhev/data-center/mnt/rhevm:_opt_exports_export 
timedout#012Traceback (most recent call last):#012  File 
"/usr/share/vdsm/storage/fileSD.py", line 662, in 
collectMetaFiles#012sd.DOMAIN_META_DATA))#012  File 
"/usr/share/vdsm/storage/remoteFileHandler.py", line 297, in 
callCrabRPCFunction#012*args, **kwargs)#012  File 
"/usr/share/vdsm/storage/remoteFileHandler.py", line 184, in 
callCrabRPCFunction#012rawLength = 
self._recvAll(LENGTH_STRUCT_LENGTH, timeout)#012  File 
"/usr/share/vdsm/storage/remoteFileHandler.py", line 150, in 
_recvAll#012raise Timeout()#012Timeout
Jul 23 15:16:09 localhost vdsm scanDomains WARNING Metadata collection 
for domain path /rhev/data-center/mnt/rhevm:_opt_exports_export 
timedout#012Traceback (most recent call last):#012  File 
"/usr/share/vdsm/storage/fileSD.py", line 662, in 
collectMetaFiles#012sd.DOMAIN_META_DATA))#012  File 
"/usr/share/vdsm/storage/remoteFileHandler.py", line 297, in 
callCrabRPCFunction#012*args, **kwargs)#012  File 
"/usr/share/vdsm/storage/remoteFileHandler.py", line 184, in 
callCrabRPCFunction#012rawLength = 
self._recvAll(LENGTH_STRUCT_LENGTH, timeout)#012  File 
"/usr/share/vdsm/storage/remoteFileHandler.py", line 150, in 
_recvAll#012raise Timeout()#012Timeout



Has anyone met such a problem ?

Thank you

Fabien Carré


Ovirt version?
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Ovirt nodes lose up status

2014-07-23 Thread Fabien CARRE
Hello,
I am experiencing an issue on my current oVirt DC.
The setup is 2 machines running oVirt Nodes connected to a LaCie 12big Rack
Fibre 8; the nodes are connected to both controllers. This configuration
has worked fine for a couple of weeks.

Recently the nodes go from Up status to Non Operational. When I activate
them back they don't last more than 2 minutes.


/var/log/messages on both nodes shows the same errors:
Jul 23 15:14:58 localhost kernel: device-mapper: table: 253:0: adding
target device sdc caused an alignment inconsistency:
physical_block_size=4096, logical_block_size=512, alignment_offset=170496,
start=0
Jul 23 15:14:58 localhost kernel: device-mapper: table: 253:0: adding
target device sdc caused an alignment inconsistency:
physical_block_size=4096, logical_block_size=512, alignment_offset=170496,
start=0
Jul 23 15:14:58 localhost kernel: device-mapper: table: 253:1: adding
target device sdd caused an alignment inconsistency:
physical_block_size=262144, logical_block_size=512, alignment_offset=16384,
start=0
Jul 23 15:14:58 localhost kernel: device-mapper: table: 253:1: adding
target device sdd caused an alignment inconsistency:
physical_block_size=262144, logical_block_size=512, alignment_offset=16384,
start=0

Jul 23 15:15:58 localhost vdsm scanDomains WARNING Metadata collection for
domain path /rhev/data-center/mnt/rhevm:_opt_exports_export
timedout#012Traceback (most recent call last):#012  File
"/usr/share/vdsm/storage/fileSD.py", line 662, in collectMetaFiles#012
sd.DOMAIN_META_DATA))#012  File
"/usr/share/vdsm/storage/remoteFileHandler.py", line 297, in
callCrabRPCFunction#012*args, **kwargs)#012  File
"/usr/share/vdsm/storage/remoteFileHandler.py", line 184, in
callCrabRPCFunction#012rawLength = self._recvAll(LENGTH_STRUCT_LENGTH,
timeout)#012  File "/usr/share/vdsm/storage/remoteFileHandler.py", line
150, in _recvAll#012raise Timeout()#012Timeout
Jul 23 15:16:00 localhost vdsm scanDomains WARNING Metadata collection for
domain path /rhev/data-center/mnt/rhevm:_opt_exports_export
timedout#012Traceback (most recent call last):#012  File
"/usr/share/vdsm/storage/fileSD.py", line 662, in collectMetaFiles#012
sd.DOMAIN_META_DATA))#012  File
"/usr/share/vdsm/storage/remoteFileHandler.py", line 297, in
callCrabRPCFunction#012*args, **kwargs)#012  File
"/usr/share/vdsm/storage/remoteFileHandler.py", line 184, in
callCrabRPCFunction#012rawLength = self._recvAll(LENGTH_STRUCT_LENGTH,
timeout)#012  File "/usr/share/vdsm/storage/remoteFileHandler.py", line
150, in _recvAll#012raise Timeout()#012Timeout
Jul 23 15:16:09 localhost vdsm scanDomains WARNING Metadata collection for
domain path /rhev/data-center/mnt/rhevm:_opt_exports_export
timedout#012Traceback (most recent call last):#012  File
"/usr/share/vdsm/storage/fileSD.py", line 662, in collectMetaFiles#012
sd.DOMAIN_META_DATA))#012  File
"/usr/share/vdsm/storage/remoteFileHandler.py", line 297, in
callCrabRPCFunction#012*args, **kwargs)#012  File
"/usr/share/vdsm/storage/remoteFileHandler.py", line 184, in
callCrabRPCFunction#012rawLength = self._recvAll(LENGTH_STRUCT_LENGTH,
timeout)#012  File "/usr/share/vdsm/storage/remoteFileHandler.py", line
150, in _recvAll#012raise Timeout()#012Timeout
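
For reference, the export domain mount from the log above and the FC multipath
layout can be checked directly with something like this (a sketch only):

ls /rhev/data-center/mnt/rhevm:_opt_exports_export     # does the NFS export respond, or hang?
df -h /rhev/data-center/mnt/rhevm:_opt_exports_export
multipath -ll                                          # state of the paths to both controllers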


Has anyone met such a problem ?

Thank you

Fabien Carré
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users