[ovirt-users] Re: Rebranding Problems

2019-04-18 Thread Simone Tiraboschi
On Thu, Apr 18, 2019 at 7:31 PM  wrote:

> Hello, I rebranded my oVirt 4.3.2, but something went wrong and I
> overwrote the originals by mistake without realizing it. Please, I need the
> original "ovirt.brand" and "ovirt" directories as shipped with oVirt 4.3.2.
> Where can I get them so I can restore them?
>

You can check what you need with something like:
rpm -qf /usr/share/ovirt-engine/brands/ovirt.brand
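
Once you know the owning packages, reinstalling them lays the shipped files
back down; a rough sketch (assuming each directory is owned by a single
package - verify the package names on your own system first):

# reinstall whichever package owns the branding files
yum reinstall -y "$(rpm -qf --qf '%{NAME}\n' /usr/share/ovirt-engine/brands/ovirt.brand)"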


>
> Greetings
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/JWRBJFNSDZAERQ5OKNBOTNVQEGTWSOL3/


[ovirt-users] ovirt with kvm stand alone

2019-04-18 Thread igalvarez
I have 3 standalone KVM servers with no external storage. My question is: can 
I add these 3 KVM servers to oVirt without impacting the VMs already running on 
the KVM side?

Thanks.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/4IEDNMS6PLNI2O3M4GEVNWBN7FFSDNGY/


[ovirt-users] Rebranding Problems

2019-04-18 Thread siovelrm
Hello, I rebranded my oVirt 4.3.2, but something went wrong and I overwrote the 
originals by mistake without realizing it. Please, I need the original 
"ovirt.brand" and "ovirt" directories as shipped with oVirt 4.3.2. 
Where can I get them so I can restore them?

Greetings
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/2NDIUV56J5SVQ7WY57V3IJJAOOANAPVR/


[ovirt-users] Re: oVirt and NetApp NFS storage

2019-04-18 Thread Strahil
I know of 2 approaches:
1. Use the NFS hard mounting option - it never returns an error to sanlock and 
will keep waiting until NFS is recovered (I have never tried this one, but in 
theory it might work).
2. Change the default sanlock timeout (last time I tried that, it didn't work). 
You might need help from Sandro or Sahina for that option.
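
A rough sketch of option 1 outside of oVirt, just to illustrate the mount
behaviour (server, export and mount point below are placeholders; in oVirt the
option would go into the storage domain's additional mount options, if your
version exposes that field):

# "hard" makes the client retry indefinitely instead of returning EIO to
# sanlock while the NetApp takeover is in progress
mount -t nfs -o rw,hard,vers=3,timeo=600,retrans=6 \
    netapp.example.com:/vol/ovirt_data /mnt/ovirt_test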

Best Regards,
Strahil Nikolov
On Apr 18, 2019 11:45, klaasdem...@gmail.com wrote:
>
> Hi, 
>
> I got a question regarding oVirt and the support of NetApp NFS storage. 
> We have a MetroCluster for our virtual machine disks but a HA-Failover 
> of that (active IP gets assigned to another node) seems to produce 
> outages too long for sanlock to handle - that affects all VMs that have 
> storage leases. NetApp says a "worst case" takeover time is 120 seconds. 
> That would mean sanlock has already killed all VMs. Is anyone familiar 
> with how we could setup oVirt to allow such storage outages? Do I need 
> to use another type of storage for my oVirt VMs because that NFS 
> implementation is unsuitable for oVirt? 
>
>
> Greetings 
>
> Klaas 
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/HJ7L23WBOGKVBNZJVID5QLLHUJU5AZB3/


[ovirt-users] Re: Expand existing gluster storage in ovirt 4.2/4.3

2019-04-18 Thread Strahil
Recently it was discussed on the mailing lists, and a dev mentioned that 
distributed-replicated volumes are not officially supported, but some users do 
use them.

Even if not supported, it should still work without issues. If you prefer not to 
go this way, you can create a new 3 node cluster which will be fully supported.

Otherwise, if you go towards distributed-replicated volumes, you just need to 
provide another set of 3 bricks; once they are added, you can rebalance in order 
to distribute the files across the sets.
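
A rough sketch for one of your volumes (the brick paths below are made up - use 
your actual ones; the hostnames and volume name are the ones from your mail):

# add a second replica-3 set, making data1 a distributed-replicate 2 x 3 volume
gluster volume add-brick data1 replica 3 \
    host4.mydomain.com:/gluster_bricks/data1/brick \
    host5.mydomain.com:/gluster_bricks/data1/brick \
    host6.mydomain.com:/gluster_bricks/data1/brick
# then spread the existing files across both replica sets
gluster volume rebalance data1 start
gluster volume rebalance data1 status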

Here is an old thread that describes it for replica 2 volume types:
https://lists.gluster.org/pipermail/gluster-users/2011-February/006599.html

I guess I have confused you with my last e-mail, but that was not intentional.

Best Regards,
Strahil Nikolov
On Apr 17, 2019 17:13, adrianquint...@gmail.com wrote:
>
> Hi Strahil, 
> I had a 3 node Hyperconverged setup and added 3 new nodes to the cluster for 
> a total of 6 servers. I am now taking advantage of more compute power, 
> however the gluster storage part is what gets me. 
>
> Current Hyperconverged setup: 
> - host1.mydomain.com 
>   Bricks: 
>     engine 
>     data1 
>     vmstore1 
> - host2.mydomain.com 
>   Bricks: 
>     engine 
>     data1 
>     vmstore1 
> - host3.mydomain.com 
>   Bricks: 
>     engine 
>     data1 
>     vmstore1 
>
> - host4.mydomain.com 
>   Bricks: 
>     
> - host5.mydomain.com 
>   Bricks: 
>     
> - host6.mydomain.com 
>   Bricks: 
>
>
> As you can see from the above, the original first 3 servers are the only ones 
> that contain the gluster storage bricks, so storage redundancy is not set 
> across all 6 nodes. I think it is a lack of understanding on my end of how 
> ovirt and gluster integrate with one another, so I have a few questions: 
>
> How would I go about achieving storage redundancy across all nodes? 
> Do I need to configure gluster volumes manually through the OS CLI? 
> If I configure the failover storage scenario manually, will oVirt know about it? 
>
> Again, I know that the bricks must be added in sets of 3, and per the first 3 
> nodes my gluster setup looks like this (all done by the hyperconverged setup in 
> ovirt): 
> engine volume:    host1:brick1, host2:brick1, host3:brick1 
> data1 volume:  host1:brick2, host2:brick2, host3:brick2 
> vmstore1 volume:    host1:brick3, host2:brick3, host3:brick3 
>
> So after adding the 3 new servers I don't know if I need to do something 
> similar to the example in 
> https://medium.com/@tumballi/scale-your-gluster-cluster-1-node-at-a-time-62dd6614194e,
>  and if I do a similar change, will oVirt know about it? Will it be able to handle 
> it as hyperconverged? 
>     
> As I mentioned before, I normally see 3 node hyperconverged setup examples 
> with gluster but have not found one for a 6, 9 or 12 node cluster. 
>
> Thanks again.
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/U5T7TCSP4HFB25ZUKYLZVSNKST2NIIJB/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/CBVQOAXZUXFHZK2LYZZPYSISPTN37YGZ/


[ovirt-users] 4.3.x upgrade and issues with OVN

2019-04-18 Thread Charles Weber
Hi everyone,
We have had a pair of oVirt clusters at work since the 3.x days; I replaced
them last year with 4.x clusters on new machines. 4.2 worked great, but when I
upgraded to 4.3.2, and now 4.3.3, I immediately ran into host networking issues
resulting in hung migrations or VMs that could not be migrated at all.
Configuration notes:
1. I use a commercial star.x.x.x SSL cert on the engine, installed per the
Ovirt.org instructions
2. I have 2 untagged NICs assigned to ovirtmgmt and Public, both on the same
IP network range. I also have another set of VLANs tagged on various NICs. One
cluster uses HP blades with lots of NICs, the other uses Supermicro 1Us with
only 2 NICs.
3. I upgraded both engines and hosts from 4.2.8 to 4.3.2. I started seeing
hung migrations that were cleared by restarting vdsm.
4. All computers involved run current CentOS 7.6
5. Both clusters use iSCSI on a dedicated tagged storage VLAN; that seems fine.
6. I upgraded 1 engine and 1 host to 4.3.3 and things got worse. I have not
updated any hosts or the second engine since.
7. The 4 hosts on the 4.3.3 engine now show an out-of-sync error for my Public
network that refuses to clear.
8. Two oVirt 4.3.2 nodes had the following errors with 2 OVN ports. Here is an
example, perhaps related to the genev_sys_6081 error (see the diagnostic sketch
after this list):
ovn-d6eaa1-0: attempting to add tunnel port with same config as port
'ovn-f0f789-0' (::->137.187.160.13
ovn-877214-0: attempting to add tunnel port with same config as port
'ovn-483528-0'


9. I deleted and uninstalled all oVirt-related RPMs on one node, then did a
clean node install using the latest 4.3 release. Same errors.
10. I downloaded the latest node ISO, installed it on the same host, upgraded
it to the 4.3.3.1 node build and joined the cluster. The node installation has
no errors and it can migrate to my other hosts; all networks are in sync. Yet
the migration hangs, and restarting vdsmd clears the hung migration.
11. None of the other hosts can migrate VMs to the new node.
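
For reference, a minimal diagnostic sketch for the OVN side (not from the
original report; it assumes the standard Open vSwitch / OVN host packages):

ovs-vsctl show                  # list the geneve tunnel ports on br-int, look for duplicated remote IPs
ip -d link show genev_sys_6081  # check whether the geneve interface reported as missing exists at all
systemctl restart vdsmd         # the workaround noted above for hung migrations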


Here are excerpts from the log of a 4.3.2 node on the 4.3.3 engine.
Apr 17, 2019, 2:43:04 PM
Check for available updates on host BRCVN3 was completed successfully with
message 'ovirt-host, cockpit-ovirt-dashboard, vdsm-client, ovirt-release43,
ovirt-host, ovirt-hosted-engine-setup, vdsm-api, vdsm-jsonrpc,
ovirt-ansible-hosted-engine-setup, ovirt-host-dependencies and 12 others.
To see all packages check engine.log.'.
oVirt
Apr 17, 2019, 2:43:04 PM
Host BRCVN3 has available updates: ovirt-host, cockpit-ovirt-dashboard,
vdsm-client, ovirt-release43, ovirt-host, ovirt-hosted-engine-setup,
vdsm-api, vdsm-jsonrpc, ovirt-ansible-hosted-engine-setup,
ovirt-host-dependencies and 12 others. To see all packages check
engine.log..
oVirt
Apr 17, 2019, 2:27:39 PM
Host BRCVN3's following network(s) are not synchronized with their Logical
Network configuration: Public.
oVirt
Apr 16, 2019, 5:53:05 PM
Failed to sync all host BRCVN3 networks
c1ba631c-7be5-4be0-abbb-a37b6bb7d26d
oVirt
Apr 16, 2019, 5:53:05 PM
(1/1): Failed to apply changes on host BRCVN3. (User: admin@internal-authz)
455a260f
oVirt
Apr 16, 2019, 5:53:05 PM
VDSM BRCVN3 command HostSetupNetworksVDS failed: Bridge Public has
interfaces set([u'vnet19', u'vnet12', u'vnet10', u'vnet11', u'vnet16',
u'vnet17', u'vnet14', u'vnet15', u'vnet0', u'vnet2', u'vnet3', u'vnet4',
u'vnet7', u'vnet8', u'vnet9']) connected
oVirt
Apr 16, 2019, 5:52:59 PM
(1/1): Applying network's changes on host BRCVN3. (User:
admin@internal-authz)
455a260f
oVirt
Apr 16, 2019, 5:32:07 PM
Check for available updates on host BRCVN3 was completed successfully with
message 'no updates found.'.
oVirt
Apr 16, 2019, 5:16:40 PM
Host BRCVN3's following network(s) are not synchronized with their Logical
Network configuration: Public.
oVirt
Apr 16, 2019, 3:33:36 PM
Migration failed (VM: Hwebdev, Source: BRCVN3, Destination: BRCVN4).
bd21f294-0170-47d1-845e-c64c52abbab4
oVirt
Apr 16, 2019, 3:33:36 PM
Migration started (VM: Hwebdev, Source: BRCVN3, Destination: BRCVN4, User:
admin@internal-authz).
bd21f294-0170-47d1-845e-c64c52abbab4
oVirt
Apr 16, 2019, 2:49:36 PM
Migration failed (VM: Hwebdev, Source: BRCVN3, Destination: BRCVN4).
7c04f7a6-8f88-4c64-99f9-e621724fc7ff
oVirt
Apr 16, 2019, 2:49:36 PM
Migration started (VM: Hwebdev, Source: BRCVN3, Destination: BRCVN4, User:
admin@internal-authz).
7c04f7a6-8f88-4c64-99f9-e621724fc7ff
oVirt
Apr 16, 2019, 2:22:03 PM
VDSM BRCVN3 command Get Host Statistics failed: Internal JSON-RPC error:
{'reason': '[Errno 19] genev_sys_6081 is not present in the system'}
oVirt
Apr 16, 2019, 2:11:02 PM
VDSM BRCVN3 command Get Host Statistics failed: Internal JSON-RPC error:
{'reason': '[Errno 19] genev_sys_6081 is not present in the system'}
oVirt
Apr 16, 2019, 2:10:02 PM
VDSM BRCVN3 command Get Host Statistics failed: Internal JSON-RPC error:
{'reason': '[Errno 19] genev_sys_6081 is not present in the system'}
oVirt
Apr 16, 2019, 2:05:46 PM
VDSM BRCVN3 command Get Host Statistics failed: Internal JSON-RPC error:
{'reason': '[Errno 19] genev_sys_6081 is not present in the system'}
oVirt
Apr 16, 2019, 2:03:16 PM
VDSM BRCVN3 command Get

[ovirt-users] Re: oVirt and Ceph iSCSI: separating discovery auth / target auth ?

2019-04-18 Thread Matthias Leopold



Am 15.04.19 um 17:48 schrieb Matthias Leopold:

Hi,

I'm trying to use the Ceph iSCSI gateway with oVirt.
According to my tests with oVirt 4.3.2


...

* you cannot use an iSCSI gateway that has no discovery auth, but uses 
CHAP for targets


This seems to be a problem of the Ceph iSCSI gateway only; I didn't see it 
with a FreeNAS iSCSI appliance. I'll turn to the Ceph folks.


matthias

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/GSPYJQRQ22XYZQUMATGDUDZPQRSJ3DZN/


[ovirt-users] Re: Ovirt Host Replacement/Rebuild

2019-04-18 Thread judeelliot2
It required a dedicated Linux admin to get oVirt working by any stretch of the 
imagination. In any case, it is working now. There will be a ton of manual 
reading here to make sense of what settings must be set and what the different 
options even mean, let alone why you'd want them one way. 
https://www.assignmentland.co.uk/essay-writing-service
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/YS2PY4K352ZRRHCRRROGCURKKA2JT6H4/


[ovirt-users] Re: Importing existing GlusterFS

2019-04-18 Thread Zryty ADHD
I think about this but i need internat glusterfs for Openshift and something to 
manage it with graphic/web interface or similiar.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/PUX4TFQHCOIVBKFG5EZ5OSEO6VRXKPKW/


[ovirt-users] Re: oVirt and NetApp NFS storage

2019-04-18 Thread Ladislav Humenik
Hi, nope, no storage leases and no fencing at all (because of VLAN 
separation between mgmt and the RAC).


We have our own HA fencing mechanism in place, which triggers an action 
over the API when an alarm is raised in monitoring.
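
(For illustration only, the kind of API call involved - engine URL, credentials
and host id below are placeholders, and the actual automation is site-specific:)

curl -s -k -u admin@internal:PASSWORD \
     -H 'Content-Type: application/xml' -H 'Accept: application/xml' \
     -X POST -d '<action><fence_type>restart</fence_type></action>' \
     https://engine.example.com/ovirt-engine/api/hosts/HOST_UUID/fence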


HTH

On 18.04.19 13:12, klaasdem...@gmail.com wrote:

Hi,
are you using oVirt storage leases? You'll need them if you want to 
handle a completely unresponsive hypervisor (including fencing 
actions) in an HA setting. Those storage leases use sanlock. If you use 
sanlock, a VM gets killed if the lease is not renewable within a very 
short timeframe (60 seconds). That is what is killing the VMs during 
takeover. Before storage leases it seems to have worked because it 
would simply wait long enough for NFS to finish.


Greetings
Klaas

On 18.04.19 12:47, Ladislav Humenik wrote:
Hi, we have NetApp NFS with oVirt in production and never experienced 
an outage during takeover/giveback.
- the default oVirt mount options should also handle a short NFS 
timeout 
(rw,relatime,vers=3,rsize=65536,wsize=65536,namlen=255,soft,nolock,nosharecache,proto=tcp,timeo=600,retrans=6,sec=sys)
- but to tune it up a little you should set the disk timeout inside your 
guest VMs to at least 180, and then you are safe


example:
cat << EOF >> /etc/rc.d/rc.local
# Increasing the timeout value
for i in /sys/class/scsi_generic/*/device/timeout; do echo 180 > "\$i"; done
EOF



KR

On 18.04.19 10:45, klaasdem...@gmail.com wrote:

Hi,

I got a question regarding oVirt and the support of NetApp NFS 
storage. We have a MetroCluster for our virtual machine disks but a 
HA-Failover of that (active IP gets assigned to another node) seems 
to produce outages too long for sanlock to handle - that affects all 
VMs that have storage leases. NetApp says a "worst case" takeover 
time is 120 seconds. That would mean sanlock has already killed all 
VMs. Is anyone familiar with how we could setup oVirt to allow such 
storage outages? Do I need to use another type of storage for my 
oVirt VMs because that NFS implementation is unsuitable for oVirt?



Greetings

Klaas

--
Ladislav Humenik

System administrator / VI




--
Ladislav Humenik

System administrator / VI
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/7XBZSHLGDYWIYADMKHFFMWYTXNFHKHWM/


[ovirt-users] Re: oVirt and NetApp NFS storage

2019-04-18 Thread klaasdemter

Hi,
are you using oVirt storage leases? You'll need them if you want to 
handle a completely unresponsive hypervisor (including fencing actions) 
in an HA setting. Those storage leases use sanlock. If you use sanlock, a 
VM gets killed if the lease is not renewable within a very short 
timeframe (60 seconds). That is what is killing the VMs during takeover. 
Before storage leases it seems to have worked because it would simply 
wait long enough for NFS to finish.


Greetings
Klaas

On 18.04.19 12:47, Ladislav Humenik wrote:
Hi, we have NetApp NFS with oVirt in production and never experienced 
an outage during takeover/giveback.
- the default oVirt mount options should also handle a short NFS 
timeout 
(rw,relatime,vers=3,rsize=65536,wsize=65536,namlen=255,soft,nolock,nosharecache,proto=tcp,timeo=600,retrans=6,sec=sys)
- but to tune it up a little you should set the disk timeout inside your 
guest VMs to at least 180, and then you are safe


example:
cat << EOF >> /etc/rc.d/rc.local
# Increasing the timeout value
for i in /sys/class/scsi_generic/*/device/timeout; do echo 180 > "\$i"; done
EOF



KR

On 18.04.19 10:45, klaasdem...@gmail.com wrote:

Hi,

I got a question regarding oVirt and the support of NetApp NFS 
storage. We have a MetroCluster for our virtual machine disks but a 
HA-Failover of that (active IP gets assigned to another node) seems 
to produce outages too long for sanlock to handle - that affects all 
VMs that have storage leases. NetApp says a "worst case" takeover 
time is 120 seconds. That would mean sanlock has already killed all 
VMs. Is anyone familiar with how we could setup oVirt to allow such 
storage outages? Do I need to use another type of storage for my 
oVirt VMs because that NFS implementation is unsuitable for oVirt?



Greetings

Klaas

--
Ladislav Humenik

System administrator / VI


___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/M4PMK33ZQLLH6DCXLC3NNDUQDQJM3XTX/


[ovirt-users] Re: oVirt and NetApp NFS storage

2019-04-18 Thread Ladislav Humenik
Hi, we have NetApp NFS with oVirt in production and never experienced an 
outage during takeover/giveback.
- the default oVirt mount options should also handle a short NFS timeout 
(rw,relatime,vers=3,rsize=65536,wsize=65536,namlen=255,soft,nolock,nosharecache,proto=tcp,timeo=600,retrans=6,sec=sys)
- but to tune it up a little you should set the disk timeout inside your guest 
VMs to at least 180, and then you are safe


example:
cat << EOF >> /etc/rc.d/rc.local
# Increasing the timeout value
for i in /sys/class/scsi_generic/*/device/timeout; do echo 180 > "\$i"; done
EOF



KR

On 18.04.19 10:45, klaasdem...@gmail.com wrote:

Hi,

I got a question regarding oVirt and the support of NetApp NFS 
storage. We have a MetroCluster for our virtual machine disks but a 
HA-Failover of that (active IP gets assigned to another node) seems to 
produce outages too long for sanlock to handle - that affects all VMs 
that have storage leases. NetApp says a "worst case" takeover time is 
120 seconds. That would mean sanlock has already killed all VMs. Is 
anyone familiar with how we could setup oVirt to allow such storage 
outages? Do I need to use another type of storage for my oVirt VMs 
because that NFS implementation is unsuitable for oVirt?



Greetings

Klaas


--
Ladislav Humenik

System administrator / VI

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/6TLJ4KH4P5DH2RZFZUUKUYCY6SFQJHSN/


[ovirt-users] oVirt and NetApp NFS storage

2019-04-18 Thread klaasdemter

Hi,

I got a question regarding oVirt and the support of NetApp NFS storage. 
We have a MetroCluster for our virtual machine disks but a HA-Failover 
of that (active IP gets assigned to another node) seems to produce 
outages too long for sanlock to handle - that affects all VMs that have 
storage leases. NetApp says a "worst case" takeover time is 120 seconds. 
That would mean sanlock has already killed all VMs. Is anyone familiar 
with how we could setup oVirt to allow such storage outages? Do I need 
to use another type of storage for my oVirt VMs because that NFS 
implementation is unsuitable for oVirt?



Greetings

Klaas
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/TSJJKK5UG57CCFYUUCXXM3LYQJW2ODWZ/


[ovirt-users] what is the best solution for gluster?

2019-04-18 Thread Edoardo Mazza
Hi all,
I have 4 nodes with oVirt and Gluster and I must create a new Gluster
volume, but I would like to know which is the best solution for high
availability and good performance without wasting much disk space.
Thanks for any suggestions
Edoardo
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/Z5CHK3VIL5KDVAC7BMKB3SCKOWOX6G5B/


[ovirt-users] Re: Changing from thin provisioned to preallocated?

2019-04-18 Thread Karli Sjöberg

On 2019-04-17 13:31, Wesley Stewart wrote:
>  Appreciate the response!
>
> But there is plenty of space available.  About 250 GB free. My test
> VM is about 40 GB.
>
> On Tue, Apr 16, 2019, 1:12 AM Eyal Shenitzky  > wrote:
>
> Hi Wesley,
>
> Currently, there is no direct way to change a disk's allocation
> policy (thin-provision <-> preallocation).
>
> In your case, it sounds like your iSCSI storage is running out of
> space. Changing a disk from thin-provisioned to preallocated will
> consume *more* space on the storage; thin provisioning is more space
> efficient but impacts performance.
>
> On Mon, Apr 15, 2019 at 8:35 PM Wesley Stewart
> (wstewa...@gmail.com) wrote:
>
> I am currently running a ZFS server (Running RaidZ2) and I
> have been experimenting with NFS and shares to host my
> guests.  I am currently running oVirt 4.2.8 and using a RaidZ2
> NFS mount for the guests.
>
> ZFS is definitely pretty awful (at least in my
> experience so far) for hosting VMs.
>
ZFS is definitely pretty awful at keeping your data _unsafe_ : )

Add a piece of the SSDs as a mirrored SLOG and be both safe and fast. My
two cents.

/K
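
A minimal sketch of that (pool and device names below are made up; carve a
small partition on each SSD for the log):

# attach a mirrored SLOG (ZFS intent log) to the existing pool
zpool add tank log mirror /dev/disk/by-id/ata-SSD1-part1 /dev/disk/by-id/ata-SSD2-part1
zpool status tank   # the new vdev shows up under "logs"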

> I believe this is due to the synchronous writes being
> performed.  However, I think running an iSCSI target with
> synchronization disabled over a 10Gb connection might do the
> trick. (I have a couple of mirrored SSD drives for performance
> if I need it, but the RaidZ2 crawls for disk speed).
>
> When I tried to migrate a thin provisioned guest to iSCSI, I
> keep getting an "Out of disk space error" which I am pretty
> sure is due to the block style storage on the iSCSI target. 
> Is there a way to switch from Thin to Preallocated?  Or is my
> best bet to try and take a snapshot and clone this into a
> pre-allocated disk?
> ___
> Users mailing list -- users@ovirt.org 
> To unsubscribe send an email to users-le...@ovirt.org
> 
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/SZDBAGS6Y66SJATBCVDNSLWYTOYIXHJR/
>
>
>
> -- 
> Regards,
> Eyal Shenitzky
>
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/NQCOAS5NJKZ3XRIHGKDD75BVMTI5RCNG/


___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovi