[ovirt-users] Re: hosted-engine and GlusterFS on Vlan help

2019-05-14 Thread Hanson
Running iperf3 between node1 & node2, I can achieve almost 10gbps 
without ever going out to the gateway...


So port-to-port switching on the VLAN is working properly.


This must be a problem in the gluster settings? Where do I start 
troubleshooting here?
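
One way to narrow it down (a sketch; node2's address 10.0.3.12 and the 
ovirtmgmt interface name are assumptions based on the config quoted 
further down):

```shell
# If the kernel has a direct route, node-to-node traffic should stay on
# the switch; "via 10.0.3.1" in the output would explain the gateway hops.
ip route get 10.0.3.12

# Check which addresses/hostnames gluster actually registered its bricks on
# (bricks registered by a name that resolves outside the VLAN would be routed):
gluster volume info engine | grep Brick

# Watch gluster traffic while moving a disk; frames addressed to the
# gateway's MAC confirm routed (not switched) traffic:
tcpdump -e -n -i ovirtmgmt host 10.0.3.12 and portrange 24007-24008
```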



On 10/04/2016 10:38 AM, Hanson wrote:

Hi Guys,

I've converted my lab from 802.3ad bonding with bridged VLANs to one 
link with two VLAN bridges, and am now seeing traffic jump to the 
gateway when moving VMs/ISOs/etc.


802.3ad = node1>switch1>node2
802.1Q = node1>switch1>gateway>switch1>node2

I assume I've set up the VLANs the same way, though this time I used 
the GUI on the initial host install, creating the VLANs with eth0 as 
their parent.


Hosted-engine on deploy then creates ovirtmgmt on top of eth0.11 ...

The switch is tagged for VLANs 10 & 11, including a PVID of 11 for good 
measure. (Gluster is on VLAN 11.)


I'd expect node-to-node traffic to go from port to port as it did with 
802.3ad. What have I done wrong, or is it down to using the GUI 
initially?


This is how the current setup looks:

/var/lib/vdsm/Persistent/netconf/nets/ovirtmgmt:
{
"ipv6autoconf": false,
"nameservers": [],
"nic": "eth0",
"vlan": 11,
"ipaddr": "10.0.3.11",
"switch": "legacy",
"mtu": 1500,
"netmask": "255.255.255.0",
"dhcpv6": false,
"stp": false,
"bridged": true,
"gateway": "10.0.3.1",
"defaultRoute": true
}

/etc/sysconfig/network-scripts/ifcfg-ovirtmgmt:
# Generated by VDSM version 4.18.13-1.el7.centos
DEVICE=ovirtmgmt
TYPE=Bridge
DELAY=0
STP=off
ONBOOT=yes
IPADDR=10.0.3.11
NETMASK=255.255.255.0
GATEWAY=10.0.3.1
BOOTPROTO=none
DEFROUTE=yes
NM_CONTROLLED=no
IPV6INIT=no
VLAN_ID=11
MTU=1500

Thanks!!

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users



--
IMPORTANT!
This message has been scanned for viruses and phishing links.
However, it is your responsibility to evaluate the links and attachments you 
choose to click.
If you are uncertain, we always try to help.
Greetings helpd...@actnet.se




___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/HKKWE4RKO5J7RJTLKYRKPFZKKACSA55I/


[ovirt-users] hosted-engine and GlusterFS on Vlan help

2019-05-14 Thread Hanson

Hi Guys,

I've converted my lab from 802.3ad bonding with bridged VLANs to one 
link with two VLAN bridges, and am now seeing traffic jump to the 
gateway when moving VMs/ISOs/etc.


802.3ad = node1>switch1>node2
802.1Q = node1>switch1>gateway>switch1>node2

I assume I've set up the VLANs the same way, though this time I used 
the GUI on the initial host install, creating the VLANs with eth0 as 
their parent.


Hosted-engine on deploy then creates ovirtmgmt on top of eth0.11 ...

The switch is tagged for VLANs 10 & 11, including a PVID of 11 for good 
measure. (Gluster is on VLAN 11.)


I'd expect node-to-node traffic to go from port to port as it did with 
802.3ad. What have I done wrong, or is it down to using the GUI 
initially?


This is how the current setup looks:

/var/lib/vdsm/Persistent/netconf/nets/ovirtmgmt:
{
"ipv6autoconf": false,
"nameservers": [],
"nic": "eth0",
"vlan": 11,
"ipaddr": "10.0.3.11",
"switch": "legacy",
"mtu": 1500,
"netmask": "255.255.255.0",
"dhcpv6": false,
"stp": false,
"bridged": true,
"gateway": "10.0.3.1",
"defaultRoute": true
}

/etc/sysconfig/network-scripts/ifcfg-ovirtmgmt:
# Generated by VDSM version 4.18.13-1.el7.centos
DEVICE=ovirtmgmt
TYPE=Bridge
DELAY=0
STP=off
ONBOOT=yes
IPADDR=10.0.3.11
NETMASK=255.255.255.0
GATEWAY=10.0.3.1
BOOTPROTO=none
DEFROUTE=yes
NM_CONTROLLED=no
IPV6INIT=no
VLAN_ID=11
MTU=1500

Thanks!!

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/IV23EDGWLQD33AMM5Y3H2PFO2CCNE7X6/


[ovirt-users] Re: HE + Gluster : Engine corrupted?

2018-07-03 Thread Hanson Turner

Hi Ravishankar,

This doesn't look like split-brain...

[root@ovirtnode1 ~]# gluster volume heal engine info
Brick ovirtnode1:/gluster_bricks/engine/engine
Status: Connected
Number of entries: 0

Brick ovirtnode3:/gluster_bricks/engine/engine
Status: Connected
Number of entries: 0

Brick ovirtnode4:/gluster_bricks/engine/engine
Status: Connected
Number of entries: 0

[root@ovirtnode1 ~]# gluster volume heal engine info split-brain
Brick ovirtnode1:/gluster_bricks/engine/engine
Status: Connected
Number of entries in split-brain: 0

Brick ovirtnode3:/gluster_bricks/engine/engine
Status: Connected
Number of entries in split-brain: 0

Brick ovirtnode4:/gluster_bricks/engine/engine
Status: Connected
Number of entries in split-brain: 0

[root@ovirtnode1 ~]# gluster volume info engine
Volume Name: engine
Type: Replicate
Volume ID: c8dc1b04-bc25-4e97-81bb-4d94929918b1
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: ovirtnode1:/gluster_bricks/engine/engine
Brick2: ovirtnode3:/gluster_bricks/engine/engine
Brick3: ovirtnode4:/gluster_bricks/engine/engine

Thanks,

Hanson


On 07/02/2018 07:09 AM, Ravishankar N wrote:




On 07/02/2018 02:15 PM, Krutika Dhananjay wrote:

Hi,

So it seems some of the files in the volume have mismatching gfids. I 
see the following logs from 15th June, ~8pm EDT:



...
...
[2018-06-16 04:00:10.264690] E [MSGID: 108008] 
[afr-self-heal-common.c:335:afr_gfid_split_brain_source] 
0-engine-replicate-0: Gfid mismatch detected for 
/hosted-engine.lockspace>, 
6bbe6097-8520-4a61-971e-6e30c2ee0abe on engine-client-2 and 
ef21a706-41cf-4519-8659-87ecde4bbfbf on engine-client-0.


You can use 
https://docs.gluster.org/en/latest/Troubleshooting/resolving-splitbrain/ 
(see 3. Resolution of split-brain using gluster CLI).
Nit: The doc says in the beginning that gfid split-brain cannot be 
fixed automatically but newer releases do support it, so the methods 
in section 3 should work to solve gfid split-brains.
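
For reference, the CLI-based resolution that doc describes looks 
roughly like this (the volume and file path are taken from the logs 
below; which brick to use as the source is an assumption — pick the 
brick whose copy you trust):

```shell
# All bricks must be up first (the logs below complain about exactly this):
gluster volume status engine

# Resolve the gfid split-brain by choosing one brick's copy as the source;
# the brick path is an example from this thread's volume layout:
gluster volume heal engine split-brain source-brick \
    ovirtnode1:/gluster_bricks/engine/engine \
    /c65e03f0-d553-4d5d-ba4f-9d378c153b9b/ha_agent/hosted-engine.lockspace

# Or let gluster pick the source by policy instead:
gluster volume heal engine split-brain latest-mtime \
    /c65e03f0-d553-4d5d-ba4f-9d378c153b9b/ha_agent/hosted-engine.lockspace
```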


[2018-06-16 04:00:10.265861] W [fuse-bridge.c:540:fuse_entry_cbk] 
0-glusterfs-fuse: 4411: LOOKUP() 
/c65e03f0-d553-4d5d-ba4f-9d378c153b9b/ha_agent/hosted-engine.lockspace 
=> -1 (Input/output error)
[2018-06-16 04:00:11.522600] E [MSGID: 108008] 
[afr-self-heal-common.c:212:afr_gfid_split_brain_source] 
0-engine-replicate-0: All the bricks should be up to resolve the gfid 
split barin

This is a concern. For the commands to work, all 3 bricks must be online.
Thanks,
Ravi
[2018-06-16 04:00:11.522632] E [MSGID: 108008] 
[afr-self-heal-common.c:335:afr_gfid_split_brain_source] 
0-engine-replicate-0: Gfid mismatch detected for 
/hosted-engine.lockspace>, 
6bbe6097-8520-4a61-971e-6e30c2ee0abe on engine-client-2 and 
ef21a706-41cf-4519-8659-87ecde4bbfbf on engine-client-0.
[2018-06-16 04:00:11.523750] W [fuse-bridge.c:540:fuse_entry_cbk] 
0-glusterfs-fuse: 4493: LOOKUP() 
/c65e03f0-d553-4d5d-ba4f-9d378c153b9b/ha_agent/hosted-engine.lockspace 
=> -1 (Input/output error)
[2018-06-16 04:00:12.864393] E [MSGID: 108008] 
[afr-self-heal-common.c:212:afr_gfid_split_brain_source] 
0-engine-replicate-0: All the bricks should be up to resolve the gfid 
split barin
[2018-06-16 04:00:12.864426] E [MSGID: 108008] 
[afr-self-heal-common.c:335:afr_gfid_split_brain_source] 
0-engine-replicate-0: Gfid mismatch detected for 
/hosted-engine.lockspace>, 
6bbe6097-8520-4a61-971e-6e30c2ee0abe on engine-client-2 and 
ef21a706-41cf-4519-8659-87ecde4bbfbf on engine-client-0.
[2018-06-16 04:00:12.865392] W [fuse-bridge.c:540:fuse_entry_cbk] 
0-glusterfs-fuse: 4575: LOOKUP() 
/c65e03f0-d553-4d5d-ba4f-9d378c153b9b/ha_agent/hosted-engine.lockspace 
=> -1 (Input/output error)
[2018-06-16 04:00:18.716007] W [fuse-bridge.c:540:fuse_entry_cbk] 
0-glusterfs-fuse: 4657: LOOKUP() 
/c65e03f0-d553-4d5d-ba4f-9d378c153b9b/ha_agent/hosted-engine.lockspace 
=> -1 (Input/output error)
[2018-06-16 04:00:20.553365] W [fuse-bridge.c:540:fuse_entry_cbk] 
0-glusterfs-fuse: 4739: LOOKUP() 
/c65e03f0-d553-4d5d-ba4f-9d378c153b9b/ha_agent/hosted-engine.lockspace 
=> -1 (Input/output error)
[2018-06-16 04:00:21.771698] W [fuse-bridge.c:540:fuse_entry_cbk] 
0-glusterfs-fuse: 4821: LOOKUP() 
/c65e03f0-d553-4d5d-ba4f-9d378c153b9b/ha_agent/hosted-engine.lockspace 
=> -1 (Input/output error)
[2018-06-16 04:00:23.871647] W [fuse-bridge.c:540:fuse_entry_cbk] 
0-glusterfs-fuse: 4906: LOOKUP() 
/c65e03f0-d553-4d5d-ba4f-9d378c153b9b/ha_agent/hosted-engine.lockspace 
=> -1 (Input/output error)
[2018-06-16 04:00:25.034780] W [fuse-bridge.c:540:fuse_entry_cbk] 
0-glusterfs-fuse: 4987: LOOKUP() 
/c65e03f0-d553-4d5d-ba4f-9d378c153b9b/ha_agent/hosted-engine.lockspace 
=> -1 (Input/output error)

...
...


Adding Ravi, who works on the replicate component, to help resolve the 
mismatches.


-Krutika


On Mon, Jul 2, 2018 at 12:27 PM, Krutika Dhananjay 
mailto:kdhan...@redhat.com>> wrote:


Hi,

Sorry, I was out sick on Friday. I am looking into the logs. Will
get 

[ovirt-users] HE + Gluster : Engine corrupted?

2018-06-20 Thread Hanson Turner

Hi Benny,

Who should I be reaching out to for help with a gluster based hosted 
engine corruption?



--== Host 1 status ==--

conf_on_shared_storage : True
Status up-to-date      : True
Hostname               : ovirtnode1.abcxyzdomains.net
Host ID                : 1
Engine status          : {"reason": "failed liveliness check",
                          "health": "bad", "vm": "up", "detail": "Up"}
Score                  : 3400
stopped                : False
Local maintenance      : False
crc32                  : 92254a68
local_conf_timestamp   : 115910
Host timestamp         : 115910
Extra metadata (valid at timestamp):
    metadata_parse_version=1
    metadata_feature_version=1
    timestamp=115910 (Mon Jun 18 09:43:20 2018)
    host-id=1
    score=3400
    vm_conf_refresh_time=115910 (Mon Jun 18 09:43:20 2018)
    conf_on_shared_storage=True
    maintenance=False
    state=GlobalMaintenance
    stopped=False


When I VNC into my HE, all I get is:
Probing EDD (edd=off to disable)... ok


So that's why it's failing the liveliness check... I cannot get the 
screen on the HE to change short of Ctrl-Alt-Del, which will reboot the HE.

I do have backups for the HE that are/were run on a nightly basis.

If the cluster was left alone, the HE VM would bounce from machine to 
machine trying to boot; this is why the cluster is in maintenance mode.
One of the nodes was down for a period of time and was brought back 
sometime during the night, which is when the automated backup kicks in, 
and the HE started bouncing around. I got nearly 1000 emails.


This seems to be the same error (but may not be the same cause) as 
listed here:

https://bugzilla.redhat.com/show_bug.cgi?id=1569827

Thanks,

Hanson

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/3NLA2URX3KN44FGFUVV4N5EJBPICABHH/


[ovirt-users] Re: Ovirt + Gluster : How do I gain access to the file systems of the VMs

2018-06-19 Thread Hanson Turner

Hi Guys,

I've an answer... Here's how I did it...

First, I needed kpartx ... so

#apt-get install kpartx

Then set up a loopback device for the raw hdd image

#losetup /dev/loop4 [IMAGE FILE]

#kpartx -a /dev/loop4

This allowed me to mount the various partitions inside the VM image. 
From there you can modify the configs, make backups, etc.
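
To complete the picture, the mount and teardown steps might look like 
this (the partition name is an assumption — check lsblk after kpartx; 
if the guest used LVM you'd activate the VG instead of mounting a 
partition directly):

```shell
# kpartx creates /dev/mapper/loop4p1, loop4p2, ... one per partition
lsblk /dev/loop4

# Mount the first partition read-only for safety
mkdir -p /mnt/vmdisk
mount -o ro /dev/mapper/loop4p1 /mnt/vmdisk

# ... copy files out, inspect configs ...

# Tear down in reverse order
umount /mnt/vmdisk
kpartx -d /dev/loop4
losetup -d /dev/loop4
```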


Thanks,

Hanson


On 06/19/2018 09:31 AM, Hanson Turner wrote:


Hi Sahina,

Thanks for your reply. I can copy the files off without issue, using 
either a remote gluster mount, or just using the node and scp'ing the 
files to where I want them.


I was asking how to mount the VM's disk in a way that lets me 
pull/modify files that are on the HDD of the VM.


Thanks,

Hanson


On 06/19/2018 05:02 AM, Sahina Bose wrote:



On Mon, Jun 18, 2018 at 5:12 PM, Hanson Turner 
mailto:han...@andrewswireless.net>> wrote:


Hi Guys,

My engine has corrupted, and while waiting for help, I'd like to
see if I can pull some data off the VMs to repurpose back onto
dedicated hardware.

Our setup is/was a gluster-based storage system for VMs. The
gluster data storage I'm assuming is okay; I think the hosted
engine is hosed and needs to be restored, but that's another thread.

I can copy the raw disk file off of the gluster data domain.
What's the best way to mount it short of importing it into
another gluster domain?

With vmware, we can grab the disk file and move it from server to
server without issue. You can mount and explore contents with
workstation.


If you want to copy the image file, you can mount the gluster volume 
and copy it, using:

mount -t glusterfs <server>:/<volname> <mountpoint>


What do we have available to us for ovirt?

Thanks,

___
Users mailing list -- users@ovirt.org <mailto:users@ovirt.org>
To unsubscribe send an email to users-le...@ovirt.org
<mailto:users-le...@ovirt.org>
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
<https://www.ovirt.org/site/privacy-policy/>
oVirt Code of Conduct:
https://www.ovirt.org/community/about/community-guidelines/
<https://www.ovirt.org/community/about/community-guidelines/>
List Archives:

https://lists.ovirt.org/archives/list/users@ovirt.org/message/2AF5K2JERYH63K25XKA4FFP4QQDZSVWM/

<https://lists.ovirt.org/archives/list/users@ovirt.org/message/2AF5K2JERYH63K25XKA4FFP4QQDZSVWM/>






___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/5ZBEM7WJ5SGRIQLC53GKZSFXYMEOXLRW/


[ovirt-users] Ovirt + Gluster : How do I gain access to the file systems of the VMs

2018-06-18 Thread Hanson Turner

Hi Guys,

My engine has corrupted, and while waiting for help, I'd like to see if 
I can pull some data off the VMs to repurpose back onto dedicated 
hardware.


Our setup is/was a gluster-based storage system for VMs. The gluster 
data storage I'm assuming is okay; I think the hosted engine is hosed 
and needs to be restored, but that's another thread.


I can copy the raw disk file off of the gluster data domain. What's the 
best way to mount it short of importing it into another gluster domain?


With vmware, we can grab the disk file and move it from server to server 
without issue. You can mount and explore contents with workstation.


What do we have available to us for ovirt?

Thanks,

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/2AF5K2JERYH63K25XKA4FFP4QQDZSVWM/


[ovirt-users] Re: Gluster not syncing changes between nodes for engine

2018-06-18 Thread Hanson Turner

Ok,

So removing the one downed node cleared all the non-syncing issues.

In the meantime, while that one node was coming back, it seems to have 
corrupted the hosted-engine VM.


Remote-Viewer nodeip:5900, the console shows:

Probing EDD (edd=off to disable)... ok


It doesn't matter which of the three remaining nodes tries to launch 
the engine; it comes up the same.


I had to set the cluster to global maintenance, as the engine kept 
trying to start on different nodes.
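
For anyone following along, the maintenance toggle is (assuming a 
standard hosted-engine install):

```shell
# Stop the HA agents from restarting the engine VM while debugging
hosted-engine --set-maintenance --mode=global

# Verify: the status output should flag GLOBAL MAINTENANCE mode
hosted-engine --vm-status

# Later, to resume normal HA behaviour
hosted-engine --set-maintenance --mode=none
```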


I do have backups run nightly so I can restore the engine VM; however, 
I don't see a straightforward method of restoring the engine VM in a 
hosted-engine gluster setup.



Can any of the Red Hat folks help?


Here's the hosted-engine --vm-status

--== Host 1 status ==--

conf_on_shared_storage : True
Status up-to-date      : True
Hostname               : ovirtnode1.abcxyzdomains.net
Host ID                : 1
Engine status          : {"reason": "failed liveliness check",
                          "health": "bad", "vm": "up", "detail": "Up"}
Score                  : 3400
stopped                : False
Local maintenance      : False
crc32                  : 92254a68
local_conf_timestamp   : 115910
Host timestamp         : 115910
Extra metadata (valid at timestamp):
    metadata_parse_version=1
    metadata_feature_version=1
    timestamp=115910 (Mon Jun 18 09:43:20 2018)
    host-id=1
    score=3400
    vm_conf_refresh_time=115910 (Mon Jun 18 09:43:20 2018)
    conf_on_shared_storage=True
    maintenance=False
    state=GlobalMaintenance
    stopped=False

---clipped---




On 06/16/2018 02:23 PM, Hanson Turner wrote:

Hi Guys,

I've got 60-some-odd files for each of the nodes in the cluster that 
don't seem to be syncing.


Running a volume heal engine full reports success. Running volume heal 
engine info reports the same files, which don't seem to be syncing.


Running a volume heal engine info split-brain, there's nothing listed 
in split-brain.


Peers show as connected. Gluster volumes are started/up.

Hosted-engine --vm-status reports :
The hosted engine configuration has not been retrieved from shared 
storage. Please ensure that ovirt-ha-agent is running and the storage 
server is reachable.


This is leaving the cluster in an engine-down state with all VMs down...

Thanks,
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/YPNWM222K2U7NX32CIME7KINWPCLBSCR/

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/DYWVSXDP3BZGV5XKBZS3RTYN4H6OZVRR/


[ovirt-users] Gluster not syncing changes between nodes for engine

2018-06-16 Thread Hanson Turner

Hi Guys,

I've got 60-some-odd files for each of the nodes in the cluster that 
don't seem to be syncing.


Running a volume heal engine full reports success. Running volume heal 
engine info reports the same files, which don't seem to be syncing.


Running a volume heal engine info split-brain, there's nothing listed in 
split-brain.


Peers show as connected. Gluster volumes are started/up.

Hosted-engine --vm-status reports :
The hosted engine configuration has not been retrieved from shared 
storage. Please ensure that ovirt-ha-agent is running and the storage 
server is reachable.


This is leaving the cluster in an engine-down state with all VMs down...
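
First things to check for that "configuration has not been retrieved" 
error (a sketch; service and log names are the standard hosted-engine 
ones):

```shell
# Are the HA services actually running on the host?
systemctl status ovirt-ha-agent ovirt-ha-broker

# The agent log usually says why the shared config can't be read:
tail -n 50 /var/log/ovirt-hosted-engine-ha/agent.log

# Confirm the engine storage domain is actually mounted on this host:
grep -E 'glusterfs|engine' /proc/mounts
```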

Thanks,
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/YPNWM222K2U7NX32CIME7KINWPCLBSCR/


[ovirt-users] Grubx64.efi missing from boot partition

2018-06-12 Thread Hanson Turner

Hi Guys,

We went to physically move a node and have found the node will not boot 
successfully anymore.


The error coming up is: failed to open / file not found:

\EFI\BOOT\grubx64.efi


This file is found in \EFI\centos\grubx64.efi.

I copied it to \EFI\BOOT\ and got the machine to boot; however, it 
has no working networking.
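
A more durable fix than copying the file is to recreate the firmware 
boot entry so it points at the CentOS loader directly (the disk and 
partition below are assumptions — adjust to wherever your EFI system 
partition actually lives):

```shell
# List current firmware boot entries to see what's broken or missing
efibootmgr -v

# Create an entry pointing at the CentOS loader; assumes the ESP is
# partition 1 of /dev/sda
efibootmgr -c -d /dev/sda -p 1 -L "CentOS" -l '\EFI\centos\grubx64.efi'
```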


Modifying /etc/sysconfig/ifcfg-xyzabz123 appropriately restored 
networking; however, the node cannot be resumed/activated.


This is probably because the blade reports the following interfaces:

eno1, eno2, eno3, eno4

When originally deployed, they were eno1,eno2, eth0, eth1.

The engine still sees the host as having the eno+eth combo.
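
To compare what the kernel sees now against what the engine recorded, 
and to pin the old names back, one common EL7 approach is a udev rule 
keyed on MAC (the MAC below is a placeholder; booting with 
net.ifnames=0 biosdevname=0 may also be needed for ethX names to 
stick):

```shell
# What the kernel names the NICs now (vs. the engine's eno+eth combo)
ip -br link

# Pin an old name back by MAC address (placeholder MAC shown)
cat <<'EOF' > /etc/udev/rules.d/70-persistent-net.rules
SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="aa:bb:cc:dd:ee:01", NAME="eth0"
EOF
```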

Any ideas guys?

Thanks!


___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/R7QVPQDVBKSVIHA5KZF4AJ63R5BMGJEV/


Re: [ovirt-users] HostedEngine with HA

2018-03-22 Thread Hanson Turner

Hi Carlos,

If you're using shared storage across the nodes/hypervisors, the 
HostedEngine VM should already be HA.


Sometimes this means the engine drops briefly while it restarts on 
another node. Usually when this happens the rest of the nodes running 
VMs stay up, and things resync when the downed node comes back.


I.e. the only one to lose pings is the HostedEngine, unless of course 
there were VMs on the same node, in which case, if they were HA VMs, 
they will be restarted/resumed depending on your settings.



Thanks,

Hanson


On 08/17/2016 05:06 AM, Carlos Rodrigues wrote:

Anyone can help me to build HA on HostedEngine VM?

How can i guarantee that if host with HostedEngine VM goes down, the
HostedEngine VM moves to another host?

Regards,
Carlos Rodrigues

On Tue, 2016-08-16 at 11:53 +0100, Carlos Rodrigues wrote:

On Sun, 2016-08-14 at 14:22 +0300, Roy Golan wrote:



On 12 August 2016 at 20:23, Carlos Rodrigues <c...@eurotux.com>
wrote:

Hello,

I have one cluster with two hosts with power management correctly
configured and one virtual machine with HostedEngine over shared
storage with FiberChannel.

When i shutdown the network of host with HostedEngine VM,  it
should be
possible the HostedEngine VM migrate automatically to another
host?


migrate on which network?
  

What is the expected behaviour on this HA scenario?

After a few minutes your vm will be shutdown by the High
Availability
agent, as it can't see network, and started on another host.


I'm testing this scenario and after shutdown network, it should be
expected that agent shutdown ha and started on another host, but
after
couple minutes nothing happens and on host with network we getting
the
following messages:

Aug 16 11:44:08 ied-blade11.install.eurotux.local ovirt-ha-
agent[2779]:
ovirt-ha-agent
ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config ERROR
Unable to get vm.conf from OVF_STORE, falling back to initial vm.conf

I think the HA agent its trying to get vm configuration but some how
it
can't get vm.conf to start VM.

Regards,
Carlos Rodrigues




Regards,

--
Carlos Rodrigues

Engenheiro de Software Sénior

Eurotux Informática, S.A. | www.eurotux.com
(t) +351 253 680 300 (m) +351 911 926 110

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users





Re: [ovirt-users] Adding host to hosted-engine /w gluster cluster. (On ovirt Node 4.2.1.1)

2018-03-21 Thread Hanson Turner

Hi Sahina,

On the fourth node, I've found 
/var/log/glusterfs/rhev-data-center-mnt-glusterSD-ovirtnode1.core\:_engine.log 
... is this the engine.log you're referring to or do you want one from 
the hosted engine?


I actually do want to go replica 5. Most VMs it runs are small (1 
core, 1 GB RAM, 8 GB HDD) and HA is needed. I'd like a bigger critical 
margin than a single node failing.


As far as the repos, it's a straight oVirt Node ISO install. I think 
it's Node 4.2.0, which was yum-updated to 4.2.1.1.
When I installed 4.0 I installed on top of CentOS. This round I went 
straight with the Node OS for simplicity of updating.


I can manually restart gluster from the CLI; peer and volume status 
show no peers or volumes.
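
If the node really has an empty gluster config, re-joining it from an 
existing peer looks roughly like this (hostnames are this thread's; 
oVirt normally drives this itself during host deploy, so treat it as a 
manual fallback):

```shell
# From an existing cluster member, e.g. ovirtnode1:
gluster peer probe ovirtnode4
gluster peer status

# Then check that the volumes become visible on the new node:
gluster volume status engine
```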


One thing of note: the networking is still as set up from the node 
install. I cannot change the networking info from the oVirt 
GUI/dashboard; the host goes unresponsive and then another host power 
cycles it.


Thanks,
Hanson

On 03/21/2018 06:12 AM, Sahina Bose wrote:



On Tue, Mar 20, 2018 at 9:41 PM, Hanson Turner 
<han...@andrewswireless.net <mailto:han...@andrewswireless.net>> wrote:


Hi Guys,

I've a 3 machine pool running gluster with replica 3 and want to
add two more machines.

This would change to a replica 5...


Adding 2 more nodes to cluster will not change it to a replica 5. 
replica 3 is a configuration on the gluster volume. I assume you don't 
need a replica 5, but just to add more nodes (and possibly new gluster 
volumes) to the cluster?



In ovirt 4.0, I'd done everything manually. No problem there.

In ovirt 4.2, I'd used the wizard for the hosted-engine. It looks
like the fourth node has been added to the pool but will not go
active. It complains gluster isn't running (which I've not
manually configured /dev/sdb for gluster). Host install+deploy
fails. Host can go into maintenance w/o issue. (Meaning the host
has been added to the cluster, but isn't operational)


Are the repos configured correctly on the new nodes? Does the oVirt 
cluster where the nodes are being added have "Enable Gluster Service" 
enabled?



What do I need to do to get the node up and running proper with
gluster syncing properly? Manually restarting gluster, tells me
there's no peers and no volumes.

Do we have a wizard for this too? Or do I need to go find the
setup scripts and configure hosts 4 + 5 manually and run the
deploy again?


The host addition flow should take care of installing gluster.
Can you share the engine log from when the host was added to when it's 
reported non-operational?




    Thanks,

Hanson

___
Users mailing list
Users@ovirt.org <mailto:Users@ovirt.org>
http://lists.ovirt.org/mailman/listinfo/users
<http://lists.ovirt.org/mailman/listinfo/users>







[ovirt-users] Adding host to hosted-engine /w gluster cluster. (On ovirt Node 4.2.1.1)

2018-03-20 Thread Hanson Turner

Hi Guys,

I've a 3 machine pool running gluster with replica 3 and want to add two 
more machines.


This would change to a replica 5...

In ovirt 4.0, I'd done everything manually. No problem there.

In ovirt 4.2, I'd used the wizard for the hosted-engine. It looks like 
the fourth node has been added to the pool but will not go active. It 
complains gluster isn't running (which I've not manually configured 
/dev/sdb for gluster). Host install+deploy fails. Host can go into 
maintenance w/o issue. (Meaning the host has been added to the cluster, 
but isn't operational)


What do I need to do to get the node up and running proper with gluster 
syncing properly? Manually restarting gluster, tells me there's no peers 
and no volumes.


Do we have a wizard for this too? Or do I need to go find the setup 
scripts and configure hosts 4 + 5 manually and run the deploy again?



Thanks,

Hanson

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Official Hyperconverged Gluster oVirt upgrade procedure?

2018-01-23 Thread Hanson Turner

Hi Guys,

What's the official upgrade procedure now? Has anything changed?

What's the upgrade path from 4.0.4 to the current stable? 
engine-upgrade-check returns no upgrades needed, and that's probably 
correct, as it looks like the repos are from 4.0.


Trying yum update on the node/hosts fails on gluster, and I'm leery of 
updating the repo to pull the newer gluster packages because of 
potential issues with the other two hosts being on older glusters.


Thanks!


On 01/26/2017 08:26 AM, Simone Tiraboschi wrote:



On Thu, Jan 26, 2017 at 10:16 AM, Ralf Schenk <r...@databay.de 
<mailto:r...@databay.de>> wrote:


Hello,

i would appreciate any hint, too. I'm on 4.0.6 on Centos 7.3 since
yesterday but I'm frightened what I need to do to upgrade and be
able to manage gluster from GUI then.


1. Setting global maintenance mode,
2. upgrading the engine on the engine VM as for a regular engine,
3. exit the global maintenance mode
4. upgrade the hosts (one at a time!!!) from the engine

should be enough
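
Sketched as commands (engine VM vs. host noted in the comments; the 
package glob follows the usual oVirt engine upgrade step, but check the 
release notes for your target version):

```shell
# 1. On any HA host: enter global maintenance
hosted-engine --set-maintenance --mode=global

# 2. On the engine VM: upgrade as for a regular engine
yum update ovirt\*setup\*
engine-setup

# 3. Back on a host: exit global maintenance
hosted-engine --set-maintenance --mode=none

# 4. Upgrade each host, one at a time, from the engine UI
#    (maintenance -> update -> activate, then the next host)
```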

Bye


Am 25.01.2017 um 21:32 schrieb Hanson:

Hi Guys,

Just wondering if we have an updated manual or whats the current
procedure for upgrading the nodes in a hyperconverged ovirt
gluster pool?

Ie Nodes run 4.0 oVirt, as well as GlusterFS, and hosted-engine
running in a gluster storage domain.

Put node in maintenance mode and disable glusterfs from ovirt
gui, run yum update?

Thanks!

___
Users mailing list
Users@ovirt.org <mailto:Users@ovirt.org>
http://lists.ovirt.org/mailman/listinfo/users
<http://lists.ovirt.org/mailman/listinfo/users>


-- 



*Ralf Schenk*
fon +49 (0) 24 05 / 40 83 70 <tel:+49%202405%20408370>
fax +49 (0) 24 05 / 40 83 759 <tel:+49%202405%204083759>
mail *r...@databay.de* <mailto:r...@databay.de>

*Databay AG*
Jens-Otto-Krag-Straße 11
D-52146 Würselen
*www.databay.de* <http://www.databay.de>

Sitz/Amtsgericht Aachen • HRB:8437 • USt-IdNr.: DE 210844202
Vorstand: Ralf Schenk, Dipl.-Ing. Jens Conze, Aresch Yavari,
Dipl.-Kfm. Philipp Hermanns
Aufsichtsratsvorsitzender: Wilhelm Dohmen



___
Users mailing list
Users@ovirt.org <mailto:Users@ovirt.org>
http://lists.ovirt.org/mailman/listinfo/users
<http://lists.ovirt.org/mailman/listinfo/users>




___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users




Re: [ovirt-users] Official Hyperconverged Gluster oVirt upgrade procedure?

2018-01-17 Thread Hanson Turner

Hi Guys,

What's the official upgrade procedure now? Has anything changed?

What's the upgrade path from 4.0.4 to the current stable? 
engine-upgrade-check returns no upgrades needed, and that's probably 
correct, as it looks like the repos are from 4.0.


Thanks!


On 01/26/2017 08:26 AM, Simone Tiraboschi wrote:



On Thu, Jan 26, 2017 at 10:16 AM, Ralf Schenk <r...@databay.de 
<mailto:r...@databay.de>> wrote:


Hello,

i would appreciate any hint, too. I'm on 4.0.6 on Centos 7.3 since
yesterday but I'm frightened what I need to do to upgrade and be
able to manage gluster from GUI then.


1. Setting global maintenance mode,
2. upgrading the engine on the engine VM as for a regular engine,
3. exit the global maintenance mode
4. upgrade the hosts (one at a time!!!) from the engine

should be enough

Bye


Am 25.01.2017 um 21:32 schrieb Hanson:

Hi Guys,

Just wondering if we have an updated manual or whats the current
procedure for upgrading the nodes in a hyperconverged ovirt
gluster pool?

Ie Nodes run 4.0 oVirt, as well as GlusterFS, and hosted-engine
running in a gluster storage domain.

Put node in maintenance mode and disable glusterfs from ovirt
gui, run yum update?

Thanks!

___
Users mailing list
Users@ovirt.org <mailto:Users@ovirt.org>
http://lists.ovirt.org/mailman/listinfo/users
<http://lists.ovirt.org/mailman/listinfo/users>


-- 



*Ralf Schenk*
fon +49 (0) 24 05 / 40 83 70
fax +49 (0) 24 05 / 40 83 759
mail *r...@databay.de*

*Databay AG*
Jens-Otto-Krag-Straße 11
D-52146 Würselen
*www.databay.de*

Sitz/Amtsgericht Aachen • HRB:8437 • USt-IdNr.: DE 210844202
Vorstand: Ralf Schenk, Dipl.-Ing. Jens Conze, Aresch Yavari,
Dipl.-Kfm. Philipp Hermanns
Aufsichtsratsvorsitzender: Wilhelm Dohmen











[ovirt-users] Official Hyperconverged Gluster oVirt upgrade procedure?

2017-01-25 Thread Hanson

Hi Guys,

Just wondering if we have an updated manual, or what's the current 
procedure for upgrading the nodes in a hyperconverged oVirt Gluster pool?


I.e., nodes run oVirt 4.0 as well as GlusterFS, with the hosted-engine
running in a Gluster storage domain.


Put the node in maintenance mode, disable GlusterFS from the oVirt GUI,
and run yum update?


Thanks!



Re: [ovirt-users] hosted-engine and GlusterFS on Vlan help

2016-10-04 Thread Hanson
Running iperf3 between node1 & node2, I can achieve almost 10Gbps 
without ever going out to the gateway...


So port-to-port switching on the switch is working properly on 
the VLAN.


This must be a problem in the gluster settings? Where do I start 
troubleshooting here?
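A starting point for the troubleshooting: if node-to-node traffic transits the gateway, the Gluster peers are most likely talking on addresses (or hostnames) that resolve outside the storage subnet. Check which names the peers were probed with (`gluster peer status`, `gluster volume info`) and whether those addresses share the VLAN 11 subnet. The sketch below is a pure-bash same-subnet check using the 10.0.3.0/24 addressing from this thread; 10.0.3.12 and 10.0.2.12 are hypothetical peer addresses for illustration.

```shell
#!/usr/bin/env bash
# Check whether two peer addresses fall in the same subnet.
# If they don't, traffic between them must be routed via the gateway.

ip2int() {                       # dotted quad -> 32-bit integer
  local IFS=. a b c d
  read -r a b c d <<< "$1"
  echo $(( (a << 24) | (b << 16) | (c << 8) | d ))
}

same_subnet() {                  # same_subnet IP1 IP2 NETMASK -> yes/no
  local i1 i2 m
  i1=$(ip2int "$1"); i2=$(ip2int "$2"); m=$(ip2int "$3")
  if [ $(( i1 & m )) -eq $(( i2 & m )) ]; then echo yes; else echo no; fi
}

same_subnet 10.0.3.11 10.0.3.12 255.255.255.0   # yes: switched port-to-port
same_subnet 10.0.3.11 10.0.2.12 255.255.255.0   # no: routed via the gateway
```

If the peers were probed with addresses in a different subnet (e.g. the VLAN 10 management range), re-probing them with their VLAN 11 addresses should keep Gluster traffic on the switch.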



On 10/04/2016 10:38 AM, Hanson wrote:

Hi Guys,

I've converted my lab from 802.3ad bonding with bridged VLANs to a 
single link with two VLAN bridges, and I'm now seeing traffic jump to 
the gateway when moving VMs/ISOs/etc.


802.3ad = node1>switch1>node2
802.1Q = node1>switch1>gateway>switch1>node2

I assume I've set up the VLANs the same way, though this time I used the 
GUI on the initial host install, setting up the VLANs with eth0 as their 
parent.


Hosted-engine deploy then creates ovirtmgmt on top of eth0.11 ...

The switch is tagged for VLANs 10 & 11, including a PVID of 11 for good 
measure. (Gluster is on VLAN 11.)


I'd expect node-to-node traffic to go from port to port as it did with 
802.3ad. What have I done wrong, or is it because I used the GUI 
initially?


This is how the current setup looks:

/var/lib/vdsm/Persistent/netconf/nets/ovirtmgmt:
{
    "ipv6autoconf": false,
    "nameservers": [],
    "nic": "eth0",
    "vlan": 11,
    "ipaddr": "10.0.3.11",
    "switch": "legacy",
    "mtu": 1500,
    "netmask": "255.255.255.0",
    "dhcpv6": false,
    "stp": false,
    "bridged": true,
    "gateway": "10.0.3.1",
    "defaultRoute": true
}

/etc/sysconfig/network-scripts/ifcfg-ovirtmgmt:
# Generated by VDSM version 4.18.13-1.el7.centos
DEVICE=ovirtmgmt
TYPE=Bridge
DELAY=0
STP=off
ONBOOT=yes
IPADDR=10.0.3.11
NETMASK=255.255.255.0
GATEWAY=10.0.3.1
BOOTPROTO=none
DEFROUTE=yes
NM_CONTROLLED=no
IPV6INIT=no
VLAN_ID=11
MTU=1500

Thanks!!





[ovirt-users] hosted-engine and GlusterFS on Vlan help

2016-10-04 Thread Hanson

Hi Guys,

I've converted my lab from 802.3ad bonding with bridged VLANs to a 
single link with two VLAN bridges, and I'm now seeing traffic jump to the 
gateway when moving VMs/ISOs/etc.


802.3ad = node1>switch1>node2
802.1Q = node1>switch1>gateway>switch1>node2

I assume I've set up the VLANs the same way, though this time I used the 
GUI on the initial host install, setting up the VLANs with eth0 as their 
parent.


Hosted-engine deploy then creates ovirtmgmt on top of eth0.11 ...

The switch is tagged for VLANs 10 & 11, including a PVID of 11 for good 
measure. (Gluster is on VLAN 11.)


I'd expect node-to-node traffic to go from port to port as it did with 
802.3ad. What have I done wrong, or is it because I used the GUI 
initially?


This is how the current setup looks:

/var/lib/vdsm/Persistent/netconf/nets/ovirtmgmt:
{
    "ipv6autoconf": false,
    "nameservers": [],
    "nic": "eth0",
    "vlan": 11,
    "ipaddr": "10.0.3.11",
    "switch": "legacy",
    "mtu": 1500,
    "netmask": "255.255.255.0",
    "dhcpv6": false,
    "stp": false,
    "bridged": true,
    "gateway": "10.0.3.1",
    "defaultRoute": true
}

/etc/sysconfig/network-scripts/ifcfg-ovirtmgmt:
# Generated by VDSM version 4.18.13-1.el7.centos
DEVICE=ovirtmgmt
TYPE=Bridge
DELAY=0
STP=off
ONBOOT=yes
IPADDR=10.0.3.11
NETMASK=255.255.255.0
GATEWAY=10.0.3.1
BOOTPROTO=none
DEFROUTE=yes
NM_CONTROLLED=no
IPV6INIT=no
VLAN_ID=11
MTU=1500

Thanks!!



[ovirt-users] oVirt Gluster Hyperconverged problem

2016-09-19 Thread Hanson

Hi Guys,

I encountered an unfortunate circumstance today. Possibly an Achilles' 
heel.


I have three hypervisors, HV1, HV2, HV3, all running Gluster for hosted
engine support. Individually they all pointed to HV1:/hosted_engine with 
backupvol=HV2,HV3...


HV1 lost its boot sector, which was discovered upon a reboot. This had 
zero impact on the VMs, as designed.


However, now that HV1 is down, how does one go about replacing the 
original HV? The backup servers point to HV1; you cannot re-add the 
host through the GUI, the CLI will not re-add it as it's already 
there, and you cannot remove it while it shows as down in the GUI...


Pointing the other HVs to their own storage might make sense for multiple 
instances of the hosted_engine; however, it's nice that the Gluster 
volumes are replicated and that the one VM can be relaunched when an HV 
error is detected. It also consumes fewer resources.



What's the procedure to replace the original HV?
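One recovery path worth considering (a hedged sketch, not something confirmed in this thread): rebuild the failed node under a new hostname and swap its brick in with Gluster's replace-brick, as described in the Gluster administration docs. The hostnames, volume name, and brick paths below are placeholders, and the `run` wrapper only echoes commands rather than executing them.

```shell
run() { echo "+ $*"; }   # dry-run wrapper: echo instead of execute

# On a surviving node, confirm HV1 shows as disconnected
run gluster peer status

# Probe the rebuilt node (new hostname) and swap the dead
# brick for its brick on each affected volume
run gluster peer probe hv1-new
run gluster volume replace-brick engine \
    hv1:/gluster/engine/brick hv1-new:/gluster/engine/brick commit force

# Detach the dead peer once no volume references its bricks
run gluster peer detach hv1
```

The hosted-engine mount point on the other nodes would still need to be repointed away from HV1, which is the part the GUI doesn't currently help with.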





[ovirt-users] BSD Guests + oVirt

2016-09-08 Thread Hanson

Hi Guys,


Are there any optimizations for using FreeBSD 10.3 with oVirt?

The guest OS works fine; however, the engine frequently reports the wrong 
status for the VM, like rebooting, booting, etc.


ovirt-guest-tools doesn't support FreeBSD, from what I can find.
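One thing that may be worth trying (an assumption on my part, not something oVirt documents for FreeBSD): install the QEMU guest agent from FreeBSD packages so the hypervisor at least receives agent reports. Whether the engine's status display consumes them depends on the oVirt version, and the rc variable name below is assumed from the port's defaults. The `run` wrapper only echoes commands.

```shell
run() { echo "+ $*"; }   # dry-run wrapper: echo instead of execute

# Inside the FreeBSD guest
run pkg install -y qemu-guest-agent
run sysrc qemu_guest_agent_enable=YES
run service qemu_guest_agent start
```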


Thanks!



[ovirt-users] How to manually restore backup

2016-08-29 Thread Hanson

Hi Guys,

Just wondering: what's the proper way to restore a backup to a hosted-engine?

I've tried doing the deploy, then cleanup, then backup --mode=restore, 
but then engine-setup needs internet access (which it doesn't have).


Is there a way to restore the backup over the current data of a 
freshly deployed host?


I.e., something like --mode=restore --option=force?

Or is there another way to restore?
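For the no-internet case, engine-setup has an --offline mode that skips the package-update checks, which may be enough here. This is a sketch only: the file names are placeholders, the flags should be checked against your engine-backup version, and the `run` wrapper only echoes commands.

```shell
run() { echo "+ $*"; }   # dry-run wrapper: echo instead of execute

# Restore the backup onto the freshly deployed engine VM
run engine-backup --mode=restore --file=engine.backup --log=restore.log \
    --provision-db --restore-permissions

# Finish setup without reaching out to package repositories
run engine-setup --offline
```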


Thanks,

Hanson



Re: [ovirt-users] HostedEngine NIC setup files

2016-08-19 Thread Hanson

My mistake. It needed NM_CONTROLLED=no added.
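For anyone hitting the same thing, the working interface file ends up looking roughly like this (a sketch; the address is a placeholder, not the one from my setup):

```shell
# /etc/sysconfig/network-scripts/ifcfg-eth0 (inside the engine VM)
DEVICE=eth0
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=none
IPADDR=192.0.2.10
NETMASK=255.255.255.0
NM_CONTROLLED=no   # without this, NetworkManager may not bring eth0 up
```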


On 08/19/2016 12:58 PM, Hanson wrote:

Hi Guys,

I have edited /etc/sysconfig/network-scripts/ifcfg-eth0 & -eth1 for the 
various subnets we needed.


Somewhere along the line, when the hosted-engine boots, eth1 comes up 
but eth0 does not. If I log in via the interface that is up and run 
ifup eth0, it comes up.


It is set to ONBOOT=yes in the config.

I know that on the nodes these files are overwritten on boot. Where 
should I be editing them for the hosted-engine?



Thanks,

Hanson





[ovirt-users] HostedEngine NIC setup files

2016-08-19 Thread Hanson

Hi Guys,

I have edited /etc/sysconfig/network-scripts/ifcfg-eth0 & -eth1 for the 
various subnets we needed.


Somewhere along the line, when the hosted-engine boots, eth1 comes up 
but eth0 does not. If I log in via the interface that is up and run 
ifup eth0, it comes up.


It is set to ONBOOT=yes in the config.

I know that on the nodes these files are overwritten on boot. Where 
should I be editing them for the hosted-engine?



Thanks,

Hanson



[ovirt-users] Upgrade hosts/nodes from engine

2016-08-16 Thread Hanson

Hi Guys,

Quick question: I have my nodes on a bond > bridge > private-VLAN setup, 
and my engine on a bond > bridge > public-VLAN setup for remote monitoring.


Understandably, the nodes are complaining that updates are failing. 
(They're on a private VLAN and only have IPs configured in that VLAN; 
the public VLAN doesn't have IPs set on the hosts, so they can pass it 
through to VMs.)


Is there a way to have the engine perform the updates on the nodes using 
its internet connection, like a proxy?
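One way to do this (my assumption, not an oVirt feature per se) is to run a caching proxy such as squid on a host the nodes can reach over the private VLAN, and point yum at it. The hostname and port below are placeholders:

```shell
# /etc/yum.conf on each node -- route package traffic through a proxy
# reachable on the private VLAN
proxy=http://engine.private.example:3128

# or per-repo, in /etc/yum.repos.d/*.repo:
# proxy=http://engine.private.example:3128
```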


For security reasons I prefer to keep the nodes off the public network, 
as we see hundreds if not thousands of SSH attempts, and root would 
probably be the most attacked account.


Thanks,

Hanson



Re: [ovirt-users] oVirt + Gluster Hyperconverged

2016-07-18 Thread Hanson Turner

Hi Fernando,

Not anything spectacular that I have seen, but I'm using a minimum of 
16GB on each node.


You'll probably want to set up your hosted-engine with 2 CPUs and 4096MB 
of RAM. I believe those are the minimum requirements.


Thanks,

Hanson


On 07/15/2016 09:48 AM, Fernando Frediani wrote:

Hi folks,

I have a few servers with a reasonable amount of raw storage, but there 
are only 3 of them, with 8GB of memory each.
I wanted to run them as oVirt hyperconverged + Gluster, mainly to 
take advantage of the storage spread across them and to have the 
ability to live-migrate VMs.


Question is: does running Gluster on the same hypervisor nodes consume 
significant memory, leaving not much left for running VMs?


Thanks
Fernando


--
-
- Network Engineer  -
-
-  Andrews Wireless -
- 671 Durham road 21-
-Uxbridge ON, L9P 1R4   -
-P: 905.852.8896-
-F: 905.852.7587-
- Toll free  (877)852.8896  -
-
