see hundreds, if not thousands, of SSH attempts, and root would probably be the most attacked account.
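If it helps, a minimal sketch of the usual sshd hardening (standard OpenSSH directives in /etc/ssh/sshd_config; adjust to taste):

    # /etc/ssh/sshd_config -- refuse direct root logins and password auth
    PermitRootLogin no
    PasswordAuthentication no

    # reload sshd to apply
    systemctl reload sshd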
Thanks,
Hanson
s in the config.
I know that with the nodes these files are overwritten on boot. Where should I be editing for the hosted-engine?
Thanks,
Hanson
My mistake. I needed NM_CONTROLLED=no added.
On 08/19/2016 12:58 PM, Hanson wrote:
Hi Guys,
I have edited /etc/sysconfig/network-scripts/ifcfg-eth0 and ifcfg-eth1 for the various subnets we needed.
Somewhere along the line, when the hosted-engine boots, eth1 comes up
but eth0 does not. If I login u
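For reference, a minimal sketch of the stanza that ended up working (addresses illustrative; the operative line is NM_CONTROLLED=no):

    # /etc/sysconfig/network-scripts/ifcfg-eth0 (values illustrative)
    DEVICE=eth0
    TYPE=Ethernet
    BOOTPROTO=static
    IPADDR=192.0.2.10
    PREFIX=24
    ONBOOT=yes
    NM_CONTROLLED=no   # keep NetworkManager from rewriting this interface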
data of a
currently deployed new host?
i.e. like a --mode=restore --option=force?
or is there another way to restore?
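For what it's worth, the stock engine-backup tool does have a restore mode; a sketch of the usual invocation (file names hypothetical):

    # on the engine VM: restore a previously taken backup
    engine-backup --mode=restore \
                  --file=/root/engine-backup.tar.gz \
                  --log=/root/engine-restore.log \
                  --provision-db --restore-permissions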
Thanks,
Hanson
Hi Guys,
Are there any optimizations for using FreeBSD 10.3 with oVirt?
The guest OS works fine; however, the engine frequently lists the wrong status for the VM, like rebooting, booting, etc.
ovirt-guest-tools doesn't support FreeBSD from what I can find.
Thanks!
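One avenue that may help (an assumption on my part, not a confirmed fix): the engine polls the guest agent for VM state, and FreeBSD ships a qemu-guest-agent package; service/rc names sketched from the port:

    # on the FreeBSD guest
    pkg install qemu-guest-agent
    sysrc qemu_guest_agent_enable="YES"
    service qemu-guest-agent start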
Hi Guys,
I encountered an unfortunate circumstance today. Possibly an Achilles' heel.
I have three hypervisors, HV1, HV2, HV3, all running gluster for hosted
engine support. Individually they all pointed to HV1:/hosted_engine with
backupvol=HV2,HV3...
HV1 lost its bootsector, which was dis
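For anyone hitting the same single-point-of-failure: the usual mitigation is the backup-volfile-servers option on the gluster mount, so any of the three hosts can serve the volfile (a sketch using the host names above):

    # gluster fuse mount with fallback volfile servers (the same string
    # goes in the hosted-engine storage mnt_options)
    mount -t glusterfs -o backup-volfile-servers=HV2:HV3 \
          HV1:/hosted_engine /mnt/hosted_engine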
Hi Guys,
I've converted my lab from using 802.3ad with bonding>bridged VLANs to one link with two VLAN bridges, and am now having traffic jumping to the gateway when moving VMs/ISOs/etc.
802.3ad = node1>switch1>node2
802.1q = node1>switch1>gateway>switch1>node2
I assume I've setup the same vl
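A minimal sketch of the tagged-bridge layout in question (device names and VLAN ID hypothetical); if the bridge IPs ended up in different subnets per node, inter-node traffic will route via the gateway instead of switching locally:

    # /etc/sysconfig/network-scripts/ifcfg-eth0.10  (VLAN 10 tagged on eth0)
    DEVICE=eth0.10
    VLAN=yes
    BRIDGE=br10
    ONBOOT=yes

    # /etc/sysconfig/network-scripts/ifcfg-br10
    DEVICE=br10
    TYPE=Bridge
    BOOTPROTO=static
    IPADDR=192.0.2.11   # same subnet on every node, or traffic gets routed
    PREFIX=24
    ONBOOT=yes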
Hi Guys,
Just wondering if we have an updated manual, or what's the current procedure for upgrading the nodes in a hyperconverged oVirt gluster pool?
I.e. nodes run oVirt 4.0, as well as GlusterFS, with the hosted-engine running in a gluster storage domain.
Put node in maintenance mode and disable g
Hi Fernando,
Not anything spectacular that I have seen, but I'm using 16 GB minimum on each node.
You'll probably want to set up your hosted-engine as 2 CPUs, 4096 MB RAM. I believe those are the minimum requirements.
Thanks,
Hanson
On 07/15/2016 09:48 AM, Fernando Frediani wrote:
Hi folks,
I have a f
ge gluster from GUI then.
1. Set global maintenance mode,
2. upgrade the engine on the engine VM as for a regular engine,
3. exit global maintenance mode,
4. upgrade the hosts (one at a time!) from the engine.
That should be enough; a sketch of the commands follows below.
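A sketch of those steps as commands (stock hosted-engine/engine-setup tooling; repo and version specifics omitted):

    # 1. on any host: enter global maintenance
    hosted-engine --set-maintenance --mode=global

    # 2. on the engine VM: update the setup packages and rerun setup
    yum update "ovirt-engine-setup*"
    engine-setup

    # 3. back on a host: leave global maintenance
    hosted-engine --set-maintenance --mode=none

    # 4. then upgrade each host, one at a time, from the engine UI
    #    after moving it to maintenance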
Bye
On 25.01.2017 at 21:32, Hanson wrote:
Hi Guys,
Just wondering if we have an updated manual or what's the current
procedure for upgrading t
and run the deploy again?
Thanks,
Hanson
GUI/dashboard. The host goes unresponsive and then another host power-cycles it.
Thanks,
Hanson
On 03/21/2018 06:12 AM, Sahina Bose wrote:
On Tue, Mar 20, 2018 at 9:41 PM, Hanson Turner
<han...@andrewswireless.net> wrote:
Hi Guys,
I've a 3 machine pool runn
n the downed node comes back.
I.e. the only one to lose pings is the Hosted-Engine. Unless of course there were VMs on the same node, in which case, if they were HA VMs, they will be restarted/resumed depending on your settings.
Thanks,
Hanson
On 08/17/2016 05:06 AM, Carlos Rodrigues wrot
Hi Guys,
We went to physically move a node and have found that the node will not boot successfully anymore.
The error coming up is: failed to open / file not found: \EFI\BOOT\grubx64.efi
This file is found at \EFI\centos\grubx64.efi.
I have copied it to \EFI\BOOT\ and got the machine to boot, ho
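Copying the file works as a band-aid; the more durable fix is usually to re-register the firmware boot entry so it points at the CentOS loader (a sketch; the disk and partition numbers are assumptions for your hardware):

    # recreate the UEFI boot entry for the CentOS grub
    efibootmgr -c -d /dev/sda -p 1 \
               -L "CentOS" -l '\EFI\centos\grubx64.efi'
    # verify entries and boot order
    efibootmgr -v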
Hi Guys,
I've got 60-some-odd files for each of the nodes in the cluster; they don't seem to be syncing.
Running a volume heal engine full reports successful. Running volume heal engine info reports the same files, and they don't seem to be syncing.
Running a volume heal engine info split-bra
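For reference, the stock gluster CLI commands being described (the last one truncated above):

    gluster volume heal engine full                # kick off a full heal
    gluster volume heal engine info                # list entries pending heal
    gluster volume heal engine info split-brain    # list split-brain entries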
_feature_version=1
timestamp=115910 (Mon Jun 18 09:43:20 2018)
host-id=1
score=3400
vm_conf_refresh_time=115910 (Mon Jun 18 09:43:20 2018)
conf_on_shared_storage=True
maintenance=False
state=GlobalMaintenance
stopped=False
---clipped---
On 06/16/2018 02:23 PM,
Hi Guys,
My engine has corrupted, and while waiting for help, I'd like to see if I can pull some data off the VMs to repurpose back onto dedicated hardware.
Our setup is/was a gluster-based storage system for VMs. The gluster data storage I'm assuming is okay; I think the hosted engine is
There you can modify the configs, make backups etc.
Thanks,
Hanson
On 06/19/2018 09:31 AM, Hanson Turner wrote:
Hi Sahina,
Thanks for your reply, I can copy the files off without issue, using either a remote gluster mount, or just using the node and scp'ing the files to where I want them.
I
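If the end goal is putting those disks back on bare metal, a sketch of the usual route (paths hypothetical; each disk lives under the storage domain's images/<uuid>/ tree on the gluster volume):

    # mount the data volume read-only, then convert a disk image out
    mount -t glusterfs -o ro node1:/data /mnt/data
    qemu-img convert -p -O raw \
        /mnt/data/<domain-uuid>/images/<disk-uuid>/<volume-uuid> \
        /backup/vm-disk.raw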
t be the same cause) as
listed here:
https://bugzilla.redhat.com/show_bug.cgi?id=1569827
Thanks,
Hanson
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: ovirtnode1:/gluster_bricks/engine/engine
Brick2: ovirtnode3:/gluster_bricks/engine/engine
Brick3: ovirtnode4:/gluster_bricks/engine/engine
Thanks,
Hanson
On 07/02/2018 07:09 AM, Ravishankar N wrote:
On 07/02/2018