[ovirt-users] Re: HE + Gluster : Engine corrupted?

2018-07-03 Thread Hanson Turner
k to you in some time.

-Krutika

On Fri, Jun 29, 2018 at 7:47 PM, Hanson Turner
<han...@andrewswireless.net> wrote:

Hi Krutika,

Did you need any other logs?


Thanks,

    Hanson


On 06/27/2018 02:04 PM, Hanson Turner wrote:


Hi Krutika,

Looking at the email spam, it looks like it started at
8:04 PM EDT on Jun 15 2018.

From my memory, I think the cluster was working fine until
sometime that night. Somewhere between midnight and the next
(Saturday) morning, the engine crashed and all VMs stopped.

I do have nightly backups that ran every night, using the
engine-backup command. It looks like my last valid backup was
from 2018-06-15.
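For reference, a nightly job along these lines is what runs here (the path, schedule, and retention are illustrative, not the exact script):

```shell
#!/bin/sh
# Illustrative /etc/cron.daily job wrapping the engine-backup command
# mentioned above. Paths and retention count are assumptions.
BACKUP_DIR=/var/backups/engine
DATE=$(date +%F)
mkdir -p "$BACKUP_DIR"
engine-backup --mode=backup \
  --file="$BACKUP_DIR/engine-$DATE.tar.bz2" \
  --log="$BACKUP_DIR/engine-$DATE.log"
# keep only the newest 14 backups
ls -1t "$BACKUP_DIR"/engine-*.tar.bz2 | tail -n +15 | xargs -r rm -f
```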

I've included all the logs I think might be of use. Please
forgive the use of 7zip; the raw logs came to 50 MB, which is
greater than my attachment limit.

I think the gist of what happened is that we had a downed node
for a period of time. Earlier that day, the node was brought
back into service. Later that night or early the next
morning, the engine was gone and hopping from node to node.

I have tried to mount the engine's HDD file to see if I
could fix it. There are a few corrupted partitions, and
those are XFS formatted. Trying to mount gives me errors
about the filesystem needing repair; trying to repair
complains about the log needing to be cleared first. I
cannot remember exactly what it was, but it wanted me to
run a command ending in -L to zero out the log. I said no
way, and have left the engine VM in a powered-down state,
and the cluster in global maintenance.
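The sequence described above would look roughly like this (device names are hypothetical; the image would first be attached via losetup/kpartx as described later in this thread):

```shell
# Hypothetical partition mapping for the engine image.
mount /dev/mapper/loop4p2 /mnt/engine     # fails, reporting the fs needs repair
xfs_repair -n /dev/mapper/loop4p2         # dry run: report problems, change nothing
# xfs_repair refuses to run while the XFS log is dirty; replaying the log by
# mounting on a healthy kernel is the safe fix. The command it suggests,
# which zeroes the log and discards the newest metadata updates, is:
# xfs_repair -L /dev/mapper/loop4p2       # destructive last resort
```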

I can see no sign of the VM booting (i.e. no networking),
except for what I've described earlier in the VNC session.


Thanks,

Hanson



On 06/27/2018 12:04 PM, Krutika Dhananjay wrote:

Yeah, complete logs would help. Also let me know when you
saw this issue - date and approximate time (do specify the
timezone as well).

    -Krutika

On Wed, Jun 27, 2018 at 7:00 PM, Hanson Turner
mailto:han...@andrewswireless.net>> wrote:

# more rhev-data-center-mnt-glusterSD-ovirtnode1.abcxyzdomains.net\:_engine.log
[2018-06-24 07:39:12.161323] I
[glusterfsd-mgmt.c:1888:mgmt_getspec_cbk] 0-glusterfs:
No change in volfile,continuing

# more gluster_bricks-engine-engine.log
[2018-06-24 07:39:14.194222] I
[glusterfsd-mgmt.c:1888:mgmt_getspec_cbk] 0-glusterfs:
No change in volfile,continuing
[2018-06-24 19:58:28.608469] E [MSGID: 101063]
[event-epoll.c:551:event_dispatch_epoll_handler]
0-epoll: stale fd found on idx=12, gen=1, events=1,
slot->gen=3
[2018-06-25 14:24:19.716822] I
[addr.c:55:compare_addr_and_update]
0-/gluster_bricks/engine/engine: allowed = "*",
received addr = "192.168.0.57"
[2018-06-25 14:24:19.716868] I [MSGID: 115029]
[server-handshake.c:793:server_setvolume]
0-engine-server: accepted client from

CTX_ID:79b9d5b7-0bbb-4d67-87cf-11e27dfb6c1d-GRAPH_ID:0-PID:9901-HOST:sp3Kali-PC_NAME:engine-client-0-RECON_NO:-0
(version: 4.0.2)
[2018-06-25 14:45:35.061350] I [MSGID: 115036]
[server.c:527:server_rpc_notify] 0-engine-server:
disconnecting connection from

CTX_ID:79b9d5b7-0bbb-4d67-87cf-11e27dfb6c1d-GRAPH_ID:0-PID:9901-HOST:sp3Kali-PC_NAME:engine-client-0-RECON_NO:-0
[2018-06-25 14:45:35.061415] I [MSGID: 115013]
[server-helpers.c:289:do_fd_cleanup] 0-engine-server:
fd cleanup on

/c65e03f0-d553-4d5d-ba4f-9d378c153b9b/images/82cde976-0650-4db9-9487-e2b52ffe25ee/e53806d9-3de5-4b26-aadc-157d745a9e0a
[2018-06-25 14:45:35.062290] I [MSGID: 101055]
[client_t.c:443:gf_client_unref] 0-engine-server:
Shutting down connection

CTX_ID:79b9d5b7-0bbb-4d67-87cf-11e27dfb6c1d-GRAPH_ID:0-PID:9901-HOST:sp3Kali-PC_NAME:engine-client-0-RECON_NO:-0
[2018-06-25 14:46:34.284195] I [MSGID: 115036]
[server.c:527:server_rpc_notify] 0-engine-server:
disconnecting connection from

CTX_ID:13e88614-31e8-4618-9f7f-067750f5971e-GRAPH_ID:0-PID:2615-HOST:workbench-PC_NAME:engine-client-0-RECON_NO:-0
[2018-06-25 14:46:34.284546] I [MSGID: 101055]
[client_t.c:443:gf_client_unref] 0-engine-server:
Shutting down connection

CTX_ID:13e88614-31e8-4618-9f7f-067750f5971e-GRAPH_ID:0-PID:2615-HOST:wor

[ovirt-users] HE + Gluster : Engine corrupted?

2018-06-20 Thread Hanson Turner

Hi Benny,

Who should I be reaching out to for help with a gluster based hosted 
engine corruption?



--== Host 1 status ==--

conf_on_shared_storage : True
Status up-to-date  : True
Hostname   : ovirtnode1.abcxyzdomains.net
Host ID    : 1
Engine status  : {"reason": "failed liveliness check", "health": "bad", "vm": "up", "detail": "Up"}

Score  : 3400
stopped    : False
Local maintenance  : False
crc32  : 92254a68
local_conf_timestamp   : 115910
Host timestamp : 115910
Extra metadata (valid at timestamp):
    metadata_parse_version=1
    metadata_feature_version=1
    timestamp=115910 (Mon Jun 18 09:43:20 2018)
    host-id=1
    score=3400
    vm_conf_refresh_time=115910 (Mon Jun 18 09:43:20 2018)
    conf_on_shared_storage=True
    maintenance=False
    state=GlobalMaintenance
    stopped=False


When I VNC into my HE, all I get is:
Probing EDD (edd=off to disable)... ok


So, that's why it's failing the liveliness check... I cannot get the 
screen on the HE to change, short of Ctrl-Alt-Del, which will reboot the HE.

I do have backups for the HE that are/were run on a nightly basis.

If the cluster was left alone, the HE VM would bounce from machine to 
machine trying to boot. This is why the cluster is in maintenance mode.
One of the nodes was down for a period of time and was brought back; 
sometime through the night, around when the automated backup kicks off, 
the HE started bouncing around. I got nearly 1000 emails.


This seems to be the same error (but may not be the same cause) as 
listed here:

https://bugzilla.redhat.com/show_bug.cgi?id=1569827

Thanks,

Hanson

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/3NLA2URX3KN44FGFUVV4N5EJBPICABHH/


[ovirt-users] Re: Ovirt + Gluster : How do I gain access to the file systems of the VMs

2018-06-19 Thread Hanson Turner

Hi Guys,

I've an answer... Here's how I did it...

First, I needed kpartx ... so

#apt-get install kpartx

Then setup a loopback device for the raw hdd image

#losetup /dev/loop4 [IMAGE FILE]

#kpartx -a /dev/loop4

This allowed me to mount the various partitions contained in the VM 
image. From there you can modify configs, make backups, etc.
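For completeness, the full round trip looks roughly like this (the image path and partition numbers are illustrative; adjust to your layout):

```shell
# Attach a raw VM disk image and mount one of its partitions.
losetup /dev/loop4 /path/to/image.raw   # illustrative image path
kpartx -a /dev/loop4                    # creates /dev/mapper/loop4p1, loop4p2, ...
mkdir -p /mnt/vm
mount /dev/mapper/loop4p1 /mnt/vm       # mount a partition of the guest disk
# ... edit configs, copy files out ...
umount /mnt/vm
kpartx -d /dev/loop4                    # remove the partition mappings
losetup -d /dev/loop4                   # detach the loop device
```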


Thanks,

Hanson


On 06/19/2018 09:31 AM, Hanson Turner wrote:


Hi Sahina,

Thanks for your reply. I can copy the files off without issue, using 
either a remote gluster mount, or just using the node and scp'ing the 
files to where I want them.


I was asking how to mount the VM's disk so that I can pull/modify 
files that are on the VM's HDD.


Thanks,

Hanson


On 06/19/2018 05:02 AM, Sahina Bose wrote:



On Mon, Jun 18, 2018 at 5:12 PM, Hanson Turner 
<han...@andrewswireless.net> wrote:


Hi Guys,

My engine has corrupted, and while waiting for help, I'd like to
see if I can pull some data off the VMs to repurpose back onto
dedicated hardware.

Our setup is/was a gluster-based storage system for VMs. I'm
assuming the gluster data storage is okay; I think the hosted
engine is hosed and needs to be restored, but that's another thread.

I can copy the raw disk file off of the gluster data domain.
What's the best way to mount it short of importing it into
another gluster domain?

With vmware, we can grab the disk file and move it from server to
server without issue. You can mount and explore contents with
workstation.


If you want to copy the image file, you can mount the gluster volume 
and copy it:

mount -t glusterfs <server>:/<volume> <mountpoint>

What do we have available to us for ovirt?

Thanks,

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct:
https://www.ovirt.org/community/about/community-guidelines/
List Archives:
https://lists.ovirt.org/archives/list/users@ovirt.org/message/2AF5K2JERYH63K25XKA4FFP4QQDZSVWM/






___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/5ZBEM7WJ5SGRIQLC53GKZSFXYMEOXLRW/


[ovirt-users] Ovirt + Gluster : How do I gain access to the file systems of the VMs

2018-06-18 Thread Hanson Turner

Hi Guys,

My engine has corrupted, and while waiting for help, I'd like to see if 
I can pull some data off the VMs to repurpose back onto dedicated 
hardware.


Our setup is/was a gluster-based storage system for VMs. I'm assuming 
the gluster data storage is okay; I think the hosted engine is hosed 
and needs to be restored, but that's another thread.


I can copy the raw disk file off of the gluster data domain. What's the 
best way to mount it short of importing it into another gluster domain?


With VMware, we can grab the disk file and move it from server to server 
without issue. You can mount and explore its contents with Workstation.


What do we have available to us for ovirt?

Thanks,

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/2AF5K2JERYH63K25XKA4FFP4QQDZSVWM/


[ovirt-users] Re: Gluster not syncing changes between nodes for engine

2018-06-18 Thread Hanson Turner

Ok,

So removing the one downed node cleared all the non-syncing issues.

In the meantime, while that one node was coming back, it seems to have 
corrupted the hosted-engine VM.


Remote-Viewer nodeip:5900, the console shows:

Probing EDD (edd=off to disable)... ok


It doesn't matter which of the three remaining nodes tries to launch 
the engine; the engine comes up the same.


I had to set the cluster to global maintenance, as the engine would keep 
trying to start on different nodes.


I do have backups run nightly, so I can restore the engine VM; however, 
I don't see a straightforward method of restoring the engine VM in a 
hosted-engine gluster setup.
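From the docs, I'd expect the restore to look roughly like this (untested here; the backup path is illustrative, and the --restore-from-file option only appeared with hosted-engine deploy in oVirt 4.2):

```shell
# On a clean host, with the corrupted engine VM left powered off:
hosted-engine --deploy --restore-from-file=/path/to/engine-backup.tar.bz2
```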



Can any of the Red Hat folks help?


Here's the hosted-engine --vm-status

--== Host 1 status ==--

conf_on_shared_storage : True
Status up-to-date  : True
Hostname   : ovirtnode1.abcxyzdomains.net
Host ID    : 1
Engine status  : {"reason": "failed liveliness check", "health": "bad", "vm": "up", "detail": "Up"}

Score  : 3400
stopped    : False
Local maintenance  : False
crc32  : 92254a68
local_conf_timestamp   : 115910
Host timestamp : 115910
Extra metadata (valid at timestamp):
    metadata_parse_version=1
    metadata_feature_version=1
    timestamp=115910 (Mon Jun 18 09:43:20 2018)
    host-id=1
    score=3400
    vm_conf_refresh_time=115910 (Mon Jun 18 09:43:20 2018)
    conf_on_shared_storage=True
    maintenance=False
    state=GlobalMaintenance
    stopped=False

---clipped---




On 06/16/2018 02:23 PM, Hanson Turner wrote:

Hi Guys,

I've got 60-some-odd files for each of the nodes in the cluster, and 
they don't seem to be syncing.


Running 'gluster volume heal engine full' reports success. Running 
'gluster volume heal engine info' reports the same files, which don't 
seem to be syncing.


Running a volume heal engine info split-brain, there's nothing listed 
in split-brain.


Peers show as connected. Gluster volumes are started/up.

Hosted-engine --vm-status reports :
The hosted engine configuration has not been retrieved from shared 
storage. Please ensure that ovirt-ha-agent is running and the storage 
server is reachable.


This is leaving the cluster in an engine-down, all-VMs-down state...

Thanks,
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/YPNWM222K2U7NX32CIME7KINWPCLBSCR/

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/DYWVSXDP3BZGV5XKBZS3RTYN4H6OZVRR/


[ovirt-users] Gluster not syncing changes between nodes for engine

2018-06-16 Thread Hanson Turner

Hi Guys,

I've got 60-some-odd files for each of the nodes in the cluster, and 
they don't seem to be syncing.


Running 'gluster volume heal engine full' reports success. Running 
'gluster volume heal engine info' reports the same files, which don't 
seem to be syncing.


Running a volume heal engine info split-brain, there's nothing listed in 
split-brain.


Peers show as connected. Gluster volumes are started/up.
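Spelled out, the checks above were (volume name "engine", as on this cluster):

```shell
gluster peer status                          # every peer should show State: Connected
gluster volume status engine                 # are all bricks online?
gluster volume heal engine full              # trigger a full heal
gluster volume heal engine info              # entries still pending heal
gluster volume heal engine info split-brain  # should list no entries
```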

Hosted-engine --vm-status reports :
The hosted engine configuration has not been retrieved from shared 
storage. Please ensure that ovirt-ha-agent is running and the storage 
server is reachable.


This is leaving the cluster in an engine-down, all-VMs-down state...

Thanks,
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/YPNWM222K2U7NX32CIME7KINWPCLBSCR/


[ovirt-users] Grubx64.efi missing from boot partition

2018-06-12 Thread Hanson Turner

Hi Guys,

We went to physically move a node, and have found that the node will 
no longer boot successfully.


The error coming up is: failed to open / file not found:

\EFI\BOOT\grubx64.efi


This file is found in \EFI\centos\grubx64.efi.

I have copied it to \EFI\BOOT\ and got the machine to boot; however, it 
has no working networking.
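For reference, the workaround amounted to this (device names are illustrative; the ESP is normally mounted at /boot/efi on CentOS):

```shell
# Copy the CentOS loader to the fallback path the firmware is looking for:
mkdir -p /boot/efi/EFI/BOOT
cp /boot/efi/EFI/centos/grubx64.efi /boot/efi/EFI/BOOT/grubx64.efi
# A cleaner fix may be re-registering the CentOS boot entry with the
# firmware (assuming the ESP is partition 1 on /dev/sda):
efibootmgr -c -d /dev/sda -p 1 -L CentOS -l '\EFI\centos\grubx64.efi'
```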


Modifying /etc/sysconfig/network-scripts/ifcfg-xyzabz123 appropriately 
restored networking; however, the node cannot be resumed/activated.


This is probably because the blade reports the following interfaces:

eno1, eno2, eno3, eno4

When originally deployed, they were eno1, eno2, eth0, eth1.

The engine still sees the host as having the eno+eth combo.

Any ideas guys?

Thanks!


___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/R7QVPQDVBKSVIHA5KZF4AJ63R5BMGJEV/


Re: [ovirt-users] HostedEngine with HA

2018-03-22 Thread Hanson Turner

Hi Carlos,

If you're using shared storage across the nodes/hypervisors, the 
HostedEngine VM should already be HA.


Sometimes the engine drops briefly while it is restarted on another 
node. Usually when this happens the rest of the nodes keep running 
their VMs, and things resync when the downed node comes back.


I.e., the only one to lose pings is the Hosted Engine. Unless of course 
there were VMs on the same node, in which case, if they were HA VMs, 
they will be restarted/resumed depending on your settings.



Thanks,

Hanson


On 08/17/2016 05:06 AM, Carlos Rodrigues wrote:

Can anyone help me build HA for the HostedEngine VM?

How can I guarantee that if the host with the HostedEngine VM goes down,
the HostedEngine VM moves to another host?

Regards,
Carlos Rodrigues

On Tue, 2016-08-16 at 11:53 +0100, Carlos Rodrigues wrote:

On Sun, 2016-08-14 at 14:22 +0300, Roy Golan wrote:



On 12 August 2016 at 20:23, Carlos Rodrigues 
wrote:

Hello,

I have one cluster with two hosts with power management correctly
configured and one virtual machine with HostedEngine over shared
storage with FiberChannel.

When I shut down the network of the host with the HostedEngine VM,
should it be possible for the HostedEngine VM to migrate automatically
to another host?


Migrate on which network?

What is the expected behaviour in this HA scenario?

After a few minutes your VM will be shut down by the High Availability
agent, as it can't see the network, and started on another host.

I'm testing this scenario: after shutting down the network, it would be
expected that the agent shuts the HE down and starts it on another
host. But after a couple of minutes nothing happens, and on the host
with network we are getting the following messages:

Aug 16 11:44:08 ied-blade11.install.eurotux.local ovirt-ha-
agent[2779]:
ovirt-ha-agent
ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config ERROR
Unable to get vm.conf from OVF_STORE, falling back to initial vm.conf

I think the HA agent is trying to get the VM configuration, but somehow
it can't get vm.conf to start the VM.

Regards,
Carlos Rodrigues




Regards,

--
Carlos Rodrigues

Engenheiro de Software Sénior

Eurotux Informática, S.A. | www.eurotux.com
(t) +351 253 680 300 (m) +351 911 926 110

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Adding host to hosted-engine /w gluster cluster. (On ovirt Node 4.2.1.1)

2018-03-21 Thread Hanson Turner

Hi Sahina,

On the fourth node, I've found 
/var/log/glusterfs/rhev-data-center-mnt-glusterSD-ovirtnode1.core\:_engine.log 
... is this the engine.log you're referring to, or do you want one from 
the hosted engine?


I actually do want to go replica 5. Most VMs it runs are small (1 
core, 1 GB RAM, 8 GB HDD) and HA is needed. I'd like a bigger critical 
margin than a single node failing.


As far as the repos, it's a straight oVirt Node ISO install (I think 
Node 4.2.0), which was yum updated to 4.2.1.1.
When I installed 4.0, I'd installed on top of CentOS. This round I went 
straight with the Node OS for simplicity in updating.


I can manually restart gluster from the CLI; peer and volume status 
show no peers or volumes.


One thing of note: the networking is still as set up from the node 
install. I cannot change the networking info from the oVirt 
GUI/dashboard; the host goes unresponsive and then another host 
power-cycles it.


Thanks,
Hanson

On 03/21/2018 06:12 AM, Sahina Bose wrote:



On Tue, Mar 20, 2018 at 9:41 PM, Hanson Turner 
<han...@andrewswireless.net> wrote:


Hi Guys,

I've a 3 machine pool running gluster with replica 3 and want to
add two more machines.

This would change to a replica 5...


Adding 2 more nodes to the cluster will not change it to replica 5; 
replica 3 is a configuration on the gluster volume. I assume you don't 
need replica 5, but just want to add more nodes (and possibly new 
gluster volumes) to the cluster?



In ovirt 4.0, I'd done everything manually. No problem there.

In ovirt 4.2, I'd used the wizard for the hosted-engine. It looks
like the fourth node has been added to the pool but will not go
active. It complains gluster isn't running (which I've not
manually configured /dev/sdb for gluster). Host install+deploy
fails. Host can go into maintenance w/o issue. (Meaning the host
has been added to the cluster, but isn't operational)


Are the repos configured correctly on the new nodes? Does the oVirt 
cluster where the nodes are being added have "Enable Gluster Service" 
enabled?



What do I need to do to get the node up and running properly, with
gluster syncing properly? Manually restarting gluster tells me
there are no peers and no volumes.

Do we have a wizard for this too? Or do I need to go find the
setup scripts and configure hosts 4 + 5 manually and run the
deploy again?


The host addition flow should take care of installing gluster.
Can you share the engine log from when the host was added to when it's 
reported non-operational?




Thanks,

Hanson

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users





___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Adding host to hosted-engine /w gluster cluster. (On ovirt Node 4.2.1.1)

2018-03-20 Thread Hanson Turner

Hi Guys,

I've a 3 machine pool running gluster with replica 3 and want to add two 
more machines.


This would change to a replica 5...

In ovirt 4.0, I'd done everything manually. No problem there.

In ovirt 4.2, I'd used the wizard for the hosted-engine. It looks like 
the fourth node has been added to the pool but will not go active. It 
complains gluster isn't running (which I've not manually configured 
/dev/sdb for gluster). Host install+deploy fails. Host can go into 
maintenance w/o issue. (Meaning the host has been added to the cluster, 
but isn't operational)


What do I need to do to get the node up and running properly, with 
gluster syncing properly? Manually restarting gluster tells me there 
are no peers and no volumes.


Do we have a wizard for this too? Or do I need to go find the setup 
scripts, configure hosts 4 and 5 manually, and run the deploy again?



Thanks,

Hanson

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Official Hyperconverged Gluster oVirt upgrade procedure?

2018-01-23 Thread Hanson Turner

Hi Guys,

What's the official upgrade procedure now? Has anything changed?

What's the upgrade path from 4.0.4 to current stable? 
engine-upgrade-check returns no upgrades needed, and that's probably 
correct, as it looks like the repos are from 4.0.


Trying yum update on the node/hosts fails on gluster, and I'm leery of 
updating the repo to pull the newer gluster packages because of 
potential issues with the other two hosts being on older gluster versions.


Thanks!


On 01/26/2017 08:26 AM, Simone Tiraboschi wrote:



On Thu, Jan 26, 2017 at 10:16 AM, Ralf Schenk wrote:


Hello,

I would appreciate any hint, too. I've been on 4.0.6 on CentOS 7.3
since yesterday, but I'm worried about what I need to do to upgrade
and be able to manage gluster from the GUI afterwards.


1. Set global maintenance mode,
2. upgrade the engine on the engine VM as for a regular engine,
3. exit global maintenance mode,
4. upgrade the hosts (one at a time!) from the engine

should be enough.
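In command form, the maintenance steps around the engine upgrade would look roughly like this (the yum commands are the usual engine-upgrade steps, shown here as comments since they run inside the engine VM):

```shell
hosted-engine --set-maintenance --mode=global   # step 1, on an HE host
# step 2, inside the engine VM:
#   yum update ovirt\*setup\*
#   engine-setup
#   yum update        # remaining OS packages on the engine VM
hosted-engine --set-maintenance --mode=none     # step 3, back on an HE host
# step 4: upgrade each host from the engine UI, one at a time
```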

Bye


Am 25.01.2017 um 21:32 schrieb Hanson:

Hi Guys,

Just wondering if we have an updated manual or whats the current
procedure for upgrading the nodes in a hyperconverged ovirt
gluster pool?

Ie Nodes run 4.0 oVirt, as well as GlusterFS, and hosted-engine
running in a gluster storage domain.

Put node in maintenance mode and disable glusterfs from ovirt
gui, run yum update?

Thanks!

___
Users mailing list
Users@ovirt.org 
http://lists.ovirt.org/mailman/listinfo/users



-- 



*Ralf Schenk*
fon +49 (0) 24 05 / 40 83 70 
fax +49 (0) 24 05 / 40 83 759 
mail *r...@databay.de* 

*Databay AG*
Jens-Otto-Krag-Straße 11
D-52146 Würselen
*www.databay.de* 

Sitz/Amtsgericht Aachen • HRB:8437 • USt-IdNr.: DE 210844202
Vorstand: Ralf Schenk, Dipl.-Ing. Jens Conze, Aresch Yavari,
Dipl.-Kfm. Philipp Hermanns
Aufsichtsratsvorsitzender: Wilhelm Dohmen



___
Users mailing list
Users@ovirt.org 
http://lists.ovirt.org/mailman/listinfo/users





___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Official Hyperconverged Gluster oVirt upgrade procedure?

2018-01-17 Thread Hanson Turner

Hi Guys,

What's the official upgrade procedure now? Has anything changed?

What's the upgrade path from 4.0.4 to current stable? 
engine-upgrade-check returns no upgrades needed, and that's probably 
correct, as it looks like the repos are from 4.0.


Thanks!


On 01/26/2017 08:26 AM, Simone Tiraboschi wrote:



On Thu, Jan 26, 2017 at 10:16 AM, Ralf Schenk wrote:


Hello,

I would appreciate any hint, too. I've been on 4.0.6 on CentOS 7.3
since yesterday, but I'm worried about what I need to do to upgrade
and be able to manage gluster from the GUI afterwards.


1. Set global maintenance mode,
2. upgrade the engine on the engine VM as for a regular engine,
3. exit global maintenance mode,
4. upgrade the hosts (one at a time!) from the engine

should be enough.

Bye


Am 25.01.2017 um 21:32 schrieb Hanson:

Hi Guys,

Just wondering if we have an updated manual or whats the current
procedure for upgrading the nodes in a hyperconverged ovirt
gluster pool?

Ie Nodes run 4.0 oVirt, as well as GlusterFS, and hosted-engine
running in a gluster storage domain.

Put node in maintenance mode and disable glusterfs from ovirt
gui, run yum update?

Thanks!

___
Users mailing list
Users@ovirt.org 
http://lists.ovirt.org/mailman/listinfo/users



-- 



*Ralf Schenk*
fon +49 (0) 24 05 / 40 83 70 
fax +49 (0) 24 05 / 40 83 759 
mail *r...@databay.de* 

*Databay AG*
Jens-Otto-Krag-Straße 11
D-52146 Würselen
*www.databay.de* 

Sitz/Amtsgericht Aachen • HRB:8437 • USt-IdNr.: DE 210844202
Vorstand: Ralf Schenk, Dipl.-Ing. Jens Conze, Aresch Yavari,
Dipl.-Kfm. Philipp Hermanns
Aufsichtsratsvorsitzender: Wilhelm Dohmen



___
Users mailing list
Users@ovirt.org 
http://lists.ovirt.org/mailman/listinfo/users





___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] oVirt + Gluster Hyperconverged

2016-07-18 Thread Hanson Turner

Hi Fernando,

Nothing spectacular that I have seen, but I'm using 16 GB minimum 
on each node.


You'll probably want to set up your hosted engine with 2 CPUs and 
4096 MB RAM. I believe those are the minimum requirements.


Thanks,

Hanson


On 07/15/2016 09:48 AM, Fernando Frediani wrote:

Hi folks,

I have a few servers with a reasonable amount of raw storage, but they 
are only 3 machines with 8 GB of memory each.
I wanted to run them as an oVirt Hyperconverged + Gluster setup, mainly 
to take advantage of the storage spread between them and to have the 
ability to live-migrate VMs.


Question is: does running Gluster on the same hypervisor nodes consume 
any significant memory, so that not much would be left for running VMs?


Thanks
Fernando
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


--
-
- Network Engineer  -
-
-  Andrews Wireless -
- 671 Durham road 21-
-Uxbridge ON, L9P 1R4   -
-P: 905.852.8896-
-F: 905.852.7587-
- Toll free  (877)852.8896  -
-

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users