On Mon, Aug 6, 2018 at 10:17 PM, Jayme wrote:
> Just wanted to comment on this again. Today I rebuilt my oVirt environment
> as I wanted to change disk/volume layout one final time before making use
> of the cluster. I downloaded the most recent oVirt node image linked off
> the oVirt site and
On Tue, Aug 7, 2018 at 8:23 PM, Jayme wrote:
> Recently built a three host HCI with oVirt node 4.2.5. I am seeing the
> following error in each host's syslog often. What does it mean, and how can
> it be corrected?
>
Adding Denis. Can you check if the vdo module is available on your hosts?
The metho
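A minimal way to check, as a sketch (VDO's kernel component is the kvdo
module; this assumes the vdo and kmod-kvdo packages are installed):

    # Is the kvdo kernel module loaded?
    lsmod | grep kvdo
    # If not, try loading it
    modprobe kvdo
    # Confirm the vdo management CLI is present
    vdo list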
On Fri, Aug 10, 2018 at 10:08 PM, wrote:
> today our network administration did some upgrades on the networking
> equipment, so the engine vlan went down for a while. Afterwards when it
> came back up, 3 hosts were found non-responding. I couldn't see anything
> suspicious on the hosts, the prob
On Thu, Aug 23, 2018 at 10:18 PM, Simone Tiraboschi
wrote:
>
>
> On Thu, Aug 23, 2018 at 4:58 PM Gianluca Cecchi
> wrote:
>
>> On Thu, Aug 23, 2018 at 4:07 PM femi adegoke
>> wrote:
>>
>>> Simone,
>>> https://www.ovirt.org/documentation/gluster-hyperconverged/chap-Single_node_hyperconverge
On Mon, Aug 27, 2018 at 5:51 PM, Robert O'Kane wrote:
> I had a bug report in Bugzilla for Gluster being killed due to a memory
> leak. The Gluster people say it is fixed in gluster-3.12.13.
>
> When will oVirt have this update? I am getting tired of having to restart
> my hypervisors every week
On Mon, Aug 27, 2018 at 5:49 PM, Donny Davis wrote:
> I just spun up the latest and greatest oVirt has to offer, and I am
> building out an HCI cluster. The deployment went wonderfully. I had DNS
> set up for everything, and it just worked. Great job team!
>
> I just wanted to add in something i n
On Tue, Aug 28, 2018 at 10:19 PM, Jayme wrote:
> Is there an updated guide for setting up GlusterFS geo-replication? What
> I am interested in is having another oVirt setup on a separate server with
> a GlusterFS volume replicated to it. If my primary cluster went down I would
> be able to start
he same error. Not sure what causes it to use that interface.
> Please help!
>
> But I give the engine an IP of 192.168.1.10 same subnet as my gateway and
> my ovirtmgmt bridge. Attached is the ifconfig output of my Node, engine.log
> and vdsm.log.
>
> Your assistance is always a
Did you see the
https://ovirt.org/develop/release-management/features/gluster/gluster-geo-replication/#create-a-new-geo-replication-session
?
You can set up a new session from oVirt only if you are managing the remote
(slave) gluster cluster from oVirt as well. Otherwise you can either use
gluster
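A minimal sketch of the gluster CLI route mentioned above, with placeholder
volume and host names:

    # Create, start, and check a geo-replication session from the primary
    # volume to a remote (slave) volume
    gluster volume geo-replication mastervol remotehost::slavevol create push-pem
    gluster volume geo-replication mastervol remotehost::slavevol start
    gluster volume geo-replication mastervol remotehost::slavevol status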
On Tue, Sep 11, 2018 at 6:47 AM, Keith Winn
wrote:
> Hi,
>
> I am looking at using oVirt on a single machine. I used to use the
> all-in-one setup a while ago. I have only two VMs, and setting up two
> servers and storage is out of my budget right now. The all-in-one
> solution sounds good, b
On Tue, Sep 11, 2018 at 2:13 AM, wrote:
> It seems that a VM with 3 disks (boot disk in domain engine, another disk in
> domain vol1, and a third in domain v3) became non-responsive when one gluster
> host went down.
> To explain a bit the situation I have 3 glusterfs hosts with 3 volumes
> hosts are g1,g
On Tue, Sep 11, 2018 at 1:52 PM, Sahina Bose wrote:
>
>
> On Tue, Sep 11, 2018 at 2:13 AM, wrote:
>
>> It seems that a VM with 3 disks (boot disk in domain engine, another disk in
>> domain vol1, and a third in domain v3) became non-responsive when one gluster
>> host went
On Wed, Sep 26, 2018 at 4:47 PM Simon Nussbaum wrote:
> Dear all
>
> We have been very happily running a hyperconverged gluster oVirt setup since
> the beginning of 2016. Because we couldn't afford 3 well-equipped servers,
> we've set up a replica 3 gluster cluster with one arbiter. Back then the
> gluster i
There are ansible playbooks that you can use -
https://github.com/gluster/gluster-ansible/tree/master/playbooks/hc-ansible-deployment
On Thu, Sep 3, 2020 at 12:26 AM Michael Thomas wrote:
> Is there a CLI for setting up a hyperconverged environment with
> glusterfs? The docs that I've found d
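For the CLI route, the playbooks above are the usual answer. A hedged sketch
of invoking them (the playbook and inventory file names here are assumptions;
check the repository README for the actual names):

    # Hypothetical invocation of the hyperconverged deployment playbook
    ansible-playbook -i gluster_inventory.yml hc_deployment.yml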
e this is a bug on gluster side.
>
> >
> >
> > From: "Nir Soffer"
> > To: supo...@logicworks.pt
> > Cc: "users" , "Sahina Bose" ,
> "Krutika Dhananjay" , "Nisan, Tal" >
> > Sent: Sun
+Krutika Dhananjay and gluster ml
On Tue, Mar 26, 2019 at 6:16 PM Sander Hoentjen wrote:
>
> Hello,
>
> tl;dr We have disk corruption when doing live storage migration on oVirt
> 4.2 with gluster 3.12.15. Any idea why?
>
> We have a 3-node oVirt cluster that is both compute and gluster-storage.
>
On Wed, Mar 27, 2019 at 1:59 AM Arsène Gschwind
wrote:
>
> On Tue, 2019-03-26 at 18:09 +0530, Sahina Bose wrote:
>
> On Tue, Mar 26, 2019 at 3:00 PM Kaustav Majumder <kmaju...@redhat.com> wrote:
>
> Let me rephrase
>
On Wed, Mar 27, 2019 at 7:40 PM Indivar Nair wrote:
>
> Hi Strahil,
>
> Ok. Looks like sharding should make the resyncs faster.
>
> I searched for more info on it, but couldn't find much.
> I believe it will still have to compare each shard to determine whether there
> are any changes that need t
On Fri, Mar 29, 2019 at 3:29 AM Arsène Gschwind
wrote:
> On Thu, 2019-03-28 at 12:18 +, Arsène Gschwind wrote:
>
> On Wed, 2019-03-27 at 12:19 +0530, Sahina Bose wrote:
>
> On Wed, Mar 27, 2019 at 1:59 AM Arsène Gschwind <arsene.gschw...@unibas.ch>
On Fri, Mar 29, 2019 at 6:02 PM wrote:
>
> Hi,
>
> Any help?
>
> Thanks
>
> José
>
>
> From: supo...@logicworks.pt
> To: "users"
> Sent: Wednesday, March 27, 2019 11:21:41 AM
> Subject: Actual size bigger than virtual size
>
> Hi,
>
> I have an all-in-one oVirt 4.
On Tue, Apr 2, 2019 at 12:07 PM Sandro Bonazzola
wrote:
> Thanks to the 143 participants in oVirt Survey 2019!
> The survey is now closed and the results are publicly available at
> https://bit.ly/2JYlI7U
> We'll analyze the collected data in order to improve oVirt based on your
> feedback.
>
> As a fir
Is it possible you have not cleared the gluster volume between installs?
What's the corresponding error in vdsm.log?
On Tue, Apr 2, 2019 at 4:07 PM Leo David wrote:
>
> And here are the last lines of the ansible_create_storage_domain log:
>
> 2019-04-02 10:53:49,139+0100 DEBUG var changed: h
2e6-8ccb05ae9e09 (api:54)
>
Any calls to "START connectStorageServer" in vdsm.log?
> Should I perform an "engine-cleanup", delete the LVs from Cockpit, and start
> it all over?
I doubt that would resolve the issue, since you did clean up files from the mount.
> Did a
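As a sketch, one way to check for those calls on the host (standard vdsm log
location on oVirt hosts):

    # Search the vdsm log for storage server connection attempts
    grep 'START connectStorageServer' /var/log/vdsm/vdsm.log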
On Wed, Apr 10, 2019 at 1:45 AM Ricardo Alonso wrote:
>
> After installing the second host via the web GUI (4.3.2.1-1.el7), it fails to
> activate, reporting that it wasn't possible to connect to the default storage
> pool (glusterfs). Those are the logs:
>
> vdsm.log
>
> 2019-04-09 15:54:07,409-0400
On Wed, Apr 10, 2019 at 8:51 PM wrote:
>
> Is it possible to change a self-hosted engine's storage domain's settings?
>
> I set up a 3-node oVirt + gluster cluster with a dedicated 'engine' storage
> domain. I can see through the administration portal that the engine storage
> domain is using a s
On Wed, Apr 3, 2019 at 5:33 PM Николаев Алексей
wrote:
>
> Hi community!
>
> I have an issue like this https://bugzilla.redhat.com/show_bug.cgi?id=1506373 on
> ovirt-engine 4.2.8.2-1.el7.
>
> Description of problem:
> VM disk left in LOCKED state when added.
>
> Version-Release number of selected co
On Tue, Apr 16, 2019 at 1:39 PM Leo David wrote:
>
> Hi Everyone,
> I have wrongly configured the main gluster volume (12 identical 1TB SSD
> disks, replica 3 distributed-replicated, across 6 nodes - 2 per node) as an
> arbiter volume.
> Obviously I am wasting storage space in this scenario with th
On Tue, Apr 16, 2019 at 1:42 PM wrote:
>
> Hello
>
> I would like some suggestions on what type of solution with Gluster I should
> use.
>
> I have 4 hosts with 3 disks each. I want to use as much space as possible,
> but also have some redundancy, like RAID 5 or 6.
> The 4 hosts are running oVirt on
On Tue, Apr 16, 2019 at 1:07 AM Stefan Wolf wrote:
>
> Hello all,
>
>
>
> after a power loss the hosted engine won't start up anymore.
>
> I have the current oVirt installed.
>
> Storage is glusterfs and it is up and running
>
>
>
> It is trying to start up hosted engine but it does not work, but I
read-only
>
> mount: wrong fs type, bad option, bad superblock on /dev/loop0,
>
>missing codepage or helper program, or other error
>
>
>
> In some cases useful info is found in syslog - try
>
>dmesg | tail or so.
>
> [root@kvm360 /]#
>
>
c2 d2
a32 c3 d3
>
> //Magnus
>
>
>
> From: Sahina Bose
> Sent: 16 April 2019 10:55
> To: Magnus Isaksson
> Cc: users
> Subject: Re: [ovirt-users] Gluster suggestions
>
> On Tue, Apr 16, 2019 at 1:42 PM wrote:
> >
> > Hello
> >
> >
There seem to be communication issues between the vdsmd and supervdsmd
services. Can you check the status of both on the nodes? Perhaps try
restarting them; a sketch follows after the quoted message.
On Tue, Apr 23, 2019 at 6:01 PM wrote:
>
> I decided to add another cluster to the existing data center (Enable Virt
> Service + Enable Gluster
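A minimal sketch of the check and restart suggested above (run on each
affected node):

    # Check both services, then restart them
    systemctl status vdsmd supervdsmd
    systemctl restart supervdsmd vdsmd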
On Tue, Apr 30, 2019 at 3:35 PM Sandro Bonazzola
wrote:
>
>
> On Fri, Apr 26, 2019 at 01:56, wrote:
>
>> When I was able to load CentOS as a host OS, I was able to use RDMA, but
>> it seems like the 4.3.x branch of Node NG is missing RDMA support? I enabled
>> rdma and started t
Rafi, can you take a look?
On Mon, May 6, 2019 at 10:29 PM wrote:
>
> this is what I see in the logs when I try to add RDMA:
>
> [2019-05-06 16:54:50.305297] I [MSGID: 106521]
> [glusterd-op-sm.c:2953:glusterd_op_set_volume] 0-management: changing
> transport-type for volume storage_ssd to tcp,
[Adding gluster-users ML]
The brick logs are filled with errors:
[2016-10-05 19:30:28.659061] E [MSGID: 113077]
[posix-handle.c:309:posix_handle_pump] 0-engine-posix: malformed internal
link
/var/run/vdsm/storage/0a021563-91b5-4f49-9c6b-fff45e85a025/d84f0551-0f2b-457c-808c-6369c6708d43/1b5a5e34-81
2818>), in state , has disconnected from glusterd.
> Thanks
>
>
>
> *From:* Sahina Bose [mailto:sab...@redhat.com]
> *Sent:* 05 October 2016 08:11
> *To:* Jason Jeffrey ; gluster-us...@gluster.org;
> Ravishankar Narayanankutty
> *Cc:* Simone Tiraboschi ;
On Tue, Oct 4, 2016 at 9:51 PM, Hanson wrote:
> Running iperf3 between node1 & node2, I can achieve almost 10gbps without
> ever going out to the gateway...
>
> So switching from port to port on the switch is working properly on the
> VLAN.
>
> This must be a problem in the gluster settings? W
This looks like a bug displaying status in the UI (similar to
https://bugzilla.redhat.com/show_bug.cgi?id=1381175 ?). Could you also
attach engine logs from the timeframe in which you noticed the issue in the UI.
Do all nodes in the cluster return peer status as Connected? (Engine logs
will help determine
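A sketch of the peer check (run on each node; every peer should report
State: Peer in Cluster (Connected)):

    gluster peer status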
On Fri, 17 May 2019 at 2:13 AM, Strahil Nikolov
wrote:
>
> >This may be another issue. This command works only for storage with 512
> bytes sector size.
>
> >Hyperconverge systems may use VDO, and it must be configured in
> compatibility mode to >support
> >512 bytes sector size.
>
> >I'm not sur
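For reference, a sketch of creating a VDO volume in 512-byte compatibility
mode, as the quoted text describes (device and volume names are placeholders):

    # --emulate512 makes the VDO volume expose 512-byte logical sectors
    vdo create --name=vdo_gluster --device=/dev/sdb --emulate512=enabled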
On Sun, 19 May 2019 at 12:21 AM, Nir Soffer wrote:
> On Fri, May 17, 2019 at 7:54 AM Gobinda Das wrote:
>
>> From RHHI side default we are setting below volume options:
>>
>> { group: 'virt',
>> storage.owner-uid: '36',
>> storage.owner-gid: '36',
>> network.ping-timeout: '30',
>>
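A sketch of applying the options quoted above to a volume by hand (the quoted
list is cut off by the archive snippet; VOLNAME is a placeholder):

    gluster volume set VOLNAME group virt
    gluster volume set VOLNAME storage.owner-uid 36
    gluster volume set VOLNAME storage.owner-gid 36
    gluster volume set VOLNAME network.ping-timeout 30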
Adding Sachi
On Thu, May 9, 2019 at 2:01 AM wrote:
> This only started to happen with oVirt node 4.3; 4.2 didn't have this issue.
> Since I updated to 4.3, every reboot the host goes into emergency mode.
> The first few times this happened I re-installed the OS from scratch, but after
> some digging I found
On Sun, May 19, 2019 at 4:11 PM Strahil wrote:
> I would recommend postponing your upgrade if you use gluster
> (without the API), as creation of virtual disks via the UI on gluster is
> having issues - only preallocated disks can be created.
>
+Gobinda Das +Satheesaran Sundaramoorthi
Sas, can
To scale existing volumes - you need to add bricks and run rebalance on the
gluster volume so that data is correctly redistributed as Alex mentioned.
We do support expanding existing volumes as the bug
https://bugzilla.redhat.com/show_bug.cgi?id=1471031 has been fixed
As to procedure to expand vol
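A sketch of that expand procedure for a replica 3 volume (host names and
brick paths are placeholders):

    # Add one new replica-3 subvolume, then redistribute data across bricks
    gluster volume add-brick VOLNAME replica 3 \
        host4:/gluster/brick host5:/gluster/brick host6:/gluster/brick
    gluster volume rebalance VOLNAME start
    gluster volume rebalance VOLNAME status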
0.virt.iad3p] (item={u'vgname': u'gluster_vg_sdd', u'pvname':
> u'/dev/sdd'}) => {"changed": false, "err": " Device /dev/sdd excluded by a
> filter.\n", "item": {"pvname": "/dev/sdd", "vgname
eturn True
> Best Regards,
> Strahil Nikolov
>
> On Monday, May 20, 2019 at 14:56:11 GMT+3, Sahina Bose <
> sab...@redhat.com> wrote:
>
>
> To scale existing volumes - you need to add bricks and run rebalance on
> the gluster volume so that data is corr
+Sachidananda URS
On Wed, May 22, 2019 at 1:14 AM wrote:
> I'm sorry, I'm still working on my Linux knowledge; here is the output of
> my blkid on one of the servers:
>
> /dev/nvme0n1: PTTYPE="dos"
> /dev/nvme1n1: PTTYPE="dos"
> /dev/mapper/eui.6479a71892882020: PTTYPE="dos"
> /dev/mapper/eui.0
On Tue, Jun 4, 2019 at 3:26 PM Strahil wrote:
> Hello All,
>
> I would like to ask how many of you use VDO before asking the oVirt Devs
> to assess a feature in oVirt for monitoring the size of the VDOs on
> hyperconverged systems.
>
> I think such a warning will save a lot of headaches, but it w
On Thu, Jun 20, 2019 at 11:17 AM Николаев Алексей <
alexeynikolaev.p...@yandex.ru> wrote:
> Hi community!
>
> Is it possible to continue using independent gluster 3.12 as a data domain
> with engine 4.3.5?
>
Yes.
On Mon, Jun 24, 2019 at 11:39 AM Robert Crawford <
robert.crawford4.14...@gmail.com> wrote:
> Hey Everyone,
>
> When in the server manager and creating a brick from the storage device,
> the brick will fail whenever I attach a cache device to it.
>
> I'm not really sure why - it just says unknown.
>
On Mon, Jul 8, 2019 at 11:40 AM Parth Dhanjal wrote:
> Hey!
>
> I used cockpit to deploy gluster.
> And the problem seems to be with
> 10.xx.xx.xx9:/engine 50G 3.6G 47G 8%
> /rhev/data-center/mnt/glusterSD/10.70.41.139:_engine
>
> Engine volume has 500G available
Did you manage to get past this error?
On Sat, Jun 29, 2019 at 3:32 AM Edward Berger wrote:
> Maybe there is something already on the disk from before?
> gluster setup wants it completely blank, no detectable filesystem, no
> raid, etc.
> see what is there with fdisk -l, see what PVs exist with
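A sketch of that inspection, plus the cleanup step if the disk really should
be blank (/dev/sdX is a placeholder; wipefs is destructive):

    fdisk -l /dev/sdX      # leftover partition table?
    pvs                    # existing LVM physical volumes?
    wipefs -a /dev/sdX     # DESTRUCTIVE: clears filesystem/RAID signatures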
On Mon, Jul 1, 2019 at 2:06 AM Strahil Nikolov
wrote:
> I suspect this limitation is due to support obligations that come with a
> subscription from Red Hat.
> In oVirt, you don't have such an agreement and thus no support even with a
> 3-node cluster.
>
>
This is correct.
Gluster scales up and ou
On Thu, Jul 11, 2019 at 11:15 PM Strahil wrote:
> I'm adding gluster-users as I'm not sure if you can go gluster v3 -> v6
> directly.
>
> Theoretically speaking , there should be no problem - but I don't know
> if you will observe any issues.
>
> @Gluster-users,
>
> Can someone share their tho
On Wed, Jul 17, 2019 at 7:50 PM Doron Fediuck wrote:
> Adding relevant folks.
> Sahina?
>
> On Thu, 11 Jul 2019 at 00:24, William Kwan wrote:
>
>> Hi,
>>
>> I need some direction to make sure we won't make more mistakes in
>> recovering a 3-node self-hosted engine with Gluster.
>>
>> Someone care
On Thu, Jul 18, 2019 at 9:02 PM Strahil Nikolov
wrote:
> According to this one (GlusterFS Storage Domain — oVirt), libgfapi support
> is disabled by default due to incompatibility with Live Storage Migration.
> A VM cannot be migrated to the GlusterFS storage domain.
> I guess someone from the dev
On Mon, Sep 9, 2019 at 7:22 PM Kaustav Majumder wrote:
> Well, almost. Create a new cluster and check Enable Gluster Service.
> Upon adding new hosts to this cluster (via the UI), gluster will be
> automatically configured on them.
>
>
> On Mon, Sep 9, 2019 at 6:56 PM wrote:
>
>> So doing this see
On Mon, Aug 19, 2019 at 10:55 PM wrote:
> On my silent Atom-based three-node hyperconverged journey I hit upon a
> snag: evidently they are too slow for Ansible.
>
> The Gluster storage part went all great and perfect on fresh oVirt node
> images that I had configured to leave an empty partition
+Krutika Dhananjay +Sachidananda URS
Adrian, can you provide more details on the performance issue you're seeing?
We do have some scripts to collect data to analyse. Perhaps you can run
this and provide us the details in a bug. The ansible scripts to do this are
still under review -
https://gith
On Mon, Aug 26, 2019 at 10:57 PM wrote:
> Hi,
> Yes, I have glusternet, which shows the roles of "migration" and "gluster".
> Hosts show 1 network connected to management and the other to logical
> network "glusternet"
>
What does "ping host1.example.com" return? Does it return the IP address of
the net
ear to me if the update
>>> continues to run while the volumes are healing and resumes when they are
>>> done. There doesn’t seem to be any indication in the ui (unless I’m
>>> mistaken)
>>>
>>
>> Adding @Martin Perina , @Sahina Bose
>>and
More details on the tests you ran, and gluster profile data collected while
you were running the tests, can help with the analysis.
Similar to my request to another user on the thread, you can also help provide
some feedback on the data gathering ansible scripts by trying out
https://github.com/gluster/gluster-ansible
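A sketch of collecting profile data around a test run (VOLNAME is a
placeholder):

    gluster volume profile VOLNAME start
    # ... run the workload under test ...
    gluster volume profile VOLNAME info > profile_during_test.txt
    gluster volume profile VOLNAME stop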
Can you provide the output of gluster volume info from before the remove-brick
was done? It's not clear if you were reducing the replica count or removing
a replica subvolume.
On Wed, Sep 11, 2019 at 4:23 PM wrote:
> Hi.
> There is an ovirt-hosted-engine on gluster volume engine
> Replicate replica
Ok, so you want to reduce the replica count to 2?
In this case there will not be any data migration. +Ravishankar
Narayanankutty
On Thu, Sep 12, 2019 at 2:12 PM wrote:
> Hi Sahina.
> It was
>
> gluster volume status engine
> Status of volume: engine
> Gluster process
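For reference, a sketch of the replica reduction discussed above - dropping
one brick per subvolume takes a replica 3 volume to replica 2, with no data
migration involved (the brick path is a placeholder):

    gluster volume remove-brick engine replica 2 host3:/gluster/engine/brick force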
On Mon, Sep 23, 2019 at 9:08 PM Amit Bawer wrote:
> +Sahina Bose
> Thanks for your clarification, Julian; creating a new Gluster brick from
> oVirt's REST API is not currently supported, only from the UI.
>
That's right. We do have a bug tracking this -
https://bugzilla.redh
On Mon, Sep 23, 2019 at 6:34 AM TomK wrote:
> Or in other words, how do I remove all resources, clusters, datacenters,
> hosts and readd them under different names?
>
Does this answer your question -
https://access.redhat.com/documentation/en-us/red_hat_hyperconverged_infrastructure_for_virtuali
ters:{hostId='0c3943d4-b95a-41f4-bb9c-30731128e057'}),
>
> log id: 34d8d3d9
> 2019-09-24 09:07:27,386-04 WARN
> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn]
> (DefaultQuartzScheduler6) [51432e7f] Could not add brick
> 'mdskvm-p02.nix.mds.xyz:/mnt/p02
On Tue, Oct 8, 2019 at 4:18 PM wrote:
> Hi.
>
> I am confused now.
>
> removed all previous configuration and followed the instructions:
>
>
> https://www.ovirt.org/documentation/gluster-hyperconverged/chap-Single_node_hyperconverged.html
>
> Then I read:
>
> Deploying on oVirt Node based Hosts
> oVirt
"Host host1.example.com cannot access the Storage Domain(s) attached to the
Data Center Default-DC1."
Can you check the vdsm logs from this host to check why the storage domains
are not attached?
On Thu, Oct 17, 2019 at 9:43 AM Strahil wrote:
> Ssh to host and check the status of :
> sanlock.s
On Wed, Oct 16, 2019 at 8:38 PM Jayme wrote:
> Is there a way to fix this on an HCI deployment which is already in
> operation? I do have a separate gluster network, which is chosen for
> migration and gluster traffic, but when I originally deployed I used just
> one set of host names which resolve
is it safe to do this on a live
>> system or do all VMs need to be brought down first? How does resetting the
>> brick fix the issue with gluster peers using the server hostnames which are
>> attached to IPs on the ovirtmgmt network?
>>
>> On Thu, Oct 17, 201
On Tue, Oct 2, 2018 at 4:16 PM Artem Tambovskiy
wrote:
> Hi,
>
> Just ran into an issue during cluster upgrade from 4.2.4 to 4.2.6.1. I'm
> running a small cluster with 2 hosts and gluster storage. Once I upgraded one
> of the hosts to 4.2.6.1 something went wrong (looks like it tried to start
> HE
Can you provide the vdsm.log and supervdsm.log covering the relevant timeframe?
Adding Kaustav to look into this
On Fri, Oct 5, 2018 at 11:00 AM Maton, Brett
wrote:
>
> I'm seeing the following errors appear in the event log every 10 minutes
> for each participating host in the gluster cluster
>
> GetGlus
lib/yajsonrpc/__init__.py#L351
>> On Mon, Oct 8, 2018 at 7:32 AM Maton, Brett
>> wrote:
>> >
>> > Sure, log attached this one does have the JSON_RPC errors in it.
>> >
>> > Thanks,
>> > Brett
>> >
>> > On Mon, 8 Oct 2018 at 06:08,
On Tue, Oct 16, 2018 at 5:01 AM Ravi Chalasani wrote:
> I already have a working three node 4.2.5 HA cluster in the lab using this
> guide:
> https://ovirt.org/blog/2017/04/up-and-running-with-ovirt-4.1-and-gluster-storage/
> .
>
> Now I would like to test a six or nine node cluster mentioned in the
not, we'll need to dig into this a bit more.
Or are you talking of the services enabled on Cluster?
>
> Thanks,
>
> Paul S.
> --
> *From:* Sahina Bose
> *Sent:* 16 October 2018 08:28
> *To:* rk...@humboldt.edu
> *Cc:* users
&
nfigure it at the cluster level so I was initially confused
> as to why it didn't work.
>
>
> I have been trying to find an option for otopi.
>
>
> Thanks,
>
> Paul S.
>
>
> --
> *From:* Sahina Bose
> *Se
On Tue, Oct 16, 2018 at 11:39 PM Spickiy Nikita
wrote:
> Hi, I have an oVirt instance (4.2.1.6-1.el7.centos) with a cluster using
> gluster. Hosts periodically become non-responsive and VMs stop responding.
> Usually it happens after getting the message "command GetGlusterVolumeHealInfoVDS
> failed: Message t
On Tue, Oct 16, 2018 at 6:54 PM Gianluca Cecchi
wrote:
> Hello,
> I would like to clean up a 3-host HCI install that didn't complete.
> If I connect to the cockpit of the second host, it seems like an initial
> config screen.
> But if I connect to the cockpit of the first host, the one I used in the first
> att
On Wed, Oct 17, 2018 at 6:42 PM wrote:
> Hi,
>
> Anyone with experience with VDO on hyperconverged oVirt 4.2.7?
> Should I force thin provisioning for the LVs in gdeploy's conf in order to have
> working gluster snapshots?
>
> I am not sure about the status of dedup in oVirt 4.2 hyperconverged.
>
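Gluster volume snapshots do require bricks on thinly provisioned LVs, so
forcing thin provisioning in the gdeploy conf is the right direction. A
minimal LVM sketch with placeholder VG/LV names and sizes:

    # Thin pool, then a thin LV for the brick, then a filesystem on it
    lvcreate -L 1T -T gluster_vg/brick_pool
    lvcreate -V 900G -T gluster_vg/brick_pool -n brick1
    mkfs.xfs /dev/gluster_vg/brick1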
On Wed, Oct 17, 2018 at 7:15 PM wrote:
> Thank you for this information.
> I guess I should at least wait for that bug to be resolved before
> deploying in production. Do you have the bugzilla reference so I could
> track it?
>
https://bugzilla.redhat.com/show_bug.cgi?id=1600156
On Thu, Oct 18, 2018 at 4:39 AM Ravi Linux wrote:
> Let's try this question again since my last thread devolved into someone
> else's question that had nothing to do with this topic.
>
> Is there a way to set up a new oVirt-Gluster hyperconverged
> infrastructure with six nodes?
>
> All the doc
On Fri, Oct 19, 2018 at 7:06 PM TomK wrote:
> Hey All,
>
> Is there a newer package of glusterfs-gnfs available for GlusterFS 4.1?
> After upgrading to GlusterFS 4.1, all the hosts are now disconnected
> from the Ovirt engine.
>
No, and no plans to add it - see
https://www.spinics.net/lists/glus
On Sun, Oct 28, 2018 at 5:17 PM fsoyer wrote:
>
>
> Well guys,
> I can say now that I have a real problem, maybe between oVirt and gluster
> storage, but I can't be sure. Yesterday, I wanted to clone a VM (named
> "crij2") from a snapshot, but (this is another problem I think) the UI never
> ga
2018-10-25 01:21:07,944+0200 INFO (libvirt/events) [virt.vm]
(vmId='14fb9d79-c603-4691-b19e-9133c6bd5e22') abnormal vm stop device
ua-134c4848-6897-46fc-b346-dd4a180ac653 error eio (vm:5158)
2018-10-25 01:21:07,944+0200 INFO (libvirt/events) [virt.vm]
(vmId='14fb9d79-c603-4691-b19e-9133c6bd5e22') C
On Thu, Nov 8, 2018 at 8:13 PM Simone Tiraboschi wrote:
>
> Hi,
> adding also Sahina here.
> AFAIK it should be enabled by default in hyper-converged deployments.
>
> Can you please grep your deployment logs for ENABLE_LIBGFAPI?
No, libgfapi access is disabled by default due to lack of HA
(https:
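A sketch of both checks - grepping the deployment logs as suggested, and
querying the engine (the log path may vary by version; LibgfApiSupported is
the engine-config key):

    grep -r ENABLE_LIBGFAPI /var/log/ovirt-hosted-engine-setup/
    engine-config -g LibgfApiSupported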
On Fri, Nov 9, 2018 at 3:42 AM Dev Ops wrote:
>
> The switches above our environment had some VPC issues and the port channels
> went offline. The ports that had issues belonged to 2 of the gfs nodes in our
> environment. We have 3 storage nodes total with the 3rd being the arbiter. I
> wound u
On Wed, Nov 7, 2018 at 1:39 PM Sandro Bonazzola wrote:
>
>
> On Tue, Nov 6, 2018 at 17:27, Simone Coter <
> simon.co...@oracle.com> wrote:
>
>> Quick question on this:
>>
>> is there a solution/procedure to get from a standard installation to a
>> hyperconverged one?
>> Like if I
On Mon, Nov 5, 2018 at 8:09 PM Sandro Bonazzola wrote:
>
>
>
> On Sun, Nov 4, 2018 at 16:24, Jarosław Prokopowski
> wrote:
>>
>> Hi Guys,
>>
>> I would like to use GlusterFS distributed-replicated with arbiter volume on
>> 4 nodes for oVirt.
>> Can you tell me what tuning paramet
On Thu, Nov 22, 2018 at 5:51 PM Marco Lorenzo Crociani
wrote:
>
> Hi,
> I opened a bug on gluster because I have reading errors on files on a
> gluster volume:
> https://bugzilla.redhat.com/show_bug.cgi?id=1652548
>
> The files are many of the VMs images of the oVirt DATA storage domain.
> oVirt p
On Tue, Nov 20, 2018 at 5:56 PM Abhishek Sahni
wrote:
> Hello Team,
>
> We are running a 3-way replica HC gluster setup configured during
> the initial deployment from the cockpit console using ansible.
>
> NODE1
> - /dev/sda (OS)
> - /dev/sdb ( Gluster Bricks )
>* /glust
On Tue, Nov 13, 2018 at 4:46 PM fsoyer wrote:
>
> Hi all,
> I continue to try to understand my problem between (I suppose) oVirt and
> Gluster.
> After my recent posts titled 'VMs unexpectedly restarted', which did not
> provide a solution nor a search idea, I submit to you another (related?) proble
he 'Reset
>>> Brick' option is active.
>>> Attached is a screenshot -> https://i.imgur.com/QUMSrzt.png
>>>
>>> On Tue, Nov 27, 2018 at 2:43 PM Abhishek Sahni <
>>> abhishek.sahni1...@gmail.com> wrote:
>>&
On Thu, Nov 29, 2018 at 10:36 PM florentl wrote:
>
> Hi everybody,
> I'm currently setting up an oVirt solution.
> I have three servers with glusterfs. They run hosted engine.
>
> I configured the power management of the nodes to use the idrac agent (I
> have Dell servers).
> The communication is ok
On Tue, Dec 4, 2018 at 11:32 AM Abhishek Sahni
wrote:
> Hello Team,
>
>
> We are running a 3-way replica HC gluster setup configured during
> the initial deployment from the cockpit console using ansible.
>
> NODE1
> - /dev/sda (OS)
> - /dev/sdb ( Gluster Bricks )
>* /glu
I think you may be running into
https://bugzilla.redhat.com/show_bug.cgi?id=1651516
On Thu, Dec 6, 2018 at 7:30 PM wrote:
>
> Hi,
>
> tried to setup hyperconverged with glusterfs. I used three i7 with two 1TB
> Disks and two NICs. Everythin worked fine till:
>
> [ INFO ] TASK [Set Engine public
n_hosts?
>
> Or do I miss something ?
>
>
> Am Freitag, den 07.12.2018, 12:17 +0530 schrieb Sahina Bose:
> > I think you may be running into
> > https://bugzilla.redhat.com/show_bug.cgi?id=1651516
> >
> > On Thu, Dec 6, 2018 at 7:30 PM
> > wrote:
Do you see the image on the gluster volume mount? Can you provide the
gluster volume options and version of gluster?
On Wed, 19 Dec 2018 at 4:04 PM, wrote:
> Hi,
>
> I have an all-in-one installation with 2 gluster volumes.
> The disk of one VM filled up the brick, which is a partition. That
> pa
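A sketch of gathering what was asked for above (VOLNAME and the mount path
are placeholders):

    gluster volume info VOLNAME
    gluster --version
    ls -lh /rhev/data-center/mnt/glusterSD/<server>:_VOLNAME/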
> glusterfs-server-3.8.12-1.el7.x86_64
> glusterfs-libs-3.8.12-1.el7.x86_64
> glusterfs-3.8.12-1.el7.x86_64
>
>
> Thanks
>
> José
>
>
> From: "Sahina Bose"
> To: supo...@logicworks.pt
> Cc: "users"
> Sent:
On Mon, Dec 17, 2018 at 2:01 PM wrote:
>
> Hello everyone,
>
> I've installed oVirt on 8 nodes of a MacroServer (SuperMicro Microcloud): 7
> Nodes with oVirt Node installed and 1 node with CentOS 7 and oVirt installed.
> The last one works as both hypervisor and node.
>
> I would use all the storag