a bug on gluster side.
>
> >
> >
> > De: "Nir Soffer"
> > Para: supo...@logicworks.pt
> > Cc: "users" , "Sahina Bose" ,
> "Krutika Dhananjay" , "Nisan, Tal" >
> > Enviadas: Domingo, 1
There are ansible playbooks that you can use -
https://github.com/gluster/gluster-ansible/tree/master/playbooks/hc-ansible-deployment
On Thu, Sep 3, 2020 at 12:26 AM Michael Thomas wrote:
> Is there a CLI for setting up a hyperconverged environment with
> glusterfs? The docs that I've found
Thanks Strahil.
Adding Sas and Ravi for their inputs.
On Sun, 21 Jun 2020 at 6:11 PM, Strahil Nikolov
wrote:
> Hello Sahina, Sandro,
>
> I have noticed that the ACL issue with Gluster (
> https://github.com/gluster/glusterfs/issues/876) is happening to
> multiple oVirt users (so far at least
ted.
You would need to create a custom ansible playbook that sets up the gluster
volumes and adds the hosts to the existing engine (or create the cluster
and gluster volumes via the engine UI).
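As a rough sketch of the gluster side of such a playbook (the volume name, host names, and brick paths below are placeholders, not from this thread), the core steps reduce to commands like:

```shell
# Sketch only: create a replica-3 volume across three prepared hosts,
# then apply the usual oVirt-friendly volume settings.
# All names and paths are example placeholders.
gluster volume create vmstore replica 3 \
  host1:/gluster_bricks/vmstore/vmstore \
  host2:/gluster_bricks/vmstore/vmstore \
  host3:/gluster_bricks/vmstore/vmstore
gluster volume set vmstore group virt          # virt option group
gluster volume set vmstore storage.owner-uid 36
gluster volume set vmstore storage.owner-gid 36
gluster volume start vmstore
```

The hosts themselves would then be added to the existing engine through the UI as usual.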
> Please let me know.
>
> Thank You
>
> C Williams
>
> On Tue, Jan 29, 2019 a
On Tue, Dec 24, 2019 at 3:26 AM wrote:
> Hi,
> After playing a bit with oVirt and Gluster in our pre-production
> environment for the last year, we have decided to move forward with our
> production design using ovirt 4.3.7 + Gluster in a hyperconverged setup.
>
> For this we are looking get
+Sunny Kumar
On Thu, Dec 12, 2019 at 6:33 AM Strahil wrote:
> Hi Adrian,
>
> Have you checked the passwordless rsync between master and slave volume
> nodes ?
>
> Best Regards,
> Strahil Nikolov
> On Dec 11, 2019 22:36, adrianquint...@gmail.com wrote:
> >
> > Hi,
> > I am trying to setup
On Tue, Nov 12, 2019 at 3:27 AM wrote:
> I guess this is a little late now... but:
>
> I wanted to do the same, especially because the additional servers (beyond
> 3) would lower the relative storage cost overhead when using erasure
> coding, but it's 'off the trodden path' and I cannot
e what is it complaining about
> this time.
>
> Best Regards,
> Strahil Nikolov
>
> >Hi Sahina,
>
> >I have a strange situation:
> >1. When I try to access the file via 'sudo -u vdsm dd if=disk of=test
> bs=4M' the command fails on aprox 60MB.
> >2. If I
9981f67b/images/94f763e9-fd96-4bee-a6b2-31af841a918b/5b1d3113-5cca-4582-9029-634b16338a2f.
Was it reset after upgrade?
Are you able to copy this file to a different location and try running a VM
with this image?
Any errors in the mount log of gluster1:_data__fast volume?
> Best Regards,
> St
You will need to edit to provide the correct device during installation.
Check output of lsblk
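For example (device names here are illustrative, not from the thread):

```shell
# List block devices with filesystem info to pick the correct device.
lsblk -o NAME,SIZE,TYPE,FSTYPE,MOUNTPOINT
# Check a candidate device for leftover signatures before reusing it:
wipefs /dev/sdb
```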
On Mon, Nov 18, 2019 at 5:19 PM wrote:
> Logical Volumes Create new Logical Volume
> 1.35 TiB Pool for Thin Volumes pool00
> 1 GiB ext4 File System /dev/onn_ovirt1/home
> 1.32 TiB Inactive volume
On Mon, Nov 18, 2019 at 2:58 PM Sandro Bonazzola
wrote:
> +Sahina Bose +Gobinda Das +Nir
> Soffer +Tal Nisan can you please
> help here?
>
>
> On Sun, Nov 17, 2019 at 16:00, Strahil Nikolov <
> hunter86...@yahoo.com> wrote:
>
>> So far,
safe to do this on a live
>> system or do all VMs need to be brought down first? How does resetting the
>> brick fix the issue with gluster peers using the server hostnames which are
>> attached to IPs on the ovirtmanagement network?
>>
>> On Thu, Oct 17, 2019 at
On Wed, Oct 16, 2019 at 8:38 PM Jayme wrote:
> Is there a way to fix this on a hci deployment which is already in
> operation? I do have a separate gluster network which is chosen for
> migration and gluster network but when I originally deployed I used just
> one set of host names which
"Host host1.example.com cannot access the Storage Domain(s) attached to the
Data Center Default-DC1."
Can you check the vdsm logs from this host to check why the storage domains
are not attached?
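A quick way to spot the relevant errors on the host (the log path is the vdsm default; the search terms are just common failure markers):

```shell
# Search recent vdsm log entries for storage-domain attach failures.
grep -iE 'connectStorageServer|StorageDomainDoesNotExist|Traceback' \
    /var/log/vdsm/vdsm.log | tail -n 50
```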
On Thu, Oct 17, 2019 at 9:43 AM Strahil wrote:
> Ssh to host and check the status of :
>
On Tue, Oct 8, 2019 at 4:18 PM wrote:
> Hi.
>
> I am confused now.
>
> removed all previous configuration and followed the instructions:
>
>
> https://www.ovirt.org/documentation/gluster-hyperconverged/chap-Single_node_hyperconverged.html
>
> Then I read:
>
> Deploying on oVirt Node based Hosts
>
9-09-24 09:07:27,386-04 WARN
> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn]
> (DefaultQuartzScheduler6) [51432e7f] Could not add brick
> 'mdskvm-p02.nix.mds.xyz:/mnt/p02-d01/glusterv02' to volume
> 'f5b57076-dbd4-4d77-ae13-c1f3ee3adbe0' - server uuid
> 'ad7d956a-a121-
On Mon, Sep 23, 2019 at 6:34 AM TomK wrote:
> Or in other words, how do I remove all resources, clusters, datacenters,
> hosts and readd them under different names?
>
Does this answer your question -
On Mon, Sep 23, 2019 at 9:08 PM Amit Bawer wrote:
> +Sahina Bose
> Thanks for your clarification Julian, creating new Gluster brick from
> ovirt's REST API is not currently supported, only from UI.
>
That's right. We do have a bug tracking this -
https://bugzilla.redhat.com/sho
Ok, so you want to reduce the replica count to 2?
In this case there will not be any data migration. +Ravishankar
Narayanankutty
On Thu, Sep 12, 2019 at 2:12 PM wrote:
> Hi Sahina.
> It was
>
> gluster volume status engine
> Status of volume: engine
> Gluster process
Can you provide the output of gluster volume info from before the remove-brick
was done? It's not clear if you were reducing the replica count or removing
a replica subvolume.
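The two operations look quite different on the CLI; a sketch (host and brick names are placeholders):

```shell
gluster volume info engine   # note the "Number of Bricks: N x R = M" line
# Reducing the replica count (e.g. 3 -> 2) removes one brick per replica set:
gluster volume remove-brick engine replica 2 \
    host3:/gluster_bricks/engine/engine force
# Removing a whole replica subvolume from a distributed-replicate volume
# instead starts data migration:
gluster volume remove-brick engine \
    host4:/gluster_bricks/engine/engine \
    host5:/gluster_bricks/engine/engine \
    host6:/gluster_bricks/engine/engine start
```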
On Wed, Sep 11, 2019 at 4:23 PM wrote:
> Hi.
> There is an ovirt-hosted-engine on gluster volume engine
> Replicate replica
More details on the tests you ran, and gluster profile data captured while
you were running the tests, would help with the analysis.
Similar to my request to another user on thread, you can also help provide
some feedback on the data gathering ansible scripts by trying out
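The profile data referred to above can be captured along these lines (the volume name is an example):

```shell
gluster volume profile data start    # begin collecting per-brick statistics
# ... run the workload / tests ...
gluster volume profile data info > profile-during-test.txt
gluster volume profile data stop
```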
unclear to me if the update
>>> continues to run while the volumes are healing and resumes when they are
>>> done. There doesn’t seem to be any indication in the ui (unless I’m
>>> mistaken)
>>>
>>
>> Adding @Martin Perina , @Sahina Bose
>>and
On Mon, Aug 26, 2019 at 10:57 PM wrote:
> Hi,
> Yes I have glusternet shows the role of "migration" and "gluster".
> Hosts show 1 network connected to management and the other to logical
> network "glusternet"
>
What does "ping host1.example.com" return? Does it return the IP address of
the
+Krutika Dhananjay +Sachidananda URS
Adrian, can you provide more details on the performance issue you're seeing?
We do have some scripts to collect data to analyse. Perhaps you can run
this and provide us the details in a bug. The ansible scripts to do this are
still under review -
On Mon, Aug 19, 2019 at 10:55 PM wrote:
> On my silent Atom based three node Hyperconverged journey I hit upon a
> snag: Evidently they are too slow for Ansible.
>
> The Gluster storage part went all great and perfect on fresh oVirt node
> images that I had configured to leave an empty partition
On Mon, Sep 9, 2019 at 7:22 PM Kaustav Majumder wrote:
> Well almost. Create a new cluster and (Check) Enable Gluster Service .
> Upon adding new hosts to this cluster (via ui) gluster will be
> automatically configured on them.
>
>
> On Mon, Sep 9, 2019 at 6:56 PM wrote:
>
>> So doing this
On Thu, Jul 18, 2019 at 9:02 PM Strahil Nikolov
wrote:
> According to this one (GlusterFS Storage Domain — oVirt) libgfapi support
> is disabled by default due to incompatibility with Live Storage Migration.
> VM can not be migrated to the GlusterFS storage domain.
> I guess someone from the
On Wed, Jul 17, 2019 at 7:50 PM Doron Fediuck wrote:
> Adding relevant folks.
> Sahina?
>
> On Thu, 11 Jul 2019 at 00:24, William Kwan wrote:
>
>> Hi,
>>
>> I need some direction to make sure we won't make more mistake in
>> recovering a 3-node self hosted engine with Gluster.
>>
>> Someone
On Thu, Jul 11, 2019 at 11:15 PM Strahil wrote:
> I'm addding gluster-users as I'm not sure if you can go gluster v3 -> v6
> directly.
>
> Theoretically speaking , there should be no problem - but I don't know
> if you will observe any issues.
>
> @Gluster-users,
>
> Can someone share their
On Mon, Jul 1, 2019 at 2:06 AM Strahil Nikolov
wrote:
> I suspect this limitation is due to support obligations that come with a
> subscription from Red Hat.
> In oVirt , you don't have such agreement and thus no support even with a
> 3-node cluster.
>
>
This is correct.
Gluster scales up and
Did you manage to get past this error?
On Sat, Jun 29, 2019 at 3:32 AM Edward Berger wrote:
> Maybe there is something already on the disk from before?
> gluster setup wants it completely blank, no detectable filesystem, no
> raid, etc.
> see what is there with fdisk -l, see what PVs exist with
On Mon, Jul 8, 2019 at 11:40 AM Parth Dhanjal wrote:
> Hey!
>
> I used cockpit to deploy gluster.
> And the problem seems to be with
> 10.xx.xx.xx9:/engine 50G 3.6G 47G 8%
> /rhev/data-center/mnt/glusterSD/10.70.41.139:_engine
>
> Engine volume has 500G available
On Mon, Jun 24, 2019 at 11:39 AM Robert Crawford <
robert.crawford4.14...@gmail.com> wrote:
> Hey Everyone,
>
> When in the server manager and creating a brick from the storage device
> the brick will fail whenever i attach a cache device to it.
>
> I'm not really sure why? It just says unknown.
On Thu, Jun 20, 2019 at 11:17 AM Николаев Алексей <
alexeynikolaev.p...@yandex.ru> wrote:
> Hi community!
>
> Is it possible to continue using independent gluster 3.12 as a data domain
> with engine 4.3.5?
>
Yes.
On Tue, Jun 4, 2019 at 3:26 PM Strahil wrote:
> Hello All,
>
> I would like to ask how many of you use VDO before asking the oVirt Devs
> to assess a feature in oVirt for monitoring the size of the VDOs on
> hyperconverged systems.
>
> I think such warning, will save a lot of headaches, but it
+Sachidananda URS
On Wed, May 22, 2019 at 1:14 AM wrote:
> I'm sorry, i'm still working on my linux knowledge, here is the output of
> my blkid on one of the servers:
>
> /dev/nvme0n1: PTTYPE="dos"
> /dev/nvme1n1: PTTYPE="dos"
> /dev/mapper/eui.6479a71892882020: PTTYPE="dos"
>
On Monday, 20 May 2019 at 14:56:11 GMT+3, Sahina Bose <
> sab...@redhat.com> wrote:
>
>
> To scale existing volumes - you need to add bricks and run rebalance on
> the gluster volume so that data is correctly redistributed as Alex
> mentioned.
> We do support expandi
"},
> "msg": "Creating physical volume '/dev/sdd' failed", "rc": 5}
> failed: [vmm12.virt.iad3p] (item={u'vgname': u'gluster_vg_sdd', u'pvname':
> u'/dev/sdd'}) => {"changed": false, "err": " Device /dev/sdd excluded by a
> filter.\n",
To scale existing volumes - you need to add bricks and run rebalance on the
gluster volume so that data is correctly redistributed as Alex mentioned.
We do support expanding existing volumes as the bug
https://bugzilla.redhat.com/show_bug.cgi?id=1471031 has been fixed
As to procedure to expand
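A sketch of the expand procedure (volume, host, and brick names are placeholders):

```shell
# Add one more replica set to an existing replica-3 volume...
gluster volume add-brick data replica 3 \
    host4:/gluster_bricks/data/data \
    host5:/gluster_bricks/data/data \
    host6:/gluster_bricks/data/data
# ...then redistribute existing data across the new bricks.
gluster volume rebalance data start
gluster volume rebalance data status
```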
On Sun, May 19, 2019 at 4:11 PM Strahil wrote:
> I would recommend you to postpone your upgrade if you use gluster
> (without the API) , as creation of virtual disks via UI on gluster is
> having issues - only preallocated can be created.
>
+Gobinda Das +Satheesaran Sundaramoorthi
Sas, can
Adding Sachi
On Thu, May 9, 2019 at 2:01 AM wrote:
> This only started to happen with oVirt node 4.3; 4.2 didn't have this issue.
> Since I updated to 4.3, every reboot the host goes into emergency mode.
> First few times this happened I re-installed O/S from scratch, but after
> some digging I
On Sun, 19 May 2019 at 12:21 AM, Nir Soffer wrote:
> On Fri, May 17, 2019 at 7:54 AM Gobinda Das wrote:
>
>> From RHHI side default we are setting below volume options:
>>
>> { group: 'virt',
>> storage.owner-uid: '36',
>> storage.owner-gid: '36',
>> network.ping-timeout: '30',
On Fri, 17 May 2019 at 2:13 AM, Strahil Nikolov
wrote:
>
> >This may be another issue. This command works only for storage with 512
> bytes sector size.
>
> >Hyperconverge systems may use VDO, and it must be configured in
> compatibility mode to >support
> >512 bytes sector size.
>
> >I'm not
This looks like a bug displaying status in the UI (similar to
https://bugzilla.redhat.com/show_bug.cgi?id=1381175 ?). Could you also
attach engine logs from the timeframe that you notice the issue in UI.
Do all nodes in the cluster return peer status as Connected? (Engine logs
will help determine
On Tue, Oct 4, 2016 at 9:51 PM, Hanson wrote:
> Running iperf3 between node1 & node2, I can achieve almost 10gbps without
> ever going out to the gateway...
>
> So switching between port to port on the switch is working properly on the
> vlan.
>
> This must be a problem in the gluster settings?
2818>), in state , has disconnected from glusterd.
> Thanks
>
>
>
> *From:* Sahina Bose [mailto:sab...@redhat.com]
> *Sent:* 05 October 2016 08:11
> *To:* Jason Jeffrey ; gluster-us...@gluster.org;
> Ravishankar Narayanankutty
> *Cc:* Simone Tiraboschi ;
[Adding gluster-users ML]
The brick logs are filled with errors :
[2016-10-05 19:30:28.659061] E [MSGID: 113077]
[posix-handle.c:309:posix_handle_pump] 0-engine-posix: malformed internal
link
Rafi, can you take a look?
On Mon, May 6, 2019 at 10:29 PM wrote:
>
> this is what I see in the logs when I try to add RDMA:
>
> [2019-05-06 16:54:50.305297] I [MSGID: 106521]
> [glusterd-op-sm.c:2953:glusterd_op_set_volume] 0-management: changing
> transport-type for volume storage_ssd to
On Tue, Apr 30, 2019 at 3:35 PM Sandro Bonazzola
wrote:
>
>
> On Fri, Apr 26, 2019 at 01:56, wrote:
>
>> When I was able to load CentosOS as a host OS, Was able to use RDMA, but
>> it seems like the 4.3x branch of nodeNG is missing RDMA support? I enabled
>> rdma and started
There seem to be communication issues between vdsmd and supervdsmd
services. Can you check the status of both on the nodes? Perhaps try
restarting these
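On each affected node, something like:

```shell
# Inspect both daemons; restart supervdsmd first, then vdsmd.
systemctl status supervdsmd vdsmd
systemctl restart supervdsmd vdsmd
journalctl -u supervdsmd -u vdsmd --since "30 min ago"   # recent errors, if any
```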
On Tue, Apr 23, 2019 at 6:01 PM wrote:
>
> I decided to add another cluster to the existing data center (Enable Virt
> Service + Enable
a32 c3 d3
>
> //Magnus
>
>
> ____
> From: Sahina Bose
> Sent: 16 April 2019 10:55
> To: Magnus Isaksson
> Cc: users
> Subject: Re: [ovirt-users] Gluster suggestions
>
> On Tue, Apr 16, 2019 at 1:42 PM wrote:
> >
> > Hello
> >
> > I would l
-only
>
> mount: wrong fs type, bad option, bad superblock on /dev/loop0,
>
>missing codepage or helper program, or other error
>
>
>
> In some cases useful info is found in syslog - try
>
>dmesg | tail or so.
>
> [root@kvm360 /]#
>
>
>
On Tue, Apr 16, 2019 at 1:07 AM Stefan Wolf wrote:
>
> Hello all,
>
>
>
> after a power loss the hosted engine won't start up anymore.
>
> I've the current oVirt installed.
>
> Storage is glusterfs and it is up and running
>
>
>
> It is trying to start up hosted engine but it does not work, but I
On Tue, Apr 16, 2019 at 1:42 PM wrote:
>
> Hello
>
> I would like some suggestions on what type of solution with Gluster i should
> use.
>
> I have 4 hosts with 3 disks each, i want to user as much space as possible
> but also some redundancy, like raid5 or 6
> The 4 hosts are running oVirt on
On Tue, Apr 16, 2019 at 1:39 PM Leo David wrote:
>
> Hi Everyone,
> I have wrongly configured the main gluster volume ( 12 identical 1tb ssd
> disks, replica 3 distributed-replicated, across 6 nodes - 2 per node ) with
> arbiter one.
> Oviously I am wasting storage space in this scenario with
On Wed, Apr 3, 2019 at 5:33 PM Николаев Алексей
wrote:
>
> Hi community!
>
> I have issue like this https://bugzilla.redhat.com/show_bug.cgi?id=1506373 on
> ovirt-engine 4.2.8.2-1.el7.
>
> Description of problem:
> VM disk left in LOCKED state when added.
>
> Version-Release number of selected
On Wed, Apr 10, 2019 at 8:51 PM wrote:
>
> Is it possible to change a self-hosted engine's storage domain's settings?
>
> I setup a 3-node ovirt + gluster cluster with a dedicated 'engine' storage
> domain. I can see through the administration portal that the engine storage
> domain is using a
On Wed, Apr 10, 2019 at 1:45 AM Ricardo Alonso wrote:
>
> After installing the second host via the web gui (4.3.2.1-1.el7), it fails to
> activate telling that wasn't possible to connect to the storage pool default
> (glusterfs). Those are the logs:
>
> vdsm.log
>
> 2019-04-09 15:54:07,409-0400
>
Any calls to "START connectStorageServer" in vdsm.log?
> Should I perform an "engine-cleanup", delete lvms from Cockpit and start it
> all over ?
I doubt that would resolve the issue, since you did clean up files from the mount.
> Did anyone succes
Is it possible you have not cleared the gluster volume between installs?
What's the corresponding error in vdsm.log?
On Tue, Apr 2, 2019 at 4:07 PM Leo David wrote:
>
> And there it is the last lines on the ansible_create_storage_domain log:
>
> 2019-04-02 10:53:49,139+0100 DEBUG var changed:
On Tue, Apr 2, 2019 at 12:07 PM Sandro Bonazzola
wrote:
> Thanks to the 143 participants to oVirt Survey 2019!
> The survey is now closed and results are publicly available at
> https://bit.ly/2JYlI7U
> We'll analyze collected data in order to improve oVirt thanks to your
> feedback.
>
> As a
On Fri, Mar 29, 2019 at 6:02 PM wrote:
>
> Hi,
>
> Any help?
>
> Thanks
>
> José
>
>
> From: supo...@logicworks.pt
> To: "users"
> Sent: Wednesday, March 27, 2019 11:21:41 AM
> Subject: Actual size bigger than virtual size
>
> Hi,
>
> I have an all in one ovirt
On Fri, Mar 29, 2019 at 3:29 AM Arsène Gschwind
wrote:
> On Thu, 2019-03-28 at 12:18 +, Arsène Gschwind wrote:
>
> On Wed, 2019-03-27 at 12:19 +0530, Sahina Bose wrote:
>
> On Wed, Mar 27, 2019 at 1:59 AM Arsène Gschwind
>
> <
>
> arsene.gschw...@unibas.ch
>
On Wed, Mar 27, 2019 at 7:40 PM Indivar Nair wrote:
>
> Hi Strahil,
>
> Ok. Looks like sharding should make the resyncs faster.
>
> I searched for more info on it, but couldn't find much.
> I believe it will still have to compare each shard to determine whether there
> are any changes that need
On Wed, Mar 27, 2019 at 1:59 AM Arsène Gschwind
wrote:
>
> On Tue, 2019-03-26 at 18:09 +0530, Sahina Bose wrote:
>
> On Tue, Mar 26, 2019 at 3:00 PM Kaustav Majumder <
>
> kmaju...@redhat.com
>
> > wrote:
>
>
> Let me rephrase
>
>
>
+Krutika Dhananjay and gluster ml
On Tue, Mar 26, 2019 at 6:16 PM Sander Hoentjen wrote:
>
> Hello,
>
> tl;dr We have disk corruption when doing live storage migration on oVirt
> 4.2 with gluster 3.12.15. Any idea why?
>
> We have a 3-node oVirt cluster that is both compute and gluster-storage.
You will first need to restore connectivity between the gluster peers
for heal to work. So restart glusterd on all hosts as Strahil
mentioned, and check if "gluster peer status" returns the other nodes
as connected. If not, please check the glusterd log to see what's
causing the issue. Share the
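In shell terms, on each host:

```shell
systemctl restart glusterd
gluster peer status    # peers should report "Peer in Cluster (Connected)"
# If a peer stays disconnected, check the glusterd log for the cause:
tail -n 100 /var/log/glusterfs/glusterd.log
```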
Can you check the gluster mount logs to see if there are storage-related
errors?
For the VM that's paused, check which storage domain and gluster
volume the OS disk is on. For instance, if the name of the gluster
volume is data, check the logs under
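Fuse-mount logs on oVirt hosts are named after the mount path, so for a domain on a volume named data served from a host called gluster1 (names here are examples) the check would look roughly like:

```shell
# The mount log name is the mount point with '/' replaced by '-':
grep ' E ' \
  "/var/log/glusterfs/rhev-data-center-mnt-glusterSD-gluster1:_data.log" \
  | tail -n 50   # recent gluster error-level entries
```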
Perina - do you know if this is possible?
> Regards,
> Levin
>
>
> On 18/3/2019, 17:40, "Sahina Bose" wrote:
>
> On Sun, Mar 17, 2019 at 12:56 PM wrote:
> >
> > Hi, I had experience two time of 3-node hyper-converged 4.2.8 ovirt
> cluster t
On Sun, Mar 17, 2019 at 12:56 PM wrote:
>
> Hi, I have twice experienced a total outage of a 3-node hyper-converged
> 4.2.8 oVirt cluster due to vdsm reactivating the unresponsive node and
> causing multiple glusterfs daemon restarts. As a result, all VMs were
> paused and some of the disk images were
+Denis Chapligin
On Wed, Mar 6, 2019 at 2:03 PM Robert O'Kane wrote:
>
> Hello,
>
> With my first "in Ovirt" made Gluster Storage I am getting some annoying
> Warnings.
>
> On the Hypervisor(s) engine.log :
>
> 2019-03-05 13:07:45,281+01 INFO
>
We do have an updated rpm gluster-ansible-roles. +Sachidananda URS
On Sun, Mar 10, 2019 at 7:00 PM Hesham Ahmed wrote:
>
> sac-gluster-ansible is there and is enabled:
>
> [sac-gluster-ansible]
> enabled=1
> name = Copr repo for gluster-ansible owned by sac
> baseurl =
>
+Gobinda Das +Dhanjal Parth
On Mon, Mar 11, 2019 at 1:42 AM wrote:
>
> Hello, I am trying to run a hyperconverged setup "Configure gluster storage
> and ovirt hosted engine", however I get the following error
>
>
Adding gluster ml
On Mon, Mar 4, 2019 at 7:17 AM Guillaume Pavese
wrote:
>
> I got that too, so I upgraded to gluster6-rc0, but still, this morning one engine
> brick is down :
>
> [2019-03-04 01:33:22.492206] E [MSGID: 101191]
> [event-epoll.c:765:event_dispatch_epoll_worker] 0-epoll: Failed to
irtError ('virDomainCreateWithFlags()
failed', dom=self)
libvirtError: Unable to acquire lock: No space left on
device [2019-02-28
On Fri, Mar 1, 2019 at 1:08 PM Mike Lykov wrote:
>
> 01.03.2019 9:51, Sahina Bose пишет:
> > Any errors in vdsm.log or gluster mo
Any errors in vdsm.log or gluster mount log for this volume?
On Wed, Feb 27, 2019 at 1:07 PM Mike Lykov wrote:
>
>
> Hi all. I have a HCI setup, glusterfs 3.12, ovirt 4.2.7, 4 nodes
>
> Yesterday I see 3 VMs detected by engine as "not responding" (it is marked as
> HA VMs)
> (it all located on
On Wed, Feb 27, 2019 at 4:06 PM Guillaume Pavese
wrote:
>
> Hi, I tried again today to deploy HE on Gluster with oVirt 4.3.1 RC2 on a
> clean Nested environment (no precedent deploy attempts to clean before...).
>
> Gluster was deployed without problem from cockpit.
> I then snapshoted my vms
r
> accsessing the volume.
> Please correct me if this is wrong.
> Have a nice day,
In single instance deployments too, the option ensures all writes
(with o-direct flag) are flushed to disk and not cached.
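For reference, the option is toggled per volume (the volume name is an example):

```shell
gluster volume set data performance.strict-o-direct on
gluster volume get data performance.strict-o-direct   # verify current value
```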
>
> Leo
>
>
> On Tue, Feb 26, 2019, 08:24 Sahina Bose wrote:
>>
On Fri, Sep 14, 2018 at 3:35 PM Paolo Margara
wrote:
> Hi,
>
> but performance.strict-o-direct is not one of the option enabled by
> gdeploy during installation because it's supposed to give some sort of
> benefit?
>
See
On Mon, Feb 25, 2019 at 2:51 PM matteo fedeli wrote:
>
> oh, ovirt-engine-appliance, where can I find it?
ovirt-engine-appliance rpm is present in the oVirt repo
(https://resources.ovirt.org/pub/ovirt-4.3/rpm/el7/x86_64/ for 4.3)
>
> In the end I waited 3 hours in total (isn't that too much?) and the
On Thu, Feb 21, 2019 at 8:47 PM wrote:
>
> Hello,
> I have a 3 node ovirt 4.3 cluster that I've setup and using gluster
> (Hyperconverged setup)
> I need to increase the amount of storage and compute so I added a 4th host
> (server4.example.com) if it is possible to expand the amount of bricks
u can log a bug with these logs, that would be great - please use
https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS to log the
bug.
>
> Jason aka Tristam
>
>
> On Feb 14, 2019, at 1:12 AM, Sahina Bose wrote:
>
> On Thu, Feb 14, 2019 at 2:39 AM Ron Jerome wrote:
&
This looks like a bug, if you selected JBOD but it is not reflected in
the generated gdeploy config file. +Gobinda Das ?
On Fri, Feb 22, 2019 at 8:59 PM Sandro Bonazzola wrote:
>
>
>
> On Fri, Feb 22, 2019, 15:31 matteo fedeli wrote:
>>
>> sorry, but I don't understand...
>
>
>
> I added to the
+Gobinda Das +Dhanjal Parth can you please check?
On Fri, Feb 22, 2019 at 11:52 PM Matthew Roth wrote:
>
> I have 3 servers, Node 1 is 3tb /dev/sda, Node 2, 3tb /dev/sdb, node3 3tb
> /dev/sdb
>
> I start the process for gluster deployment. I change node 1 to sda and all
> the other ones to
On Thu, Feb 21, 2019 at 7:47 PM wrote:
>
> Sorry if this seems simple, but trial and error is how I learn. So the
> basics. I installed Node 4.3 on 3 hosts, and was following the setup for
> self-hosted engine. The setup fails when detecting peers and indicates that
> they are already part of
The options set on the gluster volume are tuned for data consistency and
reliability.
Some of the changes that you can try
1. use gfapi - however this will not provide you HA if the server used to
access the gluster volume is down. (the backup-volfile-servers are not used
in case of gfapi). You
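If you do want to experiment with gfapi, enabling it is an engine-side setting; a sketch, run on the engine machine (and note the HA caveat above):

```shell
# Enable libgfapi disk access cluster-wide, then restart the engine.
engine-config -s LibgfApiSupported=true
systemctl restart ovirt-engine
```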
You can interrupt hosted engine setup and continue from it the next time.
Please download the ovirt-engine-appliance rpm prior to install to
speed things up.
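Pre-fetching the appliance is a one-liner on the host (package name as shipped in the oVirt repos):

```shell
# Download the engine appliance ahead of hosted-engine deployment.
yum install -y ovirt-engine-appliance
```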
On Mon, Feb 25, 2019 at 4:56 AM matteo fedeli wrote:
>
> after several attempts I managed to install and deploy the ovel gluster
> but
On Thu, Feb 14, 2019 at 8:24 PM Jayme wrote:
> https://bugzilla.redhat.com/show_bug.cgi?id=1677160 doesn't seem relevant
> to me? Is that the correct link?
>
> Like I mentioned in a previous email I'm also having problems with Gluster
> bricks going offline since upgrading to oVirt 4.3
You can edit per host in the cockpit UI if you have non-uniform hosts.
If you still run into issues, please paste the generated gdeploy
config file to check
On Wed, Feb 13, 2019 at 8:54 PM Edward Berger wrote:
>
> I don't believe the wizard followed your wishes if it comes up with 1005gb
> for
On Thu, Feb 14, 2019 at 4:56 AM wrote:
>
> I'm abandoning my production ovirt cluster due to instability. I have a 7
> host cluster running about 300 vms and have been for over a year. It has
> become unstable over the past three days. I have random hosts both, compute
> and storage
On Thu, Feb 14, 2019 at 2:39 AM Ron Jerome wrote:
>
>
> >
> > Can you be more specific? What things did you see, and did you report bugs?
>
> I've got this one: https://bugzilla.redhat.com/show_bug.cgi?id=1649054
> and this one: https://bugzilla.redhat.com/show_bug.cgi?id=1651246
> and I've got
d 300MBps, the VM is pushing 280MBps average. Both using XFS.
>
> So why is ovirt's guest disc performance (native and gluster) so poor? Why
> is it consistently giving me about 1/10th to 1/80th of the hosts disc
> throughput?
>
>
>
>
> On Mon, Feb 11, 2019 at 5:01 AM Sahina Bose w
On Tue, Feb 12, 2019 at 10:51 AM Endre Karlson
wrote:
> It's an upgrade from 4.2.x to the latest version of the 4.2 series. I upgraded by
> adding the 4.3 repo and doing the steps on the upgrade guide page
> https://www.ovirt.org/release/4.3.0/#centos--rhel
>
Seems like you're running into
On Wed, Feb 6, 2019 at 10:45 PM feral wrote:
> On that note, this was already reported several times a few months back,
> but apparently was fixed in gdeploy-2.0.2-29.el7rhgs.noarch. I'm guessing
> ovirt-node-4.3 just hasn't updated to that version yet?
>
+Niels de Vos +Sachidananda URS
Any
On Fri, Feb 8, 2019 at 11:31 AM Aravinda wrote:
>
> Looks like Python 3 porting issue. I will work on the fix soon. Thanks
Do we have a bug in gluster to track this?
>
>
> On Thu, 2019-02-07 at 13:27 +0530, Sahina Bose wrote:
> > +Aravinda Vishwanathapura Krishna Murth
On Wed, Feb 6, 2019 at 4:17 PM Jorick Astrego wrote:
> Hi again,
>
> When using the option "Optimize for Virt store", I get the following error:
>
> 2019-02-06 10:25:02,353+01 ERROR
> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
>
+Aravinda Vishwanathapura Krishna Murthy can you take a look? oVirt
4.3 has Gluster 5
On Wed, Feb 6, 2019 at 7:35 PM Edward Berger wrote:
>
> I upgraded some nodes from 4.28 to 4.3 and now when I look at the cockpit
> "services"
> tab I see a red failure for Gluster Events Notifier and clicking
On Thu, Feb 7, 2019 at 8:57 AM Edward Berger wrote:
>
> I'm seeing migration failures for the hosted-engine VM from a 4.28 node to a
> 4.30 node so I can complete the node upgrades.
You may be running into
https://bugzilla.redhat.com/show_bug.cgi?id=1641798. Can you check the
version of libvirt
+Sachidananda URS to review user request about systemd mount files
On Tue, Feb 5, 2019 at 10:22 PM feral wrote:
>
> Using SystemD makes way more sense to me. I was just trying to use ovirt-node
> as it was ... intended? Mainly because I have no idea how it all works yet,
> so I've been trying