Hi all,
I have a replica 3 with 1 arbiter.
I have seen over the last few days that one file on a volume is always
showing as needing healing:
gluster volume heal vms info
Brick gluster0:/gluster/vms/brick
Status: Connected
Number of entries: 0
Brick gluster1:/gluster/vms/brick
Status: Connected
Number of
I am using gluster 3.8.12, the default on CentOS 7.3
(I will update to 3.10 at some point).
On Sun, Sep 17, 2017 at 11:30 AM, Alex K <rightkickt...@gmail.com> wrote:
> Hi all,
>
> I have a replica 3 with 1 arbiter.
>
> I have seen over the last few days that one file on a volume is always
> showing as needing healing:
I found that this specific gfid was not pointing to any file. I checked this
with the gfid resolver script:
https://gist.github.com/semiosis/4392640
I moved the gfid file out of the brick and all is ok now.
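For reference, each gfid is kept as a hard link under the brick's .glusterfs
directory, so it can also be located manually. A sketch, assuming the brick
path from my setup and a placeholder gfid:

BRICK=/gluster/vms/brick
GFID=<gfid reported by heal info>
# the entry lives at .glusterfs/<first 2 chars>/<next 2 chars>/<full gfid>
ls -l $BRICK/.glusterfs/${GFID:0:2}/${GFID:2:2}/$GFID
# if it resolves to no real file, move it out of the brick
mkdir -p /root/quarantine
mv $BRICK/.glusterfs/${GFID:0:2}/${GFID:2:2}/$GFID /root/quarantine/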
Thanx,
Alex
On Sun, Sep 17, 2017 at 11:31 AM, Alex K <rightkickt...@gmail.com> wrote:
> I am usin
You could trigger a chmod on log rotation.
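For example, with logrotate one could add a postrotate hook. A sketch,
assuming the default glusterfs log location:

/var/log/glusterfs/*.log {
    weekly
    rotate 4
    compress
    postrotate
        # restore read permission after each rotation (path is an assumption)
        chmod 644 /var/log/glusterfs/*.log
    endscript
}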
Alex
On Sep 21, 2017 06:45, "Kaleb S. KEITHLEY" wrote:
> On 09/18/2017 09:22 PM, ABHISHEK PALIWAL wrote:
> > Any suggestion would be appreciated...
> >
> > On Sep 18, 2017 15:05, "ABHISHEK PALIWAL" >
Wouldn't a simple chmod 644 logfile suffice? This will give read
permissions to all.
Otherwise you could change the group ownership (chgrp), give read
permissions to this group (640) and then make the users members of this group.
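Something like the following, where the group name and log path are just
examples:

groupadd logreaders
chgrp logreaders /var/log/glusterfs/glusterd.log
chmod 640 /var/log/glusterfs/glusterd.log
usermod -aG logreaders someuser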
Alex
On Sep 20, 2017 2:37 PM, "ABHISHEK PALIWAL"
I would first check that you have the same version on both servers.
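For example, run this on each server and compare the output:

gluster --version
# on recent releases you can also check the cluster op-version:
gluster volume get all cluster.op-version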
On Oct 18, 2017 6:46 PM, "Ngo Leung" wrote:
> Dear Sir / Madam
>
>
>
> I have been using GlusterFS on both nodes, and it is in distribute
> mode. But I cannot use all of the gluster commands at one
There are several "no space left on device" messages. I would first check
that free disk space is available for the volume.
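For example, on each node check both disk space and inodes on the brick
filesystem, since "no space left on device" can also mean exhausted inodes
(brick path is a placeholder):

df -h /path/to/brick
df -i /path/to/brick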
On Oct 22, 2017 18:42, "Milind Changire" wrote:
> Herb,
> What are the high and low watermarks for the tier set at?
>
> # gluster volume get
In case you do not need any data from the brick, you may append "force" to
the command, as the error message suggests.
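Something along these lines, with volume and brick as placeholders (for a
replicated volume you would also pass the new replica count):

gluster volume remove-brick <VOLNAME> <server>:/path/to/brick force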
Alex
On Nov 15, 2017 11:49, "Rudi Ahlers" wrote:
> Hi,
>
> I am trying to remove a brick, from a server which is no longer part of
> the gluster pool, but I keep
Yes, I would be interested to hear more on the findings. Let us know once
you have them.
On Nov 1, 2017 13:10, "Shyam Ranganathan" wrote:
> On 10/31/2017 08:36 PM, Ben Turner wrote:
>
>> * Erasure coded volumes with sharding - seen as a good fit for VM disk
>> storage
gluster volume heal engine split-brain latest-mtime
gfid:bb675ea6-0622-4852-9e59-27a4c93ac0f8
Healing gfid:bb675ea6-0622-4852-9e59-27a4c93ac0f8 failed: Operation not
permitted.
Volume heal failed.
I would appreciate any help.
thanx,
Alex
On Mon, Feb 5, 2018 at 1:11 PM, Alex K <rightkickt...@gmail.com> wrote:
>
Hi all,
I have a split brain issue and have the following situation:
gluster volume heal engine info split-brain
Brick gluster0:/gluster/engine/brick
/ad1f38d7-36df-4cee-a092-ab0ce1f98ce9/ha_agent
Status: Connected
Number of entries in split-brain: 1
Brick gluster1:/gluster/engine/brick
> Can you give the output of stat & getfattr -d -m . -e hex
> from both the bricks.
>
> Regards,
> Karthik
>
> On Mon, Feb 5, 2018 at 5:03 PM, Alex K <rightkickt...@gmail.com> wrote:
>
>> After stopping/starting the volume I have:
>>
>> gluster volume
Hi,
Have you checked for any file system errors on the brick mount point?
I was once facing weird I/O errors and xfs_repair fixed the issue.
What about the heal? Does it report any pending heals?
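The rough procedure I had followed, as a sketch, with device and mount point
as placeholders:

# stop the volume (or kill the brick process), then unmount the brick
umount /path/to/brick
# dry run first, no modifications
xfs_repair -n /dev/mapper/vg-brick
# then repair and remount
xfs_repair /dev/mapper/vg-brick
mount /path/to/brick
# afterwards check for pending heals
gluster volume heal <VOLNAME> info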
On Feb 15, 2018 14:20, "Dave Sherohman" wrote:
> Well, it looks like I've
Since upgrading oVirt to use gluster 3.12 I have been experiencing memory
leaks, and every week I have to put hosts in maintenance and activate them
again to free memory. I still have this issue and am hoping for a bug fix in
the next releases. I recall a gluster bug is already open for this.
On Aug 2, 2018 18:02,
upgraded?
>
> Regards,
> Nithya
>
> On 3 August 2018 at 16:56, Alex K wrote:
>
>> Hi all,
>>
>> I am using gluster 3.12.9-1 on ovirt 4.1.9 and I have observed
>> consistently high memory use which at some point renders the hosts
>> unresponsive. This behavior is also observed while using 3.12.11-1 with
>> ovirt 4.2.5.
Hi all,
I am using gluster 3.12.9-1 on ovirt 4.1.9 and I have observed consistently
high memory use which at some point renders the hosts unresponsive. This
behavior is also observed while using 3.12.11-1 with ovirt 4.2.5. I did not
have this issue prior to upgrading gluster.
I have seen a
On Aug 3, 2018 at 6:15 PM Alex K wrote:
> >
> > Hi,
> >
> > I was using 3.8.12-1 up to 3.8.15-2. I did not have issues with these
> versions.
> > I still have systems running those with no such memory leaks.
> >
> > Thanx,
> > Alex
>
Hi,
On Fri, Aug 24, 2018, 21:45 Mark Connor wrote:
> Wondering if there is a best practice for volume creation. I don't see
> this information in the documentation. For example.
> I have a 10 node distribute-replicate setup with one large xfs filesystem
> mounted on each node.
>
> Is it OK for
Hi all,
I have a gluster replica 3 setup with several volumes used for ovirt.
One of the volumes was set with NFS enabled.
After upgrading to 3.12.13 the NFS service was disabled on the gluster
volume.
I ran the command to enable NFS and received the following:
gluster volume set iso
6/2018 09:33 AM, Alex K wrote:
> > Hi all,
> >
> > I have a gluster replica 3 setup with several volumes used for ovirt.
> > One of the volumes was set with NFS enabled.
> > After upgrading to 3.12.13 the NFS service was disabled on the gluster
> > volume.
>
On Wed, Apr 11, 2018 at 4:35 AM, TomK <tomk...@mdevsys.com> wrote:
> On 4/9/2018 2:45 AM, Alex K wrote:
> Hey Alex,
>
> With two nodes, the setup works but both sides go down when one node is
> missing. Still I set the below two params to none and that solved my issue:
>
Hi,
You need at least 3 nodes to have quorum enabled. In a 2-node setup you need
to disable quorum so as to still be able to use the volume when one of the
nodes goes down.
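Disabling quorum would look something like the following (volume name is a
placeholder); keep in mind that without quorum you accept the risk of
split-brain:

gluster volume set <VOLNAME> cluster.quorum-type none
gluster volume set <VOLNAME> cluster.server-quorum-type none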
On Mon, Apr 9, 2018, 09:02 TomK wrote:
> Hey All,
>
> In a two node glusterfs setup, with one node down,
I would go with at least 4 HDDs per host in RAID 10. Then focus on network
performance, which is usually where the bottleneck is for gluster.
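To verify whether the network is the limiting factor, one can measure raw
throughput between the nodes with a tool such as iperf3 (hostname is a
placeholder):

# on one node
iperf3 -s
# on another node
iperf3 -c gluster0 -t 30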
On Sat, Mar 24, 2018, 00:44 Jayme wrote:
> Do you feel that SSDs are worth the extra cost or am I better off using
> regular HDDs? I'm looking for
What is your gluster setup? Please share the volume details of where the VMs
are stored. It could be that the slow host is holding the arbiter brick.
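For example:

gluster volume info <VOLNAME>
gluster volume status <VOLNAME>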
Alex
On Feb 26, 2018 13:46, "Ryan Wilkinson" wrote:
> Here is info about the RAID controllers. It doesn't seem to be the culprit.
>
> Slow
> connected at the same switch, gluster was ok.
> Best Regards
> Strahil Nikolov
> On Mar 22, 2019 18:42, Alex K wrote:
>
> Hi all,
>
> I had the opportunity to test the setup on actual hardware, as I managed
> to arrange for downtime at the customer site.
>
> The results were that, w
> separate networks will be used as
> active-backup or active-active.
>
> Someone more experienced should jump in.
>
> Best Regards,
> Strahil Nikolov
> On Feb 25, 2019 12:43, Alex K wrote:
>
> Hi All,
>
> I was asking if it is possible to have the two separate cables conne
hat_gluster_storage/3.4/html/administration_guide/network4
>
> Preferred bonding mode for Red Hat Gluster Storage client is mode 6
> (balance-alb), this allows client to transmit writes in parallel on
> separate NICs much of the time.
>
> Regards,
>
> Jorick Astrego
> On 2/25/19 5:41 AM, Dm
Hi all,
I have a replica 3 setup where each server was configured with dual
interfaces in mode 6 bonding. All cables were connected to one common
network switch.
To add switch redundancy and avoid a single point of failure, I connected
the second cable of each server to a second switch.
but gluster volumes were down due to connectivity issues being reported
(endpoint is not connected). systemctl restart network usually resolved the
gluster connectivity issue. This was regardless of the scenario (interlink
or not). I will need to do some more tests.
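For reference, mode 6 bonding on CentOS was configured roughly as below; a
sketch of the ifcfg files, with device names and addresses as placeholders:

# /etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE=bond0
TYPE=Bond
BONDING_MASTER=yes
BONDING_OPTS="mode=6 miimon=100"
IPADDR=10.0.0.10
PREFIX=24
ONBOOT=yes

# /etc/sysconfig/network-scripts/ifcfg-eth0 (and likewise for eth1)
DEVICE=eth0
MASTER=bond0
SLAVE=yes
ONBOOT=yes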
On Tue, Feb 26, 2019 at 4:14 PM Alex K
What does this error mean?
I don't have the hardware available for testing anymore and will try to
reproduce on a virtual environment.
Thanx
Alex
On Mon, Mar 18, 2019 at 12:52 PM Alex K wrote:
> Performed some tests simulating the setup on OVS.
> When using mode 6 I had mixed results for both scenarios (see
I have been using gluster on top of LVM for several years without any issues.
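A typical brick-on-LVM layout looks like the following sketch, with device
and names as placeholders:

pvcreate /dev/sdb
vgcreate vg_gluster /dev/sdb
lvcreate -n lv_brick1 -l 100%FREE vg_gluster
# inode size 512 is the commonly recommended XFS setting for bricks
mkfs.xfs -i size=512 /dev/vg_gluster/lv_brick1
mkdir -p /gluster/brick1
mount /dev/vg_gluster/lv_brick1 /gluster/brick1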
On Mon, Apr 8, 2019, 10:43 Felix Kölzow wrote:
> Thank you very much for your response.
>
> I fully agree that using LVM has great advantages. Maybe there is a
> misunderstanding,
>
> but I really got the recommendation to not
> I don't see any LVM issues so far.
>
Neither do I.
> Best Regards,
> Strahil Nikolov
> On Apr 8, 2019 21:15, Alex K wrote:
>
> I have been using gluster on top of LVM for several years without any issues.
>
> On Mon, Apr 8, 2019, 10:43 Felix Kölzow wrote:
>
> Thank you very mu
Hi
On Fri, Jun 28, 2019, 17:54 Marcus Schopen wrote:
> Hi,
>
> does anyone have experience with gluster in KVM environments? I would
> like to keep the qcow2 images of a KVM host in sync with a second KVM host.
> Unfortunately, shared storage is not available to me, only the
> two KVM hosts. In
On Thu, Jul 18, 2019, 11:06 Sudheer Singh
wrote:
> Hi ,
>
> I was doing perf testing and found the fuse mount much slower than the NFS
> mount. I was curious to know what the community recommends: mount volumes
> as fuse or NFS?
>
You may need to consider libgfapi instead of fuse. I have seen better
performance with it.
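With qemu, for instance, VM disks can be accessed over libgfapi directly,
bypassing the FUSE layer; host, volume and image names below are
placeholders:

qemu-img create -f qcow2 gluster://gluster0/vms/vm1.qcow2 20G
qemu-system-x86_64 -drive file=gluster://gluster0/vms/vm1.qcow2,format=qcow2 ...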
> tion; performance.strict-o-direct: on, but both sharded and non-sharded
> work in that case too.
> Nonetheless I would advise to run any database with strict-o-direct on.
>
Thanx Olaf for your feedback. Appreciated.
>
> Best Olaf
>
>
> Op ma 12 okt. 2020 om 20:10 schreef Alex K
On Mon, Oct 12, 2020 at 9:47 AM Diego Zuccato
wrote:
> On 10/10/20 16:53, Alex K wrote:
>
> > Reading from the docs i see that this is not recommended?
> IIUC the risk of having partially-unsynced data is too high.
> DB replication is not easy to configure because it
version is much more scalable and simplified.
>
> Best Regards,
> Strahil Nikolov
>
> On Monday, October 12, 2020, 12:12:18 GMT+3, Alex K <
> rightkickt...@gmail.com> wrote:
>
> On Mon, Oct 12, 2020 at 9:
Hi,
I am considering setting up database services on top of GlusterFS.
The databases would be MariaDB or PostgreSQL; I would add InfluxDB later.
I was wondering if it is a good idea to use GlusterFS as database storage,
so as to achieve simple high availability without configuring DB
replication.
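In case it helps: the volume options suggested elsewhere in this thread
would be applied per volume, e.g. (volume name is a placeholder):

gluster volume set dbvol performance.strict-o-direct on
gluster volume set dbvol performance.read-after-open yes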
On Tue, Oct 13, 2020, 23:39 Gionatan Danti wrote:
> On 2020-10-13 21:16 Strahil Nikolov wrote:
> > At least it is a good starting point.
>
> This can also be an interesting read:
>
> https://docs.openshift.com/container-platform/3.11/scaling_performance/optimizing_on_glusterfs_storage.html
> performance.read-after-open=yes
>
> At least it is a good starting point.
>
Interesting indeed! Thanx
>
> Best Regards,
> Strahil Nikolov
>
>
> On Tuesday, October 13, 2020, 21:42:28 GMT+3, Alex K <
> rightkickt...@gmail.com> wrote:
>
> On second thought, DBs know their data file names, so even 1 file per table
> will work quite OK.
>
> But you will need a lot of testing before putting something into
> production.
>
>
> Best Regards,
> Strahil Nikolov
>
> On Monday,
On Mon, Nov 9, 2020, 14:33 Kaleb Keithley wrote:
>
>
> https://docs.gluster.org/en/latest/Administrator%20Guide/NFS-Ganesha%20GlusterFS%20Integration/
> On Mon, Nov 9, 2020 at 3:08 AM Alex K wrote:
>
>>
>> https://docs.gluster.org/en/latest/Administrator%20Gu
gluster volume set ganesha.enable on
I might be missing some step.
> Best Regards,
> Strahil Nikolov
>
> On Monday, November 9, 2020, 10:08:44 GMT+2, Alex K <
> rightkickt...@gmail.com> wrote:
>
> Hi all,
>
Hi all,
I would like to export a gluster volume (which is a replica 3) with NFS so
as to use it for persistent container storage. I can directly use the
gluster storage plugin from docker containers, though it seems that this
approach uses a FUSE mount. I have read that nfs-ganesha is using libgfapi
which
Hi friends,
I have been using gluster for some years, though only as file storage.
I was wondering what the status of block storage through gluster is.
I see the following project:
https://github.com/gluster/gluster-block
Is this still receiving updates and could it be used in production, or is it
> Strahil Nikolov
>
> On Thursday, November 5, 2020, 16:24:10 GMT+2, Alex K <
> rightkickt...@gmail.com> wrote:
>
> Hi friends,
>
> I have been using gluster for some years, though only as file storage.
> I was wondering w
On Fri, Aug 27, 2021, 18:25 Dario Lesca wrote:
> Thanks Andreas,
> I will follow your suggestion to check the order of services via systemd
> For now, I can't substitute RedHat (I use Rocky Linux, not RedHat) with
> Debian or Ubuntu.
>
I would go with the systemd route for better control of the