this thread;
https://lists.gluster.org/pipermail/gluster-users/2018-December/035460.html
Maybe he suffers from the same issue.
Best Olaf
On Wed, 19 Dec 2018 at 21:56, Olaf Buitelaar wrote:
> Dear All,
>
> It appears I have a stale file handle in one of the volumes, on 2 files. These
> files are
at 20:56, Olaf Buitelaar wrote:
> Dear All,
>
> Until now a selected group of VMs still seems to produce new stale files
> and they get paused due to this.
> I've not updated gluster recently; however, I did change the op-version
> from 31200 to 31202 about a week before this issue
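For reference, the op-version check and bump described above use the standard gluster CLI; a minimal sketch (the target value 31202 comes from the message, and the set command applies cluster-wide):

```shell
# Show the running cluster op-version
gluster volume get all cluster.op-version

# Raise it to the desired value (31202 is the value mentioned above)
gluster volume set all cluster.op-version 31202
```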
@Krutika if you need any further information, please let me know.
Thanks Olaf
On Fri, 4 Jan 2019 at 07:51, Nithya Balachandran wrote:
> Adding Krutika.
>
> On Wed, 2 Jan 2019 at 20:56, Olaf Buitelaar
> wrote:
>
>> Hi Nithya,
>>
>> Thank you for your reply.
.
If there are other criteria needed to identify a stale file handle, I would
like to hear them.
And whether this is a viable and safe operation to perform, of course.
Thanks Olaf
On Thu, 20 Dec 2018 at 13:43, Olaf Buitelaar wrote:
> Dear All,
>
> I figured it out, it appeared to be the exact same issue as
On Wed, 2 Jan 2019 at 14:20, Nithya Balachandran wrote:
>
>
> On Mon, 31 Dec 2018 at 01:27, Olaf Buitelaar
> wrote:
>
>> Dear All,
>>
>> Until now a selected group of VMs still seems to produce new stale files
>> and they get paused due to this.
>> I've
Dear All,
It appears I have stale file handles in one of the volumes, on 2 files. These files
are qemu images (1 raw and 1 qcow2).
I'll just focus on 1 file since the situation on the other seems the same.
The VM gets paused more or less directly after being booted, with the error:
[2018-12-18
9-04-01
> 10:23:21.752432]
>
> I will backport the same.
> Thanks,
> Mohit Agrawal
>
> On Wed, Apr 3, 2019 at 3:58 PM Olaf Buitelaar
> wrote:
>
>> Dear Mohit,
>>
>> Sorry, I thought Krutika was referring to the ovirt-kube brick logs, due
>> to the la
branch for now.
Any chance this will be backported?
Thanks Olaf
On Wed, 19 Jun 2019 at 17:57, Atin Mukherjee wrote:
> Please see - https://bugzilla.redhat.com/show_bug.cgi?id=1655827
>
>
>
> On Wed, Jun 19, 2019 at 5:52 PM Olaf Buitelaar
> wrote:
>
>> Dear
Dear All,
Has anybody seen this error on gluster 5.6:
[glusterd-rpc-ops.c:1388:__glusterd_commit_op_cbk]
(-->/lib64/libgfrpc.so.0(+0xec60) [0x7fbfb7801c60]
-->/usr/lib64/glusterfs/5.6/xlator/mgmt/glusterd.so(+0x79b7a)
[0x7fbfac50db7a]
-->/usr/lib64/glusterfs/5.6/xlator/mgmt/glusterd.so(+0x77393)
s wrong here.
>
> On Wed, Jun 19, 2019 at 10:31 PM Olaf Buitelaar
> wrote:
>
>> Hi Atin,
>>
>> Thank you for pointing out this bug report; however, no rebalancing task
>> was running during this event. So maybe something else is causing this?
>> According to the
rebalancing tasks were running; I also checked the last time one ran,
which was @ 2019-06-08 21:13:02 and completed successfully.
Hopefully this is additional useful info.
Thanks Olaf
On Thu, 20 Jun 2019 at 14:52, Olaf Buitelaar wrote:
> Hi Sanju,
>
> you can download the coredump here;
> http://edg
e issues I've run
> into so far as a new gluster user seem like they're related to shards.
>
> Thanks,
> Tim
> --
> *From:* Olaf Buitelaar
> *Sent:* Wednesday, November 27, 2019 9:50 AM
> *To:* Timothy Orme
> *Cc:* gluster-users
> *Subjec
Hi Tim,
I've been suffering from this too for a long time; I'm not sure if it's exactly
the same situation, since your setup is different, but it seems similar.
I've filed this bug report:
https://bugzilla.redhat.com/show_bug.cgi?id=1732961 which you might be
able to enrich.
To solve the stale
arding.
>
> Best Regards,
> Strahil Nikolov
> On Nov 27, 2019 20:55, Olaf Buitelaar wrote:
>
> Hi Tim,
>
> That issue also seems to point to a stale file. Best, I suppose, is first to
> determine whether you indeed have the same shard on different sub-volumes, where
> on one of
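One rough way to check for the duplicate-shard condition described above is to stat the shard file on the bricks directly; a sketch under assumptions (hypothetical hosts host1/host2 and brick path /bricks/0/gv0 — shards live under the hidden .shard directory on each brick, named <gfid-of-base-file>.<index>):

```shell
# Hypothetical placeholder; take the real GFID from the base file's
# trusted.gfid xattr on a brick
GFID="<gfid-of-base-file>"

# If more than one distribute sub-volume returns a hit for the same
# shard index (possibly with differing sizes), that matches the
# duplicate-shard situation described above
for h in host1 host2; do
  echo "== $h =="
  ssh "$h" "stat /bricks/0/gv0/.shard/${GFID}.1" 2>/dev/null
done
```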
> On January 27, 2020 11:49:08 PM GMT+02:00, Olaf Buitelaar <
> olaf.buitel...@gmail.com> wrote:
> >Dear Gluster users,
> >
> >I'm a bit at a loss here, and any help would be appreciated.
> >
> >I've lost a couple, since the disks suffered from severe XFS error
Dear Gluster users,
I'm a bit at a loss here, and any help would be appreciated.
I've lost a couple of virtual machines, since the disks suffered from severe
XFS errors, and some won't boot because they can't resolve the size of
the image as reported by vdsm:
"VM kube-large-01 is down with
>
> Hi Strahil,
>
In fact I'm running more bricks per host, around 12 bricks per host.
Nonetheless the feature doesn't really seem to work for me, since it starts
a separate glusterfsd process for each brick anyway; actually, after a
reboot or restart of glusterd, multiple glusterfsd processes for the
Hi Robert,
there were several issues with ownership in oVirt; for example, see:
https://bugzilla.redhat.com/show_bug.cgi?id=1666795
Maybe you're encountering these issues during the upgrade process. Also, if
you're using gluster as backend storage, there might be some permission
issues in the 6.7
Hi Alex,
I've been running databases both directly and indirectly through qemu
image VMs (managed by oVirt), and since the recent gluster versions (6+;
haven't tested 7-8) I'm generally happy with the stability. I'm running
mostly write-intensive workloads.
For mariadb, any gluster volume seems
Hi Andreas,
Only step 5 should be enough. Remounting would only be required when all
the bricks change their destination.
Best Olaf
On Tue, 13 Oct 2020 at 11:20, Andreas Kirbach wrote:
> Hi,
>
> I'd like to change the mount point of an arbiter brick in a 1 x (2 + 1)
> = 3 replicated volume,
it, please let me know.
Thanks for your assistance.
Best Olaf
On Thu, 26 Nov 2020 at 11:53, Ravishankar N wrote:
>
> On 26/11/20 4:00 pm, Olaf Buitelaar wrote:
>
> Hi Ravi,
>
> I could try that, but I can only try a setup on VMs, and will not be able
> to set up an environmen
.
Is there anything further I could check or do to get to the bottom of this?
Thanks Olaf
On Wed, 25 Nov 2020 at 14:14, Ravishankar N wrote:
>
> On 25/11/20 5:50 pm, Olaf Buitelaar wrote:
>
> Hi Ashish,
>
> Thank you for looking into this. I indeed also suspect it has something
> tod
on latest version and the start
> your work and see if there is any high usage of memory .
> That way it will also be easier to debug this issue.
>
> ---
> Ashish
>
> --
> *From: *"Olaf Buitelaar"
> *To: *"gluster-users"
>
> Thanks Olaf
>
> On Wed, 25 Nov 2020 at 14:14, Ravishankar N wrote:
>
>>
>> On 25/11/20 5:50 pm, Olaf Buitelaar wrote:
>>
>> Hi Ashish,
>>
>> Thank you for looking into this. I indeed also suspect it has something
>> to do with the 7.X client, b
Hi WK,
I believe gluster 7 just received its latest maintenance patch, version
7.9, so that's a dead end from there on.
I'm not sure whether you can skip a whole version and go straight to 8.
Unfortunately I cannot answer your other questions; hopefully somebody else
can.
Best Olaf
On Thu, 17
It is in their release notes:
https://docs.gluster.org/en/latest/release-notes/7.9/
OK, we've usually had good experiences with in-place updates, but taking the
safe route is always a good idea!
On Thu, 17 Dec 2020 at 19:49, WK wrote:
>
> On 12/17/2020 10:13 AM, Olaf Buitelaar wrote:
>
Hi,
please see:
https://docs.gluster.org/en/latest/Administrator-Guide/Managing-Volumes/
Gluster offers online expansion of the volume; you can add bricks and/or
nodes without taking mariadb offline if you want.
Just use: gluster volume add-brick [vol] [bricks] (bricks must be added
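Filled in with hypothetical names (volume gv0, hosts host4-host6), the expansion could look like the following; for a replica-3 volume, bricks must be added in multiples of the replica count:

```shell
# Add one new brick per replica-set member, then spread existing data
gluster volume add-brick gv0 \
  host4:/bricks/0/gv0 host5:/bricks/0/gv0 host6:/bricks/0/gv0
gluster volume rebalance gv0 start
gluster volume rebalance gv0 status
```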
Hi Ravi,
I would like to avoid an offline upgrade, since it would disrupt quite a few
services.
Is there anything further I can investigate or do?
Thanks Olaf
On Tue, 3 Nov 2020 at 12:17, Ravishankar N wrote:
>
> On 02/11/20 8:35 pm, Olaf Buitelaar wrote:
>
> Dear Gluster users,
>
Dear Gluster users,
I'm trying to upgrade from gluster 6.10 to 7.8; I've currently tried this
on 2 hosts, but on both the Self-Heal Daemon refuses to start.
It could be because not all nodes are updated yet, but I'm a bit hesitant to
continue without the Self-Heal Daemon running.
I'm not using
Dear Schlick,
It's indeed dropped, I think in favor of more native options. The go-to
replacement is lvm-cache (
https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/6/html/logical_volume_manager_administration/lvm_cache_volume_creation
).
This places just an amount of SSD
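A minimal lvm-cache sketch, assuming a hypothetical volume group vg0 holding the brick data on LV vg0/brick1 and a spare SSD at /dev/sdb:

```shell
# Bring the SSD into the volume group
pvcreate /dev/sdb
vgextend vg0 /dev/sdb

# Carve a cache pool out of the SSD and attach it to the data LV
lvcreate --type cache-pool -L 100G -n brick1_cache vg0 /dev/sdb
lvconvert --type cache --cachepool vg0/brick1_cache vg0/brick1
```

The cache can later be detached again with lvconvert --splitcache if needed.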
Hi Jiri,
your problem looks pretty similar to mine; see:
https://lists.gluster.org/pipermail/gluster-users/2021-February/039134.html
Any chance you also see the xfs errors in the brick logs?
For me the situation improved once I disabled brick multiplexing, but I
don't see that in your volume
Jiří Sléžka wrote:
> Hi Olaf,
>
> thanks for reply.
>
> On 7/8/21 3:29 PM, Olaf Buitelaar wrote:
> > Hi Jiri,
> >
> > your probleem looks pretty similar to mine, see;
> >
> https://lists.gluster.org/pipermail/gluster-users/2021-February/039134.html
> >
Hi Erik,
Thanks for sharing your unique use case and solution. It was very
interesting to read your write-up.
I agree with your point; " * Growing volumes, replacing bricks, and
replacing servers work.
However, the process is very involved and quirky for us. I have."
in your use-case 1
Dear Users,
Somehow the brick processes seem to crash on xfs filesystem errors. It
seems to depend on the way the gluster process is started. Also, on this
occurrence gluster sends a message to the console informing that the
process will go down; however, it doesn't really seem to go down:
M [MSGID:
Hi Tom,
I think you could use the replace-brick sub-command, like: "gluster volume
replace-brick :/bricks/0/gv01
:/bricks/0/abc-gv01 commit force"
see:
https://docs.gluster.org/en/latest/Administrator-Guide/Managing-Volumes/#replace-brick
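With the elided hostnames filled in as hypothetical placeholders (server1, volume gv01), the sequence could look like:

```shell
# Swap the brick path in one step; gluster heals onto the new brick
# automatically after the commit
gluster volume replace-brick gv01 \
  server1:/bricks/0/gv01 server1:/bricks/0/abc-gv01 commit force

# Watch the heal catch up on the replaced brick
gluster volume heal gv01 info summary
```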
Kind regards,
Olaf
On Tue, 4 Jan 2022 at 12:50
] sdz1[7] sdv1[3] sdw1[4]
> sdaa1[8] sdy1[6]
> 68364119040 blocks super 1.2 level 6, 512k chunk, algorithm 2 [9/9]
> [U]
> bitmap: 0/73 pages [0KB], 65536KB chunk
>
> Best regards
> Peter
>
> 25.03.2022, 12:36, "Olaf Buitelaar" :
>
> Hi Peter,
Hi Stefan,
I think there are 3 routes:
1. you can use the reset-brick command;
https://docs.gluster.org/en/latest/release-notes/3.9.0/#introducing-reset-brick-command
2. copy the files; make sure the gluster processes are stopped before you
start the copy, and make sure to include all included
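Route 1 above, sketched with hypothetical names (volume gv0, brick server1:/bricks/0/gv0 being rebuilt in place after the disk swap):

```shell
# Take the brick out of service while keeping its config
gluster volume reset-brick gv0 server1:/bricks/0/gv0 start

# ... replace the disk / re-create the filesystem at /bricks/0/gv0 ...

# Re-register the same brick path and trigger a full heal
gluster volume reset-brick gv0 server1:/bricks/0/gv0 \
  server1:/bricks/0/gv0 commit force
gluster volume heal gv0 full
```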
at 00:53, Strahil Nikolov wrote:
> > With replica volumes I prefer to avoid 'reset-brick' in favor of
> > 'remove-brick replica ' and once you replace the ssds
> > 'add-brick replica ' + gluster volume heal full
> >
> > Best Regards,
> > Strahil Nikolov
>
Hi Strahil,
Interesting; would you happen to know how you could let oVirt mount it with
this flag?
Thanks Olaf
On Mon, 4 Apr 2022 at 22:05, Strahil Nikolov wrote:
> * it instructs the kernel to skip 'getfattr'
>
> On Mon, Apr 4, 2022 at 23:04, Strahil Nikolov
> wrote:
> I agree. Also it has
Hi Peter,
I see your raid array is rebuilding; could it be your xfs needs a repair,
using xfs_repair?
Did you try running gluster v start hdd force?
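The two suggestions above, as a sketch under assumptions (hypothetical device /dev/sdX1 backing the brick of volume hdd, mounted at /bricks/hdd):

```shell
# xfs_repair requires an unmounted filesystem
umount /bricks/hdd
xfs_repair /dev/sdX1      # use -L only as a last resort: it zeroes the log
mount /bricks/hdd

# Force-start the volume so any dead brick process is restarted
gluster volume start hdd force
```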
Kind regards,
Olaf
On Thu, 24 Mar 2022 at 15:54, Peter Schmidt <
peterschmidt18...@yandex.com>:
> Hello everyone,
>
> I'm running an