Yes, I'm on 10.3 on a brand new installation (i.e. no upgrade or anything
of the sort).
Ok, I've finally read how to get core dumps on Debian. Soft limits are 0 by
default, so no core dumps can be generated. I've set the soft ulimit to
unlimited and killed a test process with a SIGSEGV signal, then I was
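For anyone following along, the steps described above look roughly like this in a stock Debian shell (a sketch; the `sleep` process is just a throwaway target, not anything from the affected system):

```shell
# On Debian the soft core-file size limit defaults to 0, which suppresses
# core dumps entirely; raise it for the current shell:
ulimit -S -c unlimited

# Verify the new soft limit:
ulimit -S -c    # prints "unlimited" (if the hard limit allows it)

# Kill a throwaway test process with SIGSEGV, a core-generating signal:
sleep 60 &
kill -SEGV $!
```

Note the limit change only applies to the current shell and its children; daemons started by systemd take their limits from the unit configuration instead.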
I'm not so sure the problem is with sharding. Basically it's saying that
seek is not supported, which means that something between shard and the
bricks doesn't support it. DHT didn't support seek before 10.3, but if I'm
not wrong you are already using 10.3, so the message is weird. But in any
case
Well, that lasted longer, but it crashed once again, same node, same
mountpoint... fortunately, I had preventively moved all the VMs to the
underlying ZFS filesystem these past days, so none of them have been
affected this time...
dmesg shows this
[2022-12-01 15:49:54] INFO: task
I did also notice that loop0... AFAIK, I wasn't using any loop device, at
least not consciously.
After looking for the same messages at the other gluster/proxmox nodes, I
saw no trace of it.
Then I saw that on that node there is a single LXC container, whose disk is
living on the glusterfs, and
What is "loop0"? It seems it's having some issue. Does it point to a
Gluster file?
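The loop0 question can be answered directly on the node: `losetup` lists each active loop device together with its backing file (the example output line below is illustrative of a Proxmox LXC raw image, not taken from the affected machine):

```shell
# Show every active loop device and the file backing it; if loop0 is
# backed by a file on the Gluster mount, its path will appear here:
losetup -a
# Illustrative output:
#   /dev/loop0: []: (/mnt/pve/vmdata/images/101/vm-101-disk-0.raw)
```

If the backing file lives on the Gluster mountpoint, a hung loop device would tie the container's I/O directly to the FUSE mount's health.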
I also see that there's an io_uring thread in D state. If that one belongs
to Gluster, it may explain why systemd was unable to generate a core dump
(all threads need to be stopped to generate a core dump, but a
Well, just happened again, the same server, the same mountpoint.
I'm unable to get the core dumps; coredumpctl says there are no core dumps.
It would be funny if I weren't the one suffering it, but the
systemd-coredump service crashed as well
● systemd-coredump@0-3199871-0.service - Process Core Dump
I've looked in all the places they should be, and I couldn't find them
anywhere. Some people say the dump file is generated where the application
is running... well, I don't know where to look then, and I hope they
weren't generated on the failed mountpoint.
As Debian 11 has
The crash seems related to some problem in the ec xlator, but I don't have
enough information to determine what it is. The crash should have generated
a core dump somewhere in the system (I don't know where Debian keeps the
core dumps). If you find it, you should be able to open it using this
command
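The command itself is cut off in this snippet. On a Debian system using systemd-coredump, one common way to retrieve and open a core is the following (the binary path and output filename are illustrative assumptions, not from the original mail):

```shell
# systemd-coredump stores compressed cores under
# /var/lib/systemd/coredump/ by default; extract the most recent one:
coredumpctl dump --output=core.glfs

# Open it with gdb against the binary that crashed (path is illustrative):
gdb /usr/sbin/glusterfs core.glfs
```

`coredumpctl debug` can also launch gdb on the latest core directly, resolving the binary automatically.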
Hi Xavi,
The OS is Debian 11 with the Proxmox kernel. Gluster packages are the
official ones from gluster.org (
https://download.gluster.org/pub/gluster/glusterfs/10/10.3/Debian/bullseye/)
The system logs showed no other issues at the time of the crash, no OOM
kills or anything of the sort, and no other process
Hi Angel,
On Mon, Nov 21, 2022 at 2:33 PM Angel Docampo
wrote:
> Sorry for necrobumping this, but this morning I've suffered this on my
> Proxmox + GlusterFS cluster. In the log I can see this
>
> [2022-11-21 07:38:00.213620 +0000] I [MSGID: 133017]
> [shard.c:7275:shard_seek] 11-vmdata-shard:
Sorry for necrobumping this, but this morning I've suffered this on my
Proxmox + GlusterFS cluster. In the log I can see this
[2022-11-21 07:38:00.213620 +0000] I [MSGID: 133017]
[shard.c:7275:shard_seek] 11-vmdata-shard: seek called on
fbc063cb-874e-475d-b585-f89f7518acdd. [Operation not supported]
Hi Xavi,
Thank you for that information. We'll look at upgrading it.
On Fri, 12 Mar 2021 at 05:20, Xavi Hernandez wrote:
> Hi David,
>
> with so little information it's hard to tell, but given that there are
> several OPEN and UNLINK operations, it could be related to an already fixed
> bug
Hi David,
with so little information it's hard to tell, but given that there are
several OPEN and UNLINK operations, it could be related to an already fixed
bug (in recent versions) in open-behind.
You can try disabling open-behind with this command:
# gluster volume set &lt;volname&gt; open-behind off
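The change can be confirmed afterwards with `volume get` (the volume name is a placeholder to be replaced with the real one):

```shell
# Check the current value of the option on the affected volume:
gluster volume get <volname> open-behind
# The Value column should now read "off"
```

Disabling open-behind takes effect for new file opens; already-open file descriptors keep their existing behavior until reopened.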
Hello,
We have a GlusterFS 5.13 server which also mounts itself with the native
FUSE client. Recently the FUSE mount crashed and we found the following in
the syslog. There isn't anything logged in mnt-glusterfs.log for that time.
After killing all processes with a file handle open on the