[Gluster-users] Evergrowing distributed volume question

2021-03-19 Thread nux
Hello, A while ago I attempted and failed to maintain an "evergrowing" storage solution based on GlusterFS. I was relying on a distributed non-replicated volume to host backups and so on, with the idea that when it was close to full I would just add another brick (server) and keep it going like
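Growing a distributed volume that way comes down to an add-brick followed by a rebalance; a minimal sketch, assuming a hypothetical volume named backupvol and a new server exporting /bricks/backup1:

# Add the new server's brick to the existing distributed volume
gluster volume add-brick backupvol newserver:/bricks/backup1
# Rebalance so existing data and the layout are spread onto the new brick
gluster volume rebalance backupvol start
gluster volume rebalance backupvol status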

Re: [Gluster-users] Gluster usage scenarios in HPC cluster management

2021-03-19 Thread Ewen Chan
Erik: Thank you for sharing your insights in terms of how Gluster is used in a professional, production environment (or at least how HPE is using it, internally and/or for your clients). I really appreciated reading this. I am just a home lab user and I have a very tiny 4-node micro

Re: [Gluster-users] Gluster usage scenarios in HPC cluster management

2021-03-19 Thread Erik Jacobson
> - Gluster sizing
> * We typically state compute nodes per leader but this is not for
> gluster per se. Squashfs image objects are very efficient and
> probably would be fine for 2k nodes per leader. Leader nodes provide
> other services including console logs, system logs, and

Re: [Gluster-users] Volume not healing

2021-03-19 Thread Diego Zuccato
On 19/03/21 13:17, Strahil Nikolov wrote:
> find /FUSE/mountpoint -exec stat {} \;
Running it now (redirecting stdout to /dev/null). It's finding quite a lot of "no such file or directory" errors.
-- Diego Zuccato, DIFA - Dip. di Fisica e Astronomia, Servizi Informatici, Alma Mater Studiorum
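For reference, a variant of that traversal that keeps just the errors for later inspection (mount point as in the quoted command; the log path below is only an example):

find /FUSE/mountpoint -exec stat {} \; > /dev/null 2> /tmp/stat-errors.log
# Count the dangling entries the crawl stumbled over
grep -c 'No such file or directory' /tmp/stat-errors.log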

[Gluster-users] Gluster usage scenarios in HPC cluster management

2021-03-19 Thread Erik Jacobson
A while back I was asked to make a blog or something similar to discuss the use cases of the team I work on (HPCM cluster management) at HPE. If you are not interested in reading about what I'm up to, just delete this and move on. I really don't have a public blogging mechanism so I'll just

Re: [Gluster-users] Gluster usage scenarios in HPC cluster management

2021-03-19 Thread Erik Jacobson
> But I've also tested using tmpfs (allocating half of the RAM per compute node)
> and exporting that as a distributed striped GlusterFS volume over NFS over
> RDMA to the 100 Gbps IB network so that the "ramdrives" can be used as a high
> speed "scratch disk space" that doesn't have the write
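A rough sketch of that kind of setup, with hypothetical node names, sizes and paths, and using a plain distributed volume (the striped volume type has since been deprecated):

# On every compute node: back a brick directory with tmpfs (half of a 128 GB node here)
mount -t tmpfs -o size=64G tmpfs /bricks/ram0
mkdir -p /bricks/ram0/brick
# On one node: aggregate the ramdisk bricks into a distributed scratch volume
gluster volume create scratchvol node01:/bricks/ram0/brick node02:/bricks/ram0/brick
gluster volume start scratchvol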

Re: [Gluster-users] Gluster usage scenarios in HPC cluster management

2021-03-19 Thread Ewen Chan
Erik: My apologies for not being more clear originally. What I meant to say was that I was using GlusterFS for HPC jobs because my understanding is that most HPC environments tend to use, for example, NVMe SSDs for their high speed storage tier, but even those have a finite write

Re: [Gluster-users] Brick offline after upgrade

2021-03-19 Thread David Cunningham
Hi Strahil, It's as follows. Do you see anything unusual? Thanks.
root@caes8:~# ls -al /var/lib/glusterd/vols/gvol0/
total 52
drwxr-xr-x 3 root root 4096 Mar 18 17:06 .
drwxr-xr-x 3 root root 4096 Jul 17 2018 ..
drwxr-xr-x 2 root root 4096 Mar 18 17:06 bricks
-rw------- 1 root root 16 Mar 18
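One quick cross-check at this point is whether every peer still agrees on the volume definition; a hedged example, assuming the same layout exists on each node:

# Run on every node and compare the output; differing checksums point at a stale volfile
cksum /var/lib/glusterd/vols/gvol0/info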

[Gluster-users] Volume not healing

2021-03-19 Thread Diego Zuccato
Hello all. I have a "problematic" volume. It was Rep3a1 with a dedicated VM for the arbiters. Too bad I underestimated RAM needs and the arbiter VM crashed frequently due to OOM (it had just 8GB allocated). Even the other two nodes sometimes crashed, too, during a remove-brick operation (other

[Gluster-users] Geo-rep: Version upgrade to version 8 and above

2021-03-19 Thread Shwetha Acharya
Hi all, With version 8, we have made certain changes to the directory structure of changelog files in gluster geo-replication. Thus, before the upgrade, we need to execute the upgrade script

Re: [Gluster-users] Volume not healing

2021-03-19 Thread Diego Zuccato
On 19/03/21 11:06, Diego Zuccato wrote:
> I tried to run "gluster v heal BigVol info summary" and got quite a high
> count of entries to be healed on some bricks:
> # gluster v heal BigVol info summary|grep pending|grep -v ' 0$'
> Number of entries in heal pending: 41
> Number of entries in
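As a hedged follow-up, using the volume name from the quoted output, the per-entry detail and an explicit full heal can be requested with:

# List the individual paths/gfids still pending heal
gluster volume heal BigVol info
# Ask the self-heal daemons to crawl the volume and heal everything they find
gluster volume heal BigVol full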