to think about is a firm w/ a remote office connected via a
VPN -- if you can cut nearly all the read traffic from the VPN, then you
see a great boost in performance.
Or maybe I missed something...
--
Diego Zuccato
Servizi Informatici
Dip. di Astronomia - Università di Bologna
Via Ranzani, 1
scattered (read
performance using multiple files ~ 16x vs 2x with a single disk per
host). Moreover it's easier to extend.
But why ZFS instead of XFS? In my experience ZFS is heavier.
PS: add a third host ASAP, at least for arbiter volumes (replica 3
arbiter 1). Split brain can be a real pain to fix!
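For reference, a "replica 3 arbiter 1" volume can be created with
something like this (hostnames, brick paths and the volume name are
invented for the example):
-8<--
# 2 data bricks + 1 arbiter brick per replica set
gluster volume create testvol replica 3 arbiter 1 \
  srv1:/bricks/b1/brick srv2:/bricks/b1/brick arb1:/bricks/b1/brick
gluster volume start testvol
-8<--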
, I would prefer the 'replica 3 arbiter 1' approach as it doesn't take so
> much space and extending will require only 2 data disks.
And you won't have split-brain issues that are a mess to fix!
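Extending should then just be a matter of adding one more set of 2 data
bricks plus a small arbiter brick, roughly like this (names invented;
IIUC the third brick of each added set becomes the arbiter):
-8<--
gluster volume add-brick testvol \
  srv1:/bricks/b2/brick srv2:/bricks/b2/brick arb1:/bricks/b2/brick
-8<--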
--
Diego Zuccato
the upgrade can fix it? Or do I risk breaking it even more?
--
Diego Zuccato
DIFA - Dip. di Fisica e Astronomia
Servizi Informatici
Alma Mater Studiorum - Università di Bologna
V.le Berti-Pichat 6/2 - 40127 Bologna - Italy
tel.: +39 051 20 95786
On 24/08/20 15:23, Diego Zuccato wrote:
> I'm now completely out of ideas :(
Actually I have one last idea. My nodes are installed from standard
Debian "stable" repos. That means they're version 3.8.8!
I understand it's an ancient version.
What's the recommended upgrade path?
Is there some rule of thumb for sizing nodes? I couldn't find anything...
TIA.
--
Diego Zuccato
rbiters brought the files back online, so I
completely removed the arbiter bricks (degrading to replica 2) and I'm
now slowly re-adding 'em to have "replica 3 arbiter 1" again (see "node
sizing" thread).
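In case it helps someone, the degrade/re-add dance is roughly the
following (volume and brick names are placeholders; with several
subvolumes you list one arbiter brick per replica set):
-8<--
# drop the arbiter bricks, falling back to plain replica 2
gluster volume remove-brick BigVol replica 2 arb1:/bricks/arb/brick force
# later, re-add them to get back to replica 3 arbiter 1
gluster volume add-brick BigVol replica 3 arbiter 1 arb1:/bricks/arb/brick
-8<--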
--
Diego Zuccato
en a "Peer rejected" status due to another OOM kill. No problem, I've
been able to resolve it, but the original problem still remains.
What else can I do?
TIA!
--
Diego Zuccato
On 21/08/20 13:56, Diego Zuccato wrote:
Hello again.
I also tried disabling bitrot (and re-enabling it afterwards) and the
procedure for recovery from split-brain[*], removing the file and its
link from one of the nodes, but no luck.
I'm now completely out of ideas :(
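For the record, the manual removal I attempted was along these lines,
on the brick holding the "bad" copy (paths and volume name are just
examples):
-8<--
# find the GFID of the bad copy
getfattr -n trusted.gfid -e hex /bricks/b1/brick/path/to/file
# remove the file and its hardlink under .glusterfs
# (gfid 0xaabbcc... maps to .glusterfs/aa/bb/aabbcc...)
rm /bricks/b1/brick/path/to/file
rm /bricks/b1/brick/.glusterfs/aa/bb/aabbcc...
# then trigger a heal so the good copy gets copied back
gluster volume heal BigVol
-8<--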
How can I resync
trying to
use the diamond freespace collector (w/o the initial slash, it ignores
glusterfs mountpoints).
--
Diego Zuccato
hedocs.io/en/release3.7.0beta1/Administrator%20Guide/Resolving%20Peer%20Rejected/
Yes, they're for an ancient version, but it worked...
--
Diego Zuccato
me following error message:
IIRC it's the same issue I had some time ago.
I solved it by "degrading" the volume to replica 2, then cleared the
arbiter bricks and upgraded again to replica 3 arbiter 1.
--
Diego Zuccato
> volume work? That sounds much less scary but I don't know if that would
> work...
I don't know, sorry.
--
Diego Zuccato
oad on both CPUs and RAM.
> That's quite long I must say and I am in the same case as you, my arbiter is
> a VM.
Give all the CPU and RAM you can. Less than 8GB of RAM is asking for
trouble (in my case).
--
Diego Zuccato
you have split-brain: no version of the
file is "better" than the other. A returning node 3 can't know (in an
automated way) which copy of the file should be replicated.
That's why you should always have a quorum of N/2+1 when data integrity
is important.
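The quorum side can be enforced with the standard volume options,
something like this (volume name is a placeholder):
-8<--
# client quorum: writes allowed only while a majority of the replica
# set is reachable
gluster volume set BigVol cluster.quorum-type auto
# server quorum: bricks are killed when glusterd loses contact with
# more than half of the peers
gluster volume set BigVol cluster.server-quorum-type server
-8<--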
--
Diego Zuccato
time
needed for resync.
Remember that the backing filesystems for arbiters should be tweaked to
allow a lot of inodes. I formatted my XFS volumes with
mkfs.xfs -i size=512,maxpct=90 /dev/sdXn
to allow up to 90% for inodes (instead of the usual 5%) => a single fs
can handle multiple arbiter bricks
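(The resulting setting can be checked later with xfs_info; the
mountpoint below is just an example.)
-8<--
xfs_info /srv/arbiters | grep imaxpct
# the "data" line should report imaxpct=90
-8<--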
nough).
I tried enabling client-io-threads, but seems it didn't change anything.
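For reference, that's the option set with (volume name is a placeholder):
-8<--
gluster volume set BigVol performance.client-io-threads on
-8<--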
Any hints?
TIA!
--
Diego Zuccato
ast,
since it's "local"), then put it offline and send it to the final
location. Once you turn it on again it will have to sync only the latest
changes.
Should take less than 3 weeks :)
--
Diego Zuccato
o on]
That will take quite a long time (IIUC I cannot move data onto a brick
that is itself being moved elsewhere... or at least it doesn't seem wise :) ).
It's probably faster to first move arbiters and then the data.
--
Diego Zuccato
e arbiters across
the current two data servers and a new one (currently I'm using a
dedicated VM just for the arbiters). But that's another pie :)
--
Diego Zuccato
?
--
Diego Zuccato
mmit
> by default. (Or user programs that fork() a lot.)
Actually, the fork()-intensive programs are the ones that are most
likely behaving badly... I'll have to dig deeper.
--
Diego Zuccato
it should be doable...
Probably the runfile is the best option. I'll try it.
> Another approach is to use cgroups and limit everything in the userspace.
Tried that, but had to revert the change: SLURM is propagating
ulimits to the nodes... Going to ask on the SLURM list...
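A minimal sketch of the cgroup route, assuming systemd slices and
made-up values (not necessarily exactly what I tried):
-8<--
# cap memory for all user sessions (user.slice), cgroup v2 / systemd
systemctl set-property user.slice MemoryHigh=44G MemoryMax=48G
# check what got applied
systemctl show user.slice | grep -E 'MemoryMax|MemoryHigh'
-8<--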
--
Diego Zuccato
uld enable or change? The 3
servers currently have only 96GB RAM (already asked to double it), and
should host up to 36 bricks + 18 quorums. There are about 50 clients.
Tks.
--
Diego Zuccato
ume, instead of going through each brick deletion
No no, the volume is needed and is currently hosting data I cannot
lose... But I don't have space to copy it elsewhere...
--
Diego Zuccato
r node at a customer site:
> * 256G RAM
Glip! 256G for 4 bricks... No wonder I had trouble running 26
bricks in 64GB of RAM... :)
--
Diego Zuccato
On 19/03/21 13:17, Strahil Nikolov wrote:
> find /FUSE/mountpoint -exec stat {} \;
Running it now (redirecting stdout to /dev/null).
It's finding quite a lot of "no such file or directory" errors.
--
Diego Zuccato
ks for the hint, but it's already set. I usually do it as soon as I
create the volume :) I don't understand why it's not the default :)
--
Diego Zuccato
e to look to at least diagnose (if not fix) this :( As I said,
probably part of the issue is due to the multiple OOM failures and
the multiple attempts to remove a brick.
I'm currently emptying the volume, then I'll recreate it from scratch,
hoping for the best.
--
Diego Zuccato
On 19/03/2021 16:03, Erik Jacobson wrote:
A while back I was asked to make a blog or something similar to discuss
the use cases the team I work on (HPCM cluster management) at HPE.
Tks for the article.
I just miss a bit of information: how are you sizing CPU/RAM for pods?
--
Diego
en beefy enough for
> gluster. Sorry I don't have a more scientific answer.
Seems that 64GB of RAM is not enough for a pod with 26 glusterfsd
instances and no other services (except sshd for management). What do
you mean by "beefy enough"? 128GB RAM or 1TB?
--
Diego Zuccato
ere something I can do to convince Gluster to heal those entries
w/o going entry-by-entry manually?
Thanks.
--
Diego Zuccato
On 19/03/21 11:06, Diego Zuccato wrote:
> I tried to run "gluster v heal BigVol info summary" and got quite a high
> count of entries to be healed on some bricks:
> # gluster v heal BigVol info summary|grep pending|grep -v ' 0$'
> Number of entries in heal pending:
an ls takes 3s... uhm...) so the
workload is quite different...
--
Diego Zuccato
ng oom_adj, but the PID changes at every boot...
--
Diego Zuccato
ytes of data via network: move some (carefully-chosen) bricks from
old nodes to the new one, replace 'em with empty disks and expand.
Something like MD-RAID metadata.
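The per-brick move itself would be a replace-brick, something like this
(host and path names invented):
-8<--
# move one brick to the new node; Gluster then heals the data onto it
gluster volume replace-brick BigVol \
  oldsrv:/bricks/b3/brick newsrv:/bricks/b3/brick commit force
# watch the resync
gluster volume heal BigVol info summary
-8<--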
--
Diego Zuccato
, how?
Regards.
--
Diego Zuccato
eal pending: 0
Number of entries in split-brain: 0
Number of entries possibly healing: 0
-8<--
On 29/11/2021 12:02, Strahil Nikolov wrote:
What is the output of 'gluster volume heal VOLUME info summary' ?
Best Regards,
Strahil Nikolov
On Mon, Nov 29, 2021 at 10:33, Diego Zuccato
verwriting existing files.
This way the bricks didn't overflow.
--
Diego Zuccato
you think
thx
--
Diego Zuccato
y ;)
Best Regards,
Strahil Nikolov
On Mon, Nov 29, 2021 at 13:09, Diego Zuccato
wrote:
Here it is. Seems gluster thinks there's nothing to be done...
-8<--
root@str957-clustor00:~# gluster v heal cluster_data info summary
Brick cl
new server
- start volume
Tks!
--
Diego Zuccato
/Bck/07 (arbiter)
Options Reconfigured:
cluster.granular-entry-heal: on
storage.fips-mode-rchecksum: on
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
Does thin arbiter support just one replica of bricks?
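For comparison, the documented thin-arbiter creation uses a dedicated
keyword, roughly like this (hosts and paths invented):
-8<--
# 2 data bricks + 1 thin-arbiter brick, shared by all replica subvolumes
gluster volume create tavol replica 2 thin-arbiter 1 \
  srv1:/bricks/ta/brick srv2:/bricks/ta/brick ta-host:/bricks/ta/brick
-8<--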
--
Diego Zuccato
Not there. It's not one of the defined services :(
Maybe Debian does not support it?
On 16/02/2022 13:26, Strahil Nikolov wrote:
My bad, it should be gluster-ta-volume.service
On Wed, Feb 16, 2022 at 7:45, Diego Zuccato
wrote:
No such process is defined. Just the standard
--
Diego Zuccato
he brick's performance by reducing the
number of "housekeeping" IOs.
--
Diego Zuccato
h-proc-pid-status-show-all-supplementary-groups
--
Diego Zuccato
On 01/02/2022 20:08, Fox wrote:
Basically I'm asking if the bricks are mountpoint and node agnostic.
Nope, they aren't :( (unless something changed in the latest releases).
Some days ago I asked basically the same question (how to move a volume
to a new server).
--
Diego Zuccato
is
quorum brick for bricks Xa and Xb.
For a 3-server setup, the layout we're using is
S1 S2 S3
0a 0b 0q
1a 1q 1b
2q 2a 2b
HIH.
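In create-command terms that layout means listing the bricks in
triplets with the quorum (arbiter) brick last in each set, roughly
(paths invented):
-8<--
gluster volume create BigVol replica 3 arbiter 1 \
  s1:/bricks/0a s2:/bricks/0b s3:/bricks/0q \
  s1:/bricks/1a s3:/bricks/1b s2:/bricks/1q \
  s2:/bricks/2a s3:/bricks/2b s1:/bricks/2q
-8<--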
--
Diego Zuccato
rt test on test system in advance.
Edit: You didn't mention your FS type, so I assume XFS.
Best Regards,
Strahil Nikolov
On Mon, Jan 17, 2022 at 13:15, Diego Zuccato
wrote:
Hello all.
I have a Gluster volume that I'd need to move to a different server.
The volume is
sks, rebuild the initramfs
and thus you will change only the decommissioned hardware.
That's more problematic... The same server serves another volume that
must stay there.
--
Diego Zuccato
orked quite well
and file access has become really comfortable.
Best regards
Marcus
On Fri, Oct 27, 2023 at 10:16:08AM +0200, Diego Zuccato wrote:
--
Diego Zuccato
apt/keyrings/gluster-archive-keyring.gpg
and then add 'signed-by=/etc/apt/keyrings/gluster-archive-keyring.gpg'
between '[' and 'arch=amd64'.
HIH,
Diego
[1] https://download.gluster.org/pub/gluster/glusterfs/9/9.4/Debian/
[2] https://wiki.debian.org/DebianRepository/UseThirdParty
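Putting it all together, the repo setup should look more or less like
this (the key URL is an assumption, check [1]; the keyring path follows
[2]):
-8<--
# fetch the signing key into a dedicated keyring
wget -O - https://download.gluster.org/pub/gluster/glusterfs/9/rsa.pub \
  | gpg --dearmor > /etc/apt/keyrings/gluster-archive-keyring.gpg
# sources.list entry using signed-by
echo "deb [signed-by=/etc/apt/keyrings/gluster-archive-keyring.gpg arch=amd64] https://download.gluster.org/pub/gluster/glusterfs/9/9.4/Debian/bullseye/amd64/apt bullseye main" \
  > /etc/apt/sources.list.d/gluster.list
apt update && apt install glusterfs-server
-8<--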
--
Diego Zuccato
ve had to create /etc/sysconfig/glusterd file
containing
LOG_LEVEL=WARNING
But that just hides the messages... Is that normal behaviour?
--
Diego Zuccato
use?
If latest, open a GitHub issue.
Best Regards,
Strahil Nikolov
On Thu, Aug 11, 2022 at 10:06, Diego Zuccato
wrote:
Yup.
Seems the /etc/sysconfig/glusterd setting got finally applied and I now
have a process like this:
root 4107315 0.0 0.0 529244 40124 ?
e/amd64/apt
bullseye main
Tks,
Diego
On 09/08/2022 22:08, Strahil Nikolov wrote:
Hey Diego,
can you show a sample of such Info entries ?
Best Regards,
Strahil Nikolov
On Mon, Aug 8, 2022 at 15:59, Diego Zuccato
wrote:
Hello all.
Lately, I noticed some hiccups in our G
only remotely
and thus preventing the overfill of /var.
Best Regards,
Strahil Nikolov
On Wed, Aug 10, 2022 at 7:52, Diego Zuccato
wrote:
Hi Strahil.
Sure. Luckily I didn't delete 'em all :)
From bitd.log:
-8<--
[2022-08-09 05:58:12.075999 +] I [MS
tc/default/glusterd containing "LOG_LEVEL=ERROR".
But I still see a lot of 'I' lines in the logs and have to manually run
logrotate way too often or /var gets too full.
Any hints? What did I forget?
Tks.
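For what it's worth, brick and client log levels can also be set per
volume (volume name below is a placeholder); whether that's enough to
silence the 'I' lines I can't say:
-8<--
gluster volume set BigVol diagnostics.brick-log-level WARNING
gluster volume set BigVol diagnostics.client-log-level WARNING
-8<--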
--
Diego Zuccato
removed continue getting writes for new files?
Tks.
--
Diego Zuccato
(I assume "on the original
brick"), then why is that note needed?
Most probably I'm missing some vital piece of information.
[BTW my cluster.force-migration is already off... that warning is a
long-standing issue that seems not easily fixable]
--
Diego Zuccato
o a mismatch between .glusterfs/ contents and normal
hierarchy. Is there some tool to speed up the cleanup?
Tks.
--
Diego Zuccato
the cost of
longer heal times in case a disk fails. Am I right, or is it useless?
Other recommendations?
Servers have space for another 6 disks. Maybe those could be used for
some SSDs to speed up access?
TIA.
--
Diego Zuccato
u check the ppid for 2-3 randomly picked ?
ps -o ppid=
Best Regards,
Strahil Nikolov
On Wed, Mar 15, 2023 at 9:54, Diego Zuccato
wrote:
I enabled it yesterday and that greatly reduced memory pressure.
Current volume info:
-8<--
Volume Name: cluster_data
Type: Di
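(Multiplexing is a cluster-wide setting; as far as I remember it's
turned on with the following, the "all" keyword included:)
-8<--
gluster volume set all cluster.brick-multiplex on
# optionally cap how many bricks share one glusterfsd process:
# gluster volume set all cluster.max-bricks-per-process 10
-8<--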
On 16/03/2023 12:37, Strahil Nikolov wrote:
Can you restart glusterd service (first check that it was not modified
to kill the bricks)?
Best Regards,
Strahil Nikolov
On Thu, Mar 16, 2023 at 8:26, Diego Zuccato
wrote:
OOM is just a matter of time.
Today mem use is
egards,
> Strahil Nikolov
>
> On Thu, Mar 16, 2023 at 15:29, Diego Zuccato
> <diego.zucc...@unibo.it> wrote:
> In Debian stopping glusterd does not stop brick processes: to stop
> everything (and free the memory) I have to
>
, 2023 at 8:07, Diego Zuccato
wrote:
In glfsheal-Connection.log I see many lines like:
[2023-03-13 23:04:40.241481 +] E [MSGID: 104021]
[glfs-mgmt.c:586:glfs_mgmt_getspec_cbk] 0-gfapi: failed to get the
volume file [{from server}, {errno=2}, {error=File o directory non
there some way to selectively run glfsheal to fix one brick at a time?
Diego
On 21/03/2023 01:21, Strahil Nikolov wrote:
Theoretically it might help.
If possible, try to resolve any pending heals.
Best Regards,
Strahil Nikolov
On Thu, Mar 16, 2023 at 15:29, Diego Zuccato
wrote:
ing on clustor02 since
yesterday, still no output). Shouldn't it be just one per brick?
Diego
On 15/03/2023 08:30, Strahil Nikolov wrote:
Do you use brick multiplexing?
Best Regards,
Strahil Nikolov
On Tue, Mar 14, 2023 at 16:44, Diego Zuccato
wrote:
Hello all.
Our Glu
-multiplex).
Diego
On 24/03/2023 19:21, Strahil Nikolov wrote:
Try finding if any of them is missing on one of the systems.
Best Regards,
Strahil Nikolov
On Fri, Mar 24, 2023 at 15:59, Diego Zuccato
wrote:
There are 285 files in /var/lib/glusterd/vols/cluster_data ...
including
0TB of data on disks.
I'd ditch local RAIDs to double the space available. Unless you
desperately need the extra read performance.
Options Reconfigured:
I'll have a look at the options you use. Maybe something can be useful
in our case. Tks :)
--
Diego Zuccato
that there is a vol file mismatch (maybe
/var/lib/glusterd/vols//*-shd.vol).
Best Regards,
Strahil Nikolov
On Fri, Feb 3, 2023 at 12:20, Diego Zuccato
wrote:
Can't see anything relevant in glfsheal log, just messages related to
the crash of one of the nodes (the one that had the mobo replaced
ou compare all vol files in /var/lib/glusterd/vols/ between the
nodes ?
> I have the suspicion that there is a vol file mismatch (maybe
> /var/lib/glusterd/vols//*-shd.vol).
>
> Best Regards,
> Strahil Nikolov
>
> On Fri, Feb 3, 2023 at 12:20, Diego Zuccato
>
away from the brick and check in FUSE. If it's fine,
touch it and the FUSE client will "heal" it.
Best Regards,
Strahil Nikolov
On Tue, Feb 7, 2023 at 16:33, Diego Zuccato
wrote:
The contents do not match exactly, but the only difference is the
"option
--
Diego Zuccato
,
Strahil Nikolov
On Thu, Jan 18, 2024 at 13:08, Diego Zuccato
wrote:
That's the same kind of errors I keep seeing on my 2 clusters,
regenerated some months ago. Seems a pseudo-split-brain that should be
impossible on a replica 3 cluster but keeps happening.
Sadly going
--
Diego Zuccato
om experiments comes in bursts, with (often large)
intervals when you can process/archive it.
--
Diego Zuccato
eplaceable it's better to have it elsewhere. Better if offline.
--
Diego Zuccato
' and 'gluster volume status' are ok,
so kinda looks like "pseudo"...
hubert
On Thu, 18 Jan 2024 at 08:28, Diego Zuccato wrote:
That's the same kind of errors I keep seeing on my 2 clusters,
regenerated some months ago. Seems a pseudo-split-brain that should be
impossible on a
--
Diego Zuccato
hard quota exceeded", IMO.
That's on Gluster 9.6.
--
Diego Zuccato