[ Received no feedback, so I'm resending it... ]
A slightly strange thing, but I'm hitting my head against the wall...
I needed to 'enlarge' my main filesystem (XFS backed-up), which contains my
main Samba share and a brick for a GFS share; I've set up a new volume (for
the VM), formatted it XFS, and moved all the files, taking care to unmount
and stop GFS (so,
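For reference, a sketch of the kind of brick migration being described -- copying
the brick contents onto the new filesystem while the volume is stopped, preserving
the extended attributes and hard links that the brick's .glusterfs tree relies on
(the volume name and paths below are placeholders, not taken from this message):

  gluster volume stop myvol
  # -A/-X/-H preserve ACLs, xattrs and hard links
  rsync -aAXH /bricks/old-brick/ /bricks/new-brick/
  gluster volume start myvol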
Hi Michael,
unfortunately not -- everything I tried brings me to the same situation
where the peer is rejected.
I have also reproduced the issue in another cluster where I can experiment
freely, but I'm not sure what I should try next...
Regards,
Marco
On Mon, 27 Dec 2021 at 13:04, Michael
of the
/var/lib/glusterd directory, probe one of the other nodes and restart the
affected node.
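For reference, the usual recovery for a rejected peer is roughly the following
(a sketch only; "good-node" is a placeholder for any healthy peer, and
glusterd.info must be preserved because it holds the node's UUID):

  systemctl stop glusterd
  mkdir -p /root/glusterd-backup
  # keep glusterd.info, move the rest of the state aside
  find /var/lib/glusterd -mindepth 1 -maxdepth 1 ! -name glusterd.info \
       -exec mv -t /root/glusterd-backup {} +
  systemctl start glusterd
  gluster peer probe good-node
  systemctl restart glusterd
  gluster peer status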
Regards,
Marco
On Fri, 19 Nov 2021 at 05:26, Nikhil Ladha wrote:
> Hi Marco
>
> The checksum difference refers to the difference in the contents of the
> `/var/lib/glusterd` directory. M
d to clean up /var/lib/glusterd without success.
Am I missing something?
Downgrading to 9.3 makes everything work again. All my volumes are
distributed-replicate and the cluster is composed of 3 nodes.
Thanks,
Marco
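For what it's worth, the checksum glusterd compares is stored per volume on each
node; a minimal sketch of how to find the node that disagrees (the volume name
"myvol" and the node names are assumptions):

  for h in node1 node2 node3; do
      echo "== $h"; ssh "$h" cat /var/lib/glusterd/vols/myvol/cksum
  done
  # diff the volume info files to spot the offending entry
  ssh node1 cat /var/lib/glusterd/vols/myvol/info > /tmp/info.node1
  ssh node2 cat /var/lib/glusterd/vols/myvol/info > /tmp/info.node2
  diff /tmp/info.node1 /tmp/info.node2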
Srijan
no problem at all -- thanks for your help. If you need any additional
information please let me know.
Regards,
Marco
On Thu, 27 May 2021 at 18:39, Srijan Sivakumar wrote:
> Hi Marco,
>
> Thank you for opening the issue. I'll check the log contents and get back
> to you
Srijan
thanks a million -- I have opened the issue as requested here:
https://github.com/gluster/glusterfs/issues/2492
I have attached the glusterd.log and glustershd.log files, but please let
me know if there is any other test I should do or logs I should provide.
Thanks,
Marco
On Wed, 26
Ravi,
thanks a million.
@Mohit, @Srijan please let me know if you need any additional information.
Thanks,
Marco
On Tue, 25 May 2021 at 17:28, Ravishankar N wrote:
> Hi Marco,
> I haven't had any luck yet. Adding Mohit and Srijan who work in glusterd
> in case they have some inputs
Hi Ravi
just wondering if you have any further thoughts on this -- unfortunately it
is still very much affecting us at the moment.
I am trying to understand how to troubleshoot it further but haven't been
able to make much progress...
Thanks,
Marco
On Thu, 20 May 2021 at 19:04, Marco
: 114018]
[client.c:2229:client_rpc_notify] 0-VM_Storage_1-client-11: disconnected
from client, process will keep trying to connect glusterd until brick's
port is available [{conn-name=VM_Storage_1-client-11}]
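The message above means the client has no port to connect to for that brick;
a quick way to check the brick processes and their ports (the volume name is
taken from the log line):

  gluster volume status VM_Storage_1
  # every brick should show "Online: Y" and a TCP port; if one doesn't,
  # check that brick's log under /var/log/glusterfs/bricks/ on its node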
On Thu, 20 May 2021 at 18:54, Marco Fais wrote:
> Hi Ravi,
>
> thanks again
s_sent = 0
The other bricks look connected...
Regards,
Marco
On Thu, 20 May 2021 at 14:02, Ravishankar N wrote:
> Hi Marco,
>
> On Wed, May 19, 2021 at 8:02 PM Marco Fais wrote:
>
>> Hi Ravi,
>>
>> thanks a million for your reply.
>>
>> I have rep
performance.low-prio-threads: 32
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
storage.fips-mode-rchecksum: on
nfs.disable: on
transport.address-family: inet
I have also tried setting cluster.server-quorum-type: none, but it made no
difference.
Thanks
Marco
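For reference, a sketch of how such an option is applied and verified (the
volume name is the one from the log earlier in this thread, so treat it as an
assumption):

  gluster volume set VM_Storage_1 cluster.server-quorum-type none
  gluster volume get VM_Storage_1 cluster.server-quorum-type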
On Wed, 19 May 2021
ways in the same (remote) node... i.e. they are both
in node 3 in this case. That always seems to be the case; I have not
encountered a scenario where bricks from different nodes report this
issue (at least for the same volume).
Please let me know if you need any additional info.
Regards,
Ma
-family: inet
cluster.self-heal-daemon: enable
Regards,
Marco
thank you all,
just tried setting performance.nl-cache to on and it worked.
On 11/04/2021 10:29, Amar Tumballi wrote:
Hi Marco, this is really good test/info. Thanks.
One more thing to observe is you are running
or not
thank you
On 05/10/2020 19:45, Marco Lerda - FOREACH S.R.L. wrote:
hi,
we use glusterfs for a PHP application that has many small PHP files,
images, etc...
We use glusterfs in replication mode.
We have 2 nodes connected over fiber with 100MBps and less than 1 ms latency.
We also have an arbiter on a slower network (but the issue is there even
without the arbiter).
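The fix reported earlier in this thread was enabling performance.nl-cache; a
minimal sketch, assuming the volume is called php-vol (the extra small-file
caching options are common companions, not something taken from this thread):

  gluster volume set php-vol performance.nl-cache on
  gluster volume set php-vol features.cache-invalidation on
  gluster volume set php-vol performance.cache-invalidation on
  gluster volume set php-vol network.inode-lru-limit 200000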
d to get shared "write" lock
I have looked also on the bricks logs, but there is too much information
there and will need to know what to look for.
Not sure if there is any benefit in looking into this any further?
Thanks,
Marco
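If it helps, brick-side locks can be inspected with a statedump instead of the
raw brick logs (a sketch; the volume name is a placeholder and the dump path is
the usual default):

  gluster volume statedump myvol
  # the dump files are written on each brick host, typically under /var/run/gluster/
  grep -B2 -A4 'inodelk' /var/run/gluster/*.dump.*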
On Thu, 2 Jul 2020 at 15:45, Strahil Nikolov wrote:
>
engine logs (on the HostedEngine VM or your standalone
> engine), vdsm logs on the host that was running the VM and next - check
> the brick logs.
>
Will do.
Thanks,
Marco
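For reference, the usual default locations of those logs (paths may differ per
installation):

  # oVirt engine (on the HostedEngine VM or the standalone engine):
  tail -f /var/log/ovirt-engine/engine.log
  # vdsm, on the host that was running the VM:
  tail -f /var/log/vdsm/vdsm.log
  # gluster brick logs, on each brick host:
  tail -f /var/log/glusterfs/bricks/*.log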
ance.quick-read: off
storage.fips-mode-rchecksum: on
nfs.disable: on
Unfortunately we have the issue with all VMs -- doesn't seem to depend on
the allocation of storage either (thin provisioned or preallocated).
Thanks!
Marco
On Tue, 30 Jun 2020 at 05:12, Strahil Nikolov wrote:
> Hey Marco,
>
three different clusters (same
versions).
Any suggestions?
Thanks,
Marco
e replicas of the
involved shard. Found identical digests.
- tried to delete the shard from a replica set, one at a time, along
with its hard link. The shard is always rebuilt correctly but the error
from the client persists.
Regards,
--
Marco Crociani
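A sketch of the kind of check described above -- locating a shard on the bricks
and comparing digests and xattrs (brick paths and the shard name are placeholders;
shards live under .shard/ on each brick as <gfid-of-base-file>.<index>):

  md5sum /bricks/brick1/.shard/BASE-GFID.42 /bricks/brick2/.shard/BASE-GFID.42
  getfattr -d -m . -e hex /bricks/brick1/.shard/BASE-GFID.42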
On 22/11/18 13:19, Marco Lorenzo Crociani wrote:
Hi
/clone, manage some snapshots of these VMs. The low-level errors are
"stale file handle".
The volume is distribute 2 replicate 3 with sharding.
Should I also open a bug on oVirt?
Gluster 3.12.15-1.el7
oVirt 4.2.6.4-1.el7
Regards,
--
Marc
On 09/04/2018 21:36, Shyam Ranganathan wrote:
On 04/09/2018 04:48 AM, Marco Lorenzo Crociani wrote:
On 06/04/2018 19:33, Shyam Ranganathan wrote:
Hi,
We postponed this and I did not announce this to the lists. The number
of bugs fixed against 3.10.12 is low, and I decided to move
, it's this: https://review.gluster.org/19730
https://bugzilla.redhat.com/show_bug.cgi?id=1442983
Regards,
--
Marco Crociani
Thanks,
Shyam
On 04/06/2018 11:47 AM, Marco Lorenzo Crociani wrote:
Hi,
is there any news about the 3.10.12 release?
Regards,
--
Marco Crociani
On 16/03/2018 13:24, Atin Mukherjee wrote:
Have sent a backport request https://review.gluster.org/19730 at
release-3.10 branch. Hopefully this fix will be picked up in next update.
Ok thanks!
--
Marco Crociani
Prisma Telecom Testing S.r.l
?
Regards,
--
Marco Crociani
Hi,
have you found a solution?
Regards,
--
Marco Crociani
user    0m0.000s
sys     0m0.003s
Should I reduce swappiness? It is currently 60.
Is all that RAM really needed to mount twelve glusterfs volumes
(~3764 GB)?
Regards,
--
Marco Crociani
Prisma Telecom Testing S.r.l.
via Petrocchi, 4 20127 MILANO ITALY
Phone: +39 02 26113507
Fax: +39 02
0 0
Is there any memory leak or something nasty?
Regards,
--
Marco Crociani
Prisma Telecom Testing S.r.l.
via Petrocchi, 4 20127 MILANO ITALY
Phone: +39 02 26113507
Fax: +39 02 26113597
e-mail: mar...@prismatelecomtesting.com
web: http://www.prismatelecomtesting.com
Hi Soumya, Kaleb
I solved it by installing cman
Kind regards
Marco
On 10/12/15 06:43, Soumya Koduri wrote:
On 12/10/2015 02:51 AM, Marco Antonio Carcano wrote:
Hi Kaleb,
thank you very much for the quick reply
I tried what you suggested, but I got the same error
I tried both
review complete, we will make it available in 3.7.7 release)
Bug: https://bugzilla.redhat.com/show_bug.cgi?id=1289935
Patch: http://review.gluster.org/#/c/12923/
Thanks for reporting the issue.
regards
Aravinda
On 12/09/2015 03:35 PM, Marco Lorenzo Crociani wrote:
Hi,
# /var/lib/glusterd/hooks/1
s/1/delete/post/S57glusterfind-delete-post.py",
line 43, in main
for session in os.listdir(glusterfind_dir):
OSError: [Errno 2] No such file or directory:
'/var/lib/glusterd/glusterfind'
# which glusterfind
/usr/bin/glusterfind
Regards,
Marco Crociani
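A possible workaround, assuming the hook script only needs the glusterfind
session directory to exist (the volume name is the one from the hook invocation
quoted later in this archive):

  mkdir -p /var/lib/glusterd/glusterfind
  python /var/lib/glusterd/hooks/1/delete/post/S57glusterfind-delete-post.py \
      --volname=VOL_ZIMBRA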
On 07/12/2015 14:45, Aravinda wrote
ME="ganesha-ha-360"
HA_VOL_SERVER="glstr01"
HA_CLUSTER_NODES="glstr01v,glstr02v"
VIP_glstr01v="192.168.65.250"
VIP_glstr02v="192.168.65.251"
but still no luck - do you maybe have any other advice?
Kind regards
Marco
On 08/12/15 13:30, Kaleb KEITHLEY
nning on this node
Error: cluster is not currently running on this node
Kind regards
Marco
lusterd/hooks/1/delete/post/S57glusterfind-delete-post.py
--volname=VOL_ZIMBRA
--
Marco Crociani
Prisma Telecom Testing S.r.l.
via Petrocchi, 4 20127 MILANO ITALY
Phone: +39 02 26113507
Fax: +39 02 26113597
e-mail: mar...@prismatelecomtesting.com
web: http://www.prismatelecomtesting.com
Any news?
glusterfs version is 3.7.5
On 30/10/2015 17:51, Marco Lorenzo Crociani wrote:
Hi Susant,
here the stats:
[root@s20 brick1]# stat .* *
  File: `.'
  Size: 78          Blocks: 0          IO Block: 4096   directory
Device: 811h/2065d  Inode: 2481712637  Links: 7
Access: (0755/drwxr-xr
: 2015-10-26 15:31:30.520105089 +0100
Modify: 2012-11-01 10:49:22.0 +0100
Change: 2015-10-26 15:31:30.521105091 +0100
Thanks,
Marco
On 30/10/2015 08:22, Susant Palai wrote:
Hi Marco,
Can you send the stat of the files from the removed-brick?
Susant
- Original Message
: failed: /gluster/VOL/brick1 is already part of a volume
If I use the force option, do I get the old files back or do they get erased?
Should I rsync from the "unmounted" brick to the "mounted" volume, or to
an "unmounted" brick that is part of the volume?
Regards,
--
Marco Crocian
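For reference, the usual way around "is already part of a volume" is to clear the
gluster markers on the brick root before reusing it (a sketch; this wipes gluster's
metadata on that brick, so only do it on a brick you intend to re-add):

  setfattr -x trusted.glusterfs.volume-id /gluster/VOL/brick1
  setfattr -x trusted.gfid /gluster/VOL/brick1
  rm -rf /gluster/VOL/brick1/.glusterfs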
Hello.
Tnx for the suggestion. I get an error message saying that "volume set:
failed: option : cluster.consistent-metadata does not exist".
-> I will upgrade my servers next week to the latest version and then
let you know.
Marco
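A quick way to check whether the installed build knows the option at all (a
sketch; the option only exists in newer releases):

  glusterfs --version
  gluster volume set help | grep -i consistent-metadata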
On 07. 10. 15 03:10, Krutika Dhananjay wrote:
.
Any hypothesis?
Marco
Marco
Forwarded Message
Subject: [Gluster-users] Waiting for glusterfs up and running
Date: Tue, 22 Sep 2015 09:51:34 +0200
From: Marco <marco.brign...@marcobaldo.ch>
To: gluster-users@gluster.org
Hello.
I have a system with several VMs, tw
May I ask if there is any way to solve this issue with the automounter,
or if I have to create a script that waits for the volumes (and which is
the recommended way)?
Tnx and have a nice day
Marco
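A minimal sketch of the "wait for the volume" script mentioned above (the mount
point and retry count are assumptions, and an fstab entry for the mount point is
assumed to exist):

  #!/bin/sh
  MNT=/mnt/gluster
  for i in $(seq 1 30); do
      mountpoint -q "$MNT" && exit 0
      mount "$MNT" 2>/dev/null
      sleep 2
  done
  echo "gluster volume still not mounted at $MNT" >&2
  exit 1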
) with a
mount -t glusterfs.
Has anyone else noticed such an issue with Bitcoin Core, and/or does
anyone have ideas on what I could verify/monitor to solve/understand this
problem?
Tnx in advance for help.
Marco
are 64 bit KVM VMs, gluster3 a 32
bit physical machine deemed to be a backup server (all are up-to-date
OpenSuSE 13.2).
May I ask where I can find information to understand why the
geo-replication worker fails, enters the faulty state and then switches
to xsync?
Tnx
Marco
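For reference, the faulty worker's own log is usually the most informative place
to look (a sketch; run on a master node):

  gluster volume geo-replication status
  # per-session worker logs live under:
  ls /var/log/glusterfs/geo-replication/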
On 26. 05. 15 08:21
googling but I could not find any
input.
Tnx in advance and have a nice day
Marco
Hi. I'm using glusterfs 3.4 in a replica 2 configuration. My main usage of
gluster is as an OpenStack storage backend (glance and shared storage for
nova-compute), and I've been using gluster for about 1 year
(not really a production environment, so a limited set of experiments is
error in logs)
2) Now I think I can restart the volume, so I should see an automatic
healing procedure
3) All data is replicated on server1
Can I have confirmation of this procedure? Are other volumes affected?
Please, I cannot lose my data
Thanks
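A sketch of the restart-and-heal check being described (the volume name is a
placeholder):

  gluster volume start myvol          # if the volume is stopped
  gluster volume status myvol
  gluster volume heal myvol info      # entries still pending self-heal, per brick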
2014-10-03 20:07 GMT+02:00 Marco Marino marino
Hi,
I'm trying to use glusterfs with my openstack private cloud for storing
ephemeral disks. In this way, each compute node mounts glusterfs at /nova
and saves instances on a remote glusterfs volume (shared between the compute nodes,
so live migration is very fast).
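For reference, a sketch of how the compute nodes could mount the volume at /nova
(the volume name is an assumption; storage1/storage2 are the nodes named in this
message):

  # /etc/fstab on each compute node
  storage1:/nova-vol  /nova  glusterfs  defaults,_netdev,backupvolfile-server=storage2  0 0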
I have 2 storage nodes (storage1 and
: Distribute
Volume ID: 83c9d6f3-0288-4358-9fdc-b1d062cc8fca
Status: Started
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: 12.12.123.54:/path/gluster/36779974/teoswitch_default_storage
Brick2: 12.12.123.55:/path/gluster/36779974/teoswitch_default_storage
Any ideas?
Marco Zanger
Phone 54 11
-teoswitch_custom_music-client-1: connection to SERVER_B_IP:24010 failed
(Connection timed out)
Any ideas? From the logs I see nothing but confirmation that A cannot reach
B, which makes sense since B is down. But A is not, and its volume should
still be accessible. Right?
Regards,
Marco
Marco
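One thing worth knowing here: when a replica node drops off the network, the
native client blocks for network.ping-timeout (42 seconds by default) before
carrying on with the surviving brick. It can be lowered, e.g. (the volume name
is inferred from the log line above, so treat it as an assumption):

  gluster volume set teoswitch_custom_music network.ping-timeout 10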
,max_read=131072)
I've used both glusterfs and nfs for my tests, but when server B is down
(unreachable from A) we cannot access (neither read nor write) the volumes within A.
Is this related to some configuration?
Marco Zanger
Phone 54 11 5299-5400 (int. 5501)
Clay 2954
force gluster to use ports from 24009 all the time and
use the rest sequentially?
Regards,
Marco
in 2787)
STAT REPLY (Call in 2786)
What are they? Is this right? Is this traffic expected, or is there a way
to reduce this amount of traffic?
Regards,
Marco
Hello,
We have 2 replicated Gluster nodes, and we're getting the following error log.
We cannot mount or access our Gluster system at the moment.
We've been getting this error over and over again since Sunday night:
[2013-03-24 21:43:00.830443] W [client3_1-fops.c:1781:client3_1_fxattrop_cbk]
Hello,
Sometimes when I'm editing a file located on Gluster with vi I get
this message:
W12: Warning: File test.sh has changed and the buffer was changed
in Vim as well
See :help W12 for more info.
[O]K, (L)oad File:
I have no idea what could be causing that, and I'm scared.
Thank you.
You have to cc the list or nobody else will see this.
On Mon, 2013-02-04 at 17:42 +0100, Guillermo Marco wrote:
Hello,
I'm the only user on the system at this moment so it's not possible.
Does Gluster modify any part of the file indexing or something?
On 02/04/2013 05:33 PM, James wrote
Hello,
Yes, I have my gluster system on /mnt/gluster/
If I'm editing a file with vi like this: $ vi /mnt/gluster/myfile.sh
sometimes I get the error mentioned below.
On 02/04/2013 05:57 PM, Joe Julian wrote:
Are you editing files directly on the brick?
Guillermo Marco guillermo.ma
Hello, I'm compiling glusterfs for Debian squeeze.
When I run make, I see these parameters:
GlusterFS configure summary
===
FUSE client: yes
Infiniband verbs: yes
epoll IO multiplex: yes
argp-standalone: no
fusermount: no
readline: no
georeplication: yes
I
2012/6/29 Brian Candler b.cand...@pobox.com:
Do you have the relevant -dev packages installed? e.g. libreadline-dev,
libfuse-dev?
Thank you for your reply.
I have installed libreadline-dev and libfuse-dev and now this is the
result of the make command:
GlusterFS configure summary
2012/6/29 Marco Agostini comunelev...@gmail.com:
How can I resolve these?
argp-standalone : no
fusermount : no
I've executed this:
./configure --enable-fusermount
and this is the result:
GlusterFS configure summary
===
FUSE client: yes
Infiniband
2012/6/29 Brian Candler b.cand...@pobox.com:
You just read configure.ac
dnl Check for argp
AC_CHECK_HEADER([argp.h], AC_DEFINE(HAVE_ARGP, 1, [have argp]))
AC_CONFIG_SUBDIRS(argp-standalone)
BUILD_ARGP_STANDALONE=no
if test x${ac_cv_header_argp_h} = xno; then
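For reference, a possible set of build steps on Debian squeeze when fusermount
and argp-standalone show up as "no" in the configure summary (the package list is
an assumption, not taken from this thread):

  apt-get install build-essential flex bison pkg-config \
      libreadline-dev libfuse-dev libssl-dev libxml2-dev python-dev
  ./configure --enable-fusermount
  make && make install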
Use cache=writeback
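i.e. start the guest with writeback caching on the disk image that lives on the
gluster mount; a sketch with qemu directly (the image path is an example):

  qemu-system-x86_64 -m 2048 -enable-kvm \
      -drive file=/mnt/gluster/vm1.qcow2,if=virtio,cache=writeback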
-- Forwarded message --
From: Marco Agostini comunelev...@gmail.com
Date: 23 Jun 2012 22:53
Subject: Re: [Gluster-users] Can't run KVM Virtual Machines on a Gluster
volume
To: Fernando Frediani (Qube) fernando.fredi...@qubenet.net
On 23 Jun 2012 at 22:16
2011/7/13 Christopher Anderlik christopher.ander...@xidras.com:
hi list,
we use glusterfs 3.2.1 with this configuration (one server - one client):
Volume Name: office-data
Type: Replicate
Status: Started
Number of Bricks: 1
Transport-type: tcp
Bricks:
Brick1:
2011/7/9 gluster1...@akxnet.de:
How can I prevent gluster from accessing these ports?
post some information too:
- gluster version ?
- client used to mount gluster (native client, nfs ?)
Hi, when a mount command fails (for some reason: server unavailable,
volume doesn't exist) the system doesn't show any error message.
I know that I can find the error on the client side in /var/log/glusterfs/*.log
I'm using glusterfs 3.2.1 and glusterd 3.2.1
This is the command that I'm using on
2011/7/11 Darren Austin darren-li...@widgit.com:
I had noticed the same thing, and intended to post a note about it too :)
The way I worked around this issue is to use the mountpoint command
directly after the mount.
Eg:
mount -t glusterfs server:volume /mnt
mountpoint /mnt
echo $?
.
--
You can verify that directly from your client log... you can read that:
[2011-06-28 13:28:17.484646] I
[client-lk.c:617:decrement_reopen_fd_count] 0-data-volume-client-0:
last fd open'd/lock-self-heal'd - notifying CHILD-UP
Marco
2011/6/29 Darren Austin darren-li...@widgit.com:
There is no sync between the two servers in the situation I outlined, and the
client cannot trigger a self-heal as you suggest because the client is
effectively dead in the water until it's forcibly killed and re-mounted.
some information:
-
2011/6/29 Marco Agostini comunelev...@gmail.com:
- are all GlusterFS versions (server and client) the same? (glusterfs -v)
from your mnt.log I've seen that you are using GlusterFS-3.1.0 on your server.
Actually I'm using Gluster 3.2.1 and it works very well.
I'm running tests similar to yours
Hi, I have read the documentation on www.gluster.com but I don't
understand the difference between some volume types.
I know that Replicated is similar to a RAID1 across 2 servers, but I don't
understand how Distributed and Striped volumes work.
Can someone help me understand?
thank you very much.
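A sketch of the three types with the 3.x-era CLI (server names and brick paths
are placeholders):

  # Distributed: each file is placed whole on exactly one brick (capacity aggregation)
  gluster volume create dist-vol server1:/data/brick-dist server2:/data/brick-dist
  # Replicated: every file is mirrored across the bricks in the replica set (RAID1-like)
  gluster volume create repl-vol replica 2 server1:/data/brick-rep server2:/data/brick-rep
  # Striped: a single file is split into fixed-size chunks spread across the bricks
  gluster volume create stripe-vol stripe 2 server1:/data/brick-str server2:/data/brick-str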