Hello,
Is there any way to downgrade a GlusterFS cluster? Given the
performance issues that I have seen with GlusterFS 6 and 7 (reported
elsewhere on this mailing list), I am now considering downgrading to
GlusterFS 3.13.
I have set up a test cluster, copied some files onto it, and tried to
Dear Rafi, all,
please find attached two profile files; both are profiling the same command:
```
time rsync -a $SRC root@172.23.187.207:/glusterfs
```
In both cases, the target is an Ubuntu 16.04 VM mounting a pure
distributed GlusterFS 7 filesystem on `/glusterfs`. The GlusterFS 7
cluster is
> > Is it possible for you to repeat the test by disabling ctime or increasing
> > the inode size to a higher value say 1024?
>
> Sure! How do I disable ctime or increase the inode size?
Would this suffice to disable `ctime`?
```
sudo gluster volume set glusterfs ctime off
```
Can it be done on a
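For the record, a sketch of both knobs, assuming the volume is really named `glusterfs` (as in the command above) and that the bricks sit on XFS; the brick device below is a placeholder:

```shell
# Disable the ctime feature on the volume; this is a regular volume
# option and can be changed at runtime:
sudo gluster volume set glusterfs ctime off

# The inode size, by contrast, is a format-time option of the brick
# filesystem (XFS), so changing it means recreating the brick:
sudo mkfs.xfs -f -i size=1024 /dev/sdb1   # /dev/sdb1 is hypothetical
```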
Hello Rafi,
many thanks for looking into this!
> Is it possible for you to repeat the test by disabling ctime or increasing
> the inode size to a higher value say 1024?
Sure! How do I disable ctime or increase the inode size?
Ciao,
R
Community Meeting Calendar:
APAC Schedule -
Hello Strahil,
> You can set your mounts with 'noatime,nodiratime' options for better
> performance.
Thanks for the suggestion! I'll try that eventually, but I don't
think `noatime` will make much difference on a write-mostly workload.
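For reference, a minimal sketch of the suggested mount options, reusing the server and mount point names that appear elsewhere in this thread:

```shell
# Remount the GlusterFS client with noatime/nodiratime:
sudo mount -t glusterfs -o noatime,nodiratime \
    tp-glusterfs5:/glusterfs /net/glusterfs

# Or persistently, via /etc/fstab:
# tp-glusterfs5:/glusterfs  /net/glusterfs  glusterfs  noatime,nodiratime,_netdev  0 0
```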
Thanks,
R
Hello Amar,
> Can you please check the profile info [1] ? That may give some hints.
I am attaching the output of `sudo gluster volume profile info` as a text file
to preserve formatting. This covers the time from Friday night to
Monday morning;
during this time the cluster has been the target
Dear Strahil,
> Have you noticed if slowness is only when accessing the files from
> specific node ?
I am copying a large set of image files into the GlusterFS volume --
the slowness is in the aggregate performance (e.g., it takes ~300 minutes
to copy 376GB worth of files). Given the high
Dear Amar,
> Can you please check the profile info [1] ? That may give some hints.
I have started profiling, will check what info has been collected on Monday.
Many thanks for the suggestion!
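A sketch of the profiling cycle (volume name `glusterfs` assumed):

```shell
# Enable profiling on the volume:
sudo gluster volume profile glusterfs start
# ...let the workload (e.g. the rsync copy) run...
# Collect the counters into a file:
sudo gluster volume profile glusterfs info > profile-info.txt
# Disable profiling once done:
sudo gluster volume profile glusterfs stop
```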
Riccardo
Hello Strahil
> What options do you use in your cluster?
I'm not sure what exact info you would like to see?
Here's how clients mount the GlusterFS volume:
```
$ fgrep gluster /proc/mounts
tp-glusterfs5:/glusterfs /net/glusterfs fuse.glusterfs
```
Hello all,
I have done some further testing and found out that I get the same bad
performance with a freshly-installed cluster running 6.6. The
performance drop is also there with plain `rsync` into the GlusterFS
mountpoint, so SAMBA plays no role in it. In other words, for my
installations,
> In previous discussions it was confirmed by others that v5.5 is a little bit
> slower than v3.12 , but I think that most of those issues were fixed in v6 .
> What was the exact version you have?
6.5 according to the package version; op-version is 6.
Thanks,
Riccardo
Hello Anoop,
many thanks for your fast reply! My comments inline below:
> > [1]: I have tried both the config where SAMBA 4.8 is using the
> > vfs_glusterfs.so backend, and the one where `smbd` is just writing to
> > a locally-mounted directory. Doesn't seem to make a difference.
>
> Samba
Hello,
I recently upgraded[2] our servers from GlusterFS 3.8 (from the old
GlusterFS repo for Ubuntu 16.04) to 6.0 (from the GlusterFS PPA for
Ubuntu 16.04 "xenial").
The sustained write performance dropped to nearly half of what it was before.
We copy a large (a few 10'000s) number of image files (each 2
Thanks all for the help! The cluster has been up for a few hours now
with no reported errors, so I guess the replacement of the server
ultimately went fine ;-)
Ciao,
R
___
Gluster-users mailing list
Gluster-users@gluster.org
Hello Atin,
> Check cluster.op-version, peer status, volume status output. If they are all
> fine you’re good.
Both `op-version` and `peer status` look fine:
```
# gluster volume get all cluster.max-op-version
Option Value
--
```
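The suggested checks can be sketched as follows (read-only commands, safe to run on a live cluster):

```shell
gluster volume get all cluster.op-version      # version currently in effect
gluster volume get all cluster.max-op-version  # highest version the cluster supports
gluster peer status                            # all peers should be "Connected"
gluster volume status                          # all bricks should be online
```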
I managed to put the reinstalled server back into connected state with
this procedure:
1. Run `for other_server in ...; do gluster peer probe $other_server;
done` on the reinstalled server
2. Now all the peers on the reinstalled server show up as "Accepted
Peer Request", which I fixed with the
Hello,
a couple of days ago, the OS disk of one of the servers of a local GlusterFS
cluster suffered a bad crash, and I had to reinstall everything from
scratch.
However, when I restart the GlusterFS service on the server that has
been reinstalled, I see that it sends back a "RJT" response to other
Hello,
following the announcement of GlusterFS 6, I tried to install the
package from the Ubuntu PPA on a 16.04 "xenial" machine, only to find
out that GlusterFS 6 is only packaged for Ubuntu "bionic" and up.
Is there an online page with a table or matrix detailing what versions
are packaged for
...and here's the statedump of the client, snapd and brick snap
processes from the 4.1 test cluster.
(File names are as output by the GlusterD processes, so it looks
like the `snapd` daemon
has an off-by-one error in the statedump.)
Thanks,
Riccardo
glusterdump.1726.dump.1532446976.gz
Hello,
I have set up a test cluster with GlusterFS 4.1 and Ubuntu 16.04.5 and
I get the same behavior:
`ls .snaps/test/` hangs indefinitely in a getdents() system call. I
can mount and list the snapshot
just fine with `mount -t glusterfs`; it's just the USS feature that is
not working.
Is this
Re-sending the log files as attachments, to avoid MUAs corrupting them...
Ciao,
R
### client logs:
```
[2018-07-19 13:30:10.206657] I [glusterfsd-mgmt.c:52:mgmt_cbk_spec] 0-mgmt: Volume file changed
[2018-07-19 13:30:10.316710] I [MSGID: 114007] [client.c:2402:client_check_remote_host]
```
Hello Rafi,
many thanks for your prompt reply.
I have now tried to do:
ls /data/opt/.snaps/test_*/
which hung and I interrupted it with Ctrl+C a few moments later.
I attach below the DEBUG-level logs of 1 brick and the client.
When should I take the statedump and how do I send them?
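For completeness, a sketch of how statedumps are usually generated (volume name `glusterfs` assumed; dump files land under `/var/run/gluster/` by default):

```shell
# Statedump of all brick processes of a volume:
sudo gluster volume statedump glusterfs

# Statedump of a FUSE client: send SIGUSR1 to the glusterfs process
# (the pgrep pattern below is only a rough guess):
sudo kill -USR1 "$(pgrep -f 'glusterfs.*fuse')"
ls /var/run/gluster/
```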
Hello Rafi,
mounting as a regular volume works fine:
```
ubuntu@slurm-master-001:/var/log/glusterfs$ sudo mount -t glusterfs glusterfs-server-001:/snaps/test_GMT-2018.07.18-10.02.05/glusterfs /mnt
ubuntu@slurm-master-001:/var/log/glusterfs$ ls /mnt/
active filesystem homes jobdaemon opt share
```
Hello,
I am trying the USS snapshots on an existing cluster (GlusterFS 3.12.9
on Ubuntu 16.04, installed from the DEB packages on GlusterFS.org).
I can successfully create a snapshot, and it is correctly listed under
the volume's `.snaps` directory everywhere; for example:
$ stat
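The snapshot/USS workflow described above can be sketched as follows (volume name `glusterfs` assumed; snapshot names get a GMT timestamp appended, as in the mount example elsewhere in this thread):

```shell
# Create a snapshot of the volume (the generated name gets a
# timestamp appended, e.g. test_GMT-2018.07.18-10.02.05):
sudo gluster snapshot create test glusterfs
sudo gluster snapshot activate test_GMT-2018.07.18-10.02.05

# Expose snapshots to clients through the virtual .snaps directory:
sudo gluster volume set glusterfs features.uss enable
ls /path/to/mount/.snaps/
```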
ntain the fix already? Or is the fix Samba-side?
[1] https://launchpad.net/~gluster/+archive/ubuntu/glusterfs-3.12
Thanks,
Riccardo
--
Riccardo Murri / Email: riccardo.mu...@gmail.com / Tel.: +41 77 458 98 32
Hi John,
thanks for your remark. However:
2017-12-05 16:47 GMT+01:00 Jim Kinney :
> Keep in mind a local disk is 3,6,12 Gbps but a network connection is
> typically 1Gbps. A local disk quad in raid 10 will outperform a 10G ethernet
> (especially using SAS drives).
Well, in
Hello,
I'm trying to set up a SAMBA server serving a GlusterFS volume.
Everything works fine if I locally mount the GlusterFS volume (`mount
-t glusterfs ...`) and then serve the mounted FS through SAMBA, but
the performance is slower by a factor of 2-3x compared to a SAMBA server with a
local ext4
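A sketch of a `vfs_glusterfs`-backed share for comparison (the share name, volume name, and log path below are placeholders):

```ini
[gluster-share]
    path = /
    vfs objects = glusterfs
    glusterfs:volume = glusterfs
    glusterfs:volfile_server = localhost
    glusterfs:logfile = /var/log/samba/glusterfs-gluster-share.log
    kernel share modes = no
    read only = no
```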
o your questions is inline below.
> On 11/17/2016 07:22 PM, Riccardo Murri wrote:
> > Hello,
> >
> > we are trying out GlusterFS as the working filesystem for a compute
> > cluster;
> > the cluster is comprised of 57 compute nodes (55 cores each), acting as
>
Hello,
Micha Ober ha scritto:
> are you using the 3.7 branch since it was released or did you use
> another version before?
The cluster was installed with 3.7; it didn't exist in the 3.4 days.
(Actually, it's a short-lived cluster of VMs running on top of OpenStack.)
> I don't
On 2016-12-08, Micha Ober wrote:
> There have been no reports from other users in *this* thread until now,
> but I have found at least one user with a very similar problem in an
> older thread:
I have posted an issue with similar symptoms on Nov. 17 under the title
"files disappearing and
```
/dpkg/status
 3.7.6-1ubuntu1 500
 500 http://nova.clouds.archive.ubuntu.com/ubuntu xenial/universe amd64 Packages
 500 http://archive.ubuntu.com/ubuntu xenial/universe amd64 Packages
```
--
Riccardo Murri, Anna-Heer-Strasse 10, CH-8057 Zürich, Switzerland