Hello,
I can't stop one of my GlusterFS 3.12.7 3-way replica volumes using the
standard gluster volume stop command, as you can see below:
$ sudo gluster volume stop myvolume
Stopping volume will make its data inaccessible. Do you want to continue? (y/n)
y
volume stop: myvolume: failed:
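For anyone hitting the same empty "failed:" message, a few diagnostics that usually reveal the actual reason (the volume name follows the example above; the log path assumes a default install and may differ on your distribution):

```shell
# Is the volume (and each brick process) still considered online?
sudo gluster volume status myvolume

# glusterd normally logs the concrete reason the stop was rejected
# (default log path; may differ depending on your packaging)
sudo tail -n 50 /var/log/glusterfs/glusterd.log

# A stale cluster-wide lock from a disconnected peer can also block
# volume operations, so check peer health too
sudo gluster peer status
```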
Hi,
We postponed this and I did not announce this to the lists. The number
of bugs fixed against 3.10.12 is low, and I decided to move this to the
30th of Apr instead.
Is there a specific fix that you are looking for in the release?
Thanks,
Shyam
On 04/06/2018 11:47 AM, Marco Lorenzo Crociani wrote:
Hi,
are there any news for 3.10.12 release?
Regards,
--
Marco Crociani
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users
Raghavendra,
Thanks! I'll get you this info within the next few days and will file a
bug report at the same time.
For what it's worth, we were able to reproduce the issue on a completely
new cluster running 3.13. The IO pattern that most easily causes it to
fail is a VM image format with
On 06.04.2018 14:46, Prashanth Pai wrote:
Hi Dmitry,
How many nodes does the cluster have?
If the quorum is lost (majority of nodes are down), additional
recovery steps are necessary to bring it back up:
https://github.com/gluster/glusterd2/wiki/Recovery
Hello!
just 2 nodes and looks like
Hi Jose,
By switching into pure distribute volume you will lose availability if
something goes bad.
I am guessing you have a nX2 volume.
If you want to preserve one copy of the data in all the distributes, you
can do that by decreasing the replica count in the remove-brick operation.
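To make the remove-brick variant concrete, here is a sketch for a hypothetical 2x2 (nX2 with n=2) volume; the volume name, server names, and brick paths are invented for illustration:

```shell
# Hypothetical layout: distributed-replicate 2x2, replica pairs
#   server1:/data/brick + server2:/data/brick
#   server3:/data/brick + server4:/data/brick
# Drop to pure distribute by removing one brick per replica pair
# while lowering the replica count to 1. 'force' acknowledges that
# the removed copies are discarded rather than migrated.
sudo gluster volume remove-brick myvolume replica 1 \
    server2:/data/brick server4:/data/brick force
```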
If you have
I restarted rsync, and this has been sitting there for almost a minute,
having barely moved a few bytes in that time:
2014/11/545b06baa3d98/com.google.android.apps.inputmethod.zhuyin-2.1.0.79226761-armeabi-v7a-175-minAPI14.apk
6,389,760  45%  18.76kB/s  0:06:50
I straced each of the 3
Hi again,
I'd like to expand on the performance issues and plead for help. Here's one
case which shows these odd hiccups: https://i.imgur.com/CXBPjTK.gifv.
In this GIF where I switch back and forth between copy operations on 2
servers, I'm copying a 10GB dir full of .apk and image files.
On
You can use POSIX locks, i.e. fcntl-based advisory locks, on glusterfs just
like any other fs.
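A quick shell-level illustration of advisory locking on a mounted path (note: flock(1) uses the flock(2) call; an application wanting fcntl-based record locks would call fcntl(2)/lockf(3) directly, and both kinds are advisory on glusterfs):

```shell
# Advisory-lock demo; on a real setup LOCKFILE would live on the
# glusterfs mount. The lock only coordinates cooperating processes
# that also take the lock -- it does not block other writers.
LOCKFILE=/tmp/demo.lock
(
    flock -n 9 || { echo "another process holds the lock"; exit 1; }
    echo "lock acquired"
    # ... safely modify the protected file here ...
) 9>"$LOCKFILE"
```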
On Wed, Apr 4, 2018 at 8:30 AM, Lei Gong wrote:
> Hello there,
>
>
>
> I want to know if there is a feature that allows a user to add a lock on a
> file while their app is modifying that file, so
Hi,
I'm trying to squeeze performance out of Gluster on 4 machines (80GB RAM,
20 CPUs each) where Gluster runs on attached block storage (Linode) as 4
replicated bricks, and so far everything I've tried results in sub-optimal
performance.
There are many files - mostly images, several million - and many
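For completeness, these are the tunables I have been experimenting with so far; the values are my own guesses for this workload, not recommendations, and the volume name is a placeholder:

```shell
# Tunables tried so far for the many-small-files workload
# (values are experiments, not recommendations)
sudo gluster volume set myvolume performance.cache-size 1GB
sudo gluster volume set myvolume performance.io-thread-count 32
sudo gluster volume set myvolume client.event-threads 4
sudo gluster volume set myvolume server.event-threads 4
sudo gluster volume set myvolume cluster.lookup-optimize on
sudo gluster volume set myvolume performance.parallel-readdir on
```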