Hi Pat,
I'm assuming you are using gluster native (fuse mount). If it helps, you
could try mounting it via gluster NFS (gnfs) and then see if there is an
improvement in speed. Fuse mounts are slower than gnfs mounts but you
get the benefit of avoiding a single point of failure. Unlike fuse
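For reference, the two mount styles being compared look roughly like this (hostname, volume name, and mount points are placeholders, not from this thread):

```shell
# Gluster native (FUSE) mount: the client talks to all bricks itself,
# so there is no single server to fail.
mount -t glusterfs server1:/myvol /mnt/gluster

# Gluster NFS (gnfs) mount: plain NFSv3 against one chosen server.
mount -t nfs -o vers=3,mountproto=tcp server1:/myvol /mnt/gluster-nfs
```

Both commands need root and a reachable gluster server; they are shown only to illustrate the two access paths.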
On 04/07/2017 11:34 PM, Jamie Lawrence wrote:
Greetings, Glusterites -
I have a suboptimal situation, and am wondering if there is any way to create a
replica-3 distributed/replicated volume with three machines. I saw in the docs
that the create command will fail with multiple bricks on the
On Fri, Apr 7, 2017 at 11:24 PM, Tahereh Fattahi
wrote:
> Hi
> I have 3 gluster servers and one volume. Now, I want to change the source
> code of the servers and install again.
> Is it enough to stop and delete the volume, make and install the code? Or
> before installation I
On Sat, Apr 8, 2017 at 12:02 AM, Alvin Starr wrote:
> Thanks for the help.
>
> That seems to have fixed it.
>
> We were seeing hangs clocking up at a rate of a few hundred a day and for
> the last week there have been none.
>
>
>
Thanks for confirming this. Good to know one of
Hi,
We noticed dramatically slower writes to a gluster disk compared to
writes to an NFS disk. Specifically, when using dd (data duplicator) to
write a 4.3 GB file of zeros:
* on NFS disk (/home): 9.5 Gb/s
* on gluster disk (/gdata): 508 Mb/s
The gluster disk is 2 bricks joined
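The exact dd invocation isn't shown above; below is a sketch of the kind of command involved, shrunk to 10 MiB and pointed at /tmp so it is harmless to run (in the real test, of= would point at the NFS or gluster mount and count would be around 4300):

```shell
# Write 10 MiB of zeros in 1 MiB blocks; dd reports the throughput on stderr.
dd if=/dev/zero of=/tmp/dd_zero.bin bs=1M count=10
stat -c %s /tmp/dd_zero.bin   # prints 10485760
```

Throughput numbers from /tmp say nothing about the network filesystems, of course; only the target path changes between the two measurements.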
Thanks for the help.
That seems to have fixed it.
We were seeing hangs clocking up at a rate of a few hundred a day and
for the last week there have been none.
On 03/31/2017 05:54 AM, Mohit Agrawal wrote:
Hi,
As you have mentioned the client/server version in the thread, it shows
the package version
Greetings, Glusterites -
I have a suboptimal situation, and am wondering if there is any way to create a
replica-3 distributed/replicated volume with three machines. I saw in the docs
that the create command will fail with multiple bricks on the same peer; is
there a way around that/some other
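For what it's worth, gluster interprets each consecutive group of replica-count bricks as one replica set, so with three peers a replica-3 distributed/replicated volume can be laid out so that no set has two bricks on the same peer; appending `force` overrides the same-peer check if you genuinely need it. Hostnames and brick paths below are hypothetical:

```shell
# Two replica sets of three bricks each; every set spans all three peers.
gluster volume create myvol replica 3 \
  host1:/data/brick1 host2:/data/brick1 host3:/data/brick1 \
  host1:/data/brick2 host2:/data/brick2 host3:/data/brick2
gluster volume start myvol
```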
Hi
I have 3 gluster servers and one volume. Now, I want to change the source
code of the servers and install again.
Is it enough to stop and delete the volume, make and install the code? Or
before installation I should delete some files or folders?
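A rough sequence for rebuilding from modified source, assuming the standard autotools build (paths are illustrative, not from the thread). Note that the volume does not normally have to be deleted just to replace binaries; its configuration lives under /var/lib/glusterd:

```shell
# On each server:
gluster volume stop myvol            # quiesce I/O before swapping binaries
systemctl stop glusterd
cd /path/to/glusterfs-source
./autogen.sh && ./configure && make && make install
systemctl start glusterd
gluster volume start myvol
```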
Volume type:
Disperse Volume 8+2 = 1080 bricks
The first time we added 8+2 * 3 sets, it started giving issues when
listing folders, so we remounted the mount point and it worked fine.
The second time we added 8+2 * 13 sets, and it had the same issue:
when listing, a folder was returned as empty or not
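Presumably the expansions described above were done with add-brick followed by a rebalance; a sketch of that shape plus the remount workaround mentioned (volume name, hosts, and paths are hypothetical, and the brick list is abbreviated):

```shell
# Add one more 8+2 disperse set (all 10 bricks would be listed in practice):
gluster volume add-brick myvol host1:/bricks/new1 host2:/bricks/new2  # ... through host10
gluster volume rebalance myvol start

# Workaround used when listings came back empty: remount the client.
umount /mnt/myvol && mount -t glusterfs host1:/myvol /mnt/myvol
```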
You mean that if old data is present in the brick and the volume is not
present, then it should be visible in our brick dir /opt/lvmdir/c2/brick?
On Fri, Apr 7, 2017 at 3:04 PM, Ashish Pandey wrote:
>
> If you are creating a fresh volume, then it is your responsibility to have
> clean
If you are creating a fresh volume, then it is your responsibility to have
clean bricks.
I don't think gluster will give you a guarantee that it will be cleaned up.
So you have to investigate if you have any previous data on that brick.
What I meant was that you have to find out the number
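One way to count what is actually sitting on a brick before reusing it. The demo below runs under /tmp; a real check would point at the brick path, e.g. /opt/lvmdir/c2/brick:

```shell
brick=/tmp/demo-brick                 # stand-in for a real brick path
mkdir -p "$brick/.glusterfs"
touch "$brick/leftover-file"          # simulate stale data
# Count every entry below the brick root; a freshly created, untouched
# directory would report 0, so anything higher means leftovers to inspect.
find "$brick" -mindepth 1 | wc -l     # prints 2 here
```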
Hi Ashish,
I don't think the count of files on the mount point and in .glusterfs/
will remain the same, because I created one file on the gluster mount
point but the count in .glusterfs/ increased by 3. The reason is that it
creates .glusterfs/xx/xx/x..., which is two parent dirs and
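The bookkeeping being described: for each file, gluster adds a hard link under .glusterfs/ at a path built from the file's gfid (first two hex characters, then the next two, then the full gfid), creating the two parent directories on first use, hence +3 entries for a fresh prefix. A bash sketch of the path mapping (the gfid value is made up, not from the thread):

```shell
gfid="6ba7b810-9dad-11d1-80b4-00c04fd430c8"   # example gfid only
echo ".glusterfs/${gfid:0:2}/${gfid:2:2}/${gfid}"
# prints .glusterfs/6b/a7/6ba7b810-9dad-11d1-80b4-00c04fd430c8
```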
Hi Ashish,
Even if there is old data, shouldn't it be cleared by gluster itself?
Or do we have to do it manually?
Regards,
Abhishek
On Fri, Apr 7, 2017 at 1:31 PM, Ashish Pandey wrote:
>
> Are you sure that the bricks which you used for this volume was not having
>
Is there any update?
On Thu, Apr 6, 2017 at 12:45 PM, ABHISHEK PALIWAL
wrote:
> Hi,
>
> We are currently experiencing a serious issue w.r.t volume space usage by
> glusterfs.
>
> In the below outputs, we can see that the size of the real data in /c
> (glusterfs