On 12/10/2015 07:15 PM, Ankireddypalle Reddy wrote:
Hi,
Please let me know in case you need any more details. Even for write-only
operations fuse seems to outperform libgfapi. Is it because of disperse
volumes? Also I noticed a lot of data loss when I use libgfapi async I/O for
On 12/11/2015 08:30 AM, Tim wrote:
Hey List,
re:
https://www.gluster.org/pipermail/gluster-users/2015-August/023030.html
Is this possible yet?
This will make it to glusterfs-3.8.
-Ravi
As I just tried replacing an arbiter volume using replace-brick and it
seems that it's now using the
On 12/10/2015 03:31 AM, Ankireddypalle Reddy wrote:
Hi,
I upgraded my setup to gluster 3.7.3 and tested writes both through fuse
and through libgfapi. Attached are the profiles generated from fuse and
libgfapi. The test program essentially writes 1 blocks each
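For anyone wanting to reproduce the comparison, profiles like the attached ones come from Gluster's built-in profiler; a minimal sketch (VOLNAME is a placeholder for the actual volume name, which is not given in the excerpt):

```shell
# Enable per-brick I/O statistics for the volume
gluster volume profile VOLNAME start
# ... run the fuse or libgfapi write test here ...
# Dump cumulative latency/throughput stats per file operation
gluster volume profile VOLNAME info
gluster volume profile VOLNAME stop
```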
Hey List,
re: https://www.gluster.org/pipermail/gluster-users/2015-August/023030.html
Is this possible yet?
As I just tried replacing an arbiter volume using replace-brick and it
seems that it's now using the same amount of space as the other bricks.
Cheers,
Tim
Hi,
Please let me know in case you need any more details. Even for write-only
operations fuse seems to outperform libgfapi. Is it because of disperse
volumes? Also I noticed a lot of data loss when I use libgfapi async I/O for
disperse volumes.
Thanks and Regards,
Ram
Hi Gluster List,
I'm trying to configure a 2 node GlusterFS replicated volume with CTDB
managing SMB failover and I'm wondering if the behaviour I am seeing is
normal...
If I am playing back a large video file from a client (both Windows and
Linux) mounting the SMB share and issue `ctdb moveip`
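For context, the failover step being tested can be driven from the CTDB command line; a minimal sketch (the public IP and node number are placeholders, not values from the post):

```shell
# List public IPs and which node (PNN) currently hosts each one
ctdb ip
# Move a public IP to node 1; SMB clients will briefly stall while they
# reconnect to the new node -- some hiccup during playback is expected
ctdb moveip 192.168.1.100 1
# Confirm all nodes are healthy afterwards
ctdb status
```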
On 12/11/2015 09:37 AM, Ravishankar N wrote:
As I just tried replacing an arbiter volume using replace-brick and
it seems that it's now using the same amount of space as the other
bricks.
There is a bug in glusterfs 3.7.6 where, if glusterd gets restarted
for whatever reason, and you do
On 12/09/2015 07:03 AM, Srikanth Mampilakal wrote:
However, if I do dd to check the copy speed, I get the below result.
[root@ClientServer ~]# time sh -c "dd if=/dev/zero
of=/mnt/testmount/test.tmp bs=4k count=2 && sync"
2+0 records in
2+0 records out
8192 bytes (8.2 kB) copied,
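Note that `bs=4k count=2` writes only 8 KiB, so the measured time mostly reflects open/flush latency rather than streaming throughput. A larger sequential write with an explicit flush is more representative; a sketch, assuming `/mnt/testmount` is the Gluster mount from the post (it falls back to a local scratch directory if that mount is absent):

```shell
# Use the Gluster mount if present, otherwise a local scratch dir
MNT="${MNT:-/mnt/testmount}"
[ -d "$MNT" ] || MNT="$(mktemp -d)"
# 64 MiB sequential write; conv=fdatasync makes dd flush before it
# reports the throughput figure, so caches don't inflate the number
dd if=/dev/zero of="$MNT/test.tmp" bs=1M count=64 conv=fdatasync
```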
Response inline.
- Original Message -
> From: "Srikanth Mampilakal"
> To: gluster-users@gluster.org
> Sent: Thursday, December 10, 2015 7:59:04 PM
> Subject: Re: [Gluster-users] Gluster - Performance issue while copying bulk
> files/folders
>
>
>
> Hi
Hi members,
I'd really appreciate it if you could share your thoughts or any feedback
on resolving the slow copy issue.
Regards
Srikanth
On 10-Dec-2015 2:12 AM, "Srikanth Mampilakal"
wrote:
> Hi,
>
>
> I have production gluster file service used as a shared storage where
On 09.12.2015 at 22:33, Lindsay Mathieson wrote:
On 10/12/2015 3:15 AM, Udo Giacomozzi wrote:
These were the commands executed on node #2 during step 6:
gluster volume add-brick "systems" replica 3
metal1:/data/gluster/systems
gluster volume heal "systems" full # to trigger
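After an add-brick plus full heal like the above, progress can be watched with the built-in heal status commands; a sketch, using the "systems" volume name from the thread:

```shell
# List files still pending heal on each brick
gluster volume heal systems info
# Summary counters of entries needing heal per brick
gluster volume heal systems statistics heal-count
```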
Hello,
I don't use version 3.7 in production.
I think you need to try the new options (I don't know how they improve work
with big files, but they should improve work with small files):
gluster v set prodcmsroot client.event-threads 4
gluster v set prodcmsroot server.event-threads 4
gluster v set prodcmsroot
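The current values of these tunables can be checked before and after changing them; a sketch, using the `prodcmsroot` volume name from this thread (`gluster volume get` requires 3.7 or newer):

```shell
# Show the effective value of each event-thread tunable
gluster volume get prodcmsroot client.event-threads
gluster volume get prodcmsroot server.event-threads
```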