Re: [Gluster-users] Performance: lots of small files, hdd, nvme etc.

2023-05-01 Thread Hu Bert
Hi there, well... i know that you can read from the bricks themselves, but when there are 7 bricks with each 1/7 of the data - which one do you choose? ;-) Maybe ONE raid1 or raid10 and a replicate 3 performs better than a "Number of Bricks: 5 x 3 = 15" Distributed-Replicate... systems are under

Re: [Gluster-users] Performance: lots of small files, hdd, nvme etc.

2023-04-03 Thread gluster-users
Hello, you can read files from the underlying filesystem first (ext4, xfs...), for ex: /srv/glusterfs//brick. As a fallback you can check the mounted glusterfs path, to heal missing local node entries, ex: /mnt/shared/www/... You need only to write to the mount.glusterfs mount point. On 3/30/2023

Re: [Gluster-users] Performance: lots of small files, hdd, nvme etc.

2023-03-30 Thread Hu Bert
Hi Diego, > > Just an observation: is there a performance difference between a sw > > raid10 (10 disks -> one brick) or 5x raid1 (each raid1 a brick) > Err... RAID10 is not 10 disks unless you stripe 5 mirrors of 2 disks. Maybe I was imprecise? md3 : active raid10 sdh1[7] sde1[4] sda1[0]

Re: [Gluster-users] Performance: lots of small files, hdd, nvme etc.

2023-03-30 Thread Diego Zuccato
Well, you have *way* more files than we do... :) On 30/03/2023 11:26, Hu Bert wrote: Just an observation: is there a performance difference between a sw raid10 (10 disks -> one brick) or 5x raid1 (each raid1 a brick) Err... RAID10 is not 10 disks unless you stripe 5 mirrors of 2 disks.

[Gluster-users] Performance: lots of small files, hdd, nvme etc.

2023-03-30 Thread Hu Bert
Hello there, as Strahil suggested a separate thread might be better. current state: - servers with 10TB hdds - 2 hdds build up a sw raid1 - each raid1 is a brick - so 5 bricks per server - Volume info (complete below): Volume Name: workdata Type: Distributed-Replicate Number of Bricks: 5 x 3 =

Re: [Gluster-users] Performance Questions - not only small files

2021-05-14 Thread Schlick Rupert
Dear Felix, as requested, volume info, xfs_info, fstab. Volume Name: gfs_scratch Type: Replicate Volume ID: d99b6154-bf34-49d6-a06b-0e29bfc2a0fb Status: Started Snapshot Count: 0 Number of Bricks: 1 x 3 = 3 Transport-type: tcp Bricks: Brick1: server3:/data/glusterfs_scratch/gfs_scratch_brick1

Re: [Gluster-users] Performance Questions - not only small files

2021-05-14 Thread Felix Kölzow
Dear Rupert, can you provide gluster volume info volumeName and in addition the xfs_info  of your brick-mountpoints and furthermore cat /etc/fstab Regards, Felix Community Meeting Calendar: Schedule - Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC Bridge:
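The three diagnostics requested above can be gathered in one go. A minimal sketch, assuming the volume name and brick mountpoint that appear in the reply (`gfs_scratch`, `/data/glusterfs_scratch`):

```shell
# Collect the basic diagnostics requested above (run on one of the servers).
# Volume name and brick mountpoint are assumptions taken from the reply.
gluster volume info gfs_scratch      # volume topology and options
xfs_info /data/glusterfs_scratch     # XFS geometry (inode size, sunit/swidth)
cat /etc/fstab                       # how the bricks are mounted
```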

[Gluster-users] Performance Questions - not only small files

2021-05-14 Thread Schlick Rupert
Dear list, we have replicated gluster volumes much slower than the brick disks and I wonder if this is a configuration issue, a conceptual issue of our setup or really how slow gluster just is. The setup: * Three servers in a ring connected via IP over Infiniband, 100Gb/s on each link

Re: [Gluster-users] performance

2020-09-11 Thread Gionatan Danti
On 2020-09-11 01:03 Computerisms Corporation wrote: Hi Danti, the notes are not very verbose, but looks like the following lines were removed from their virtualization config: They also enabled hyperthreading, so having 12 "cores" instead of 6 now. Guessing that had a lot to do

Re: [Gluster-users] performance

2020-09-10 Thread Computerisms Corporation
Hi Danti, the notes are not very verbose, but looks like the following lines were removed from their virtualization config: They also enabled hyperthreading, so having 12 "cores" instead of 6 now. Guessing that had a lot to do with it... On 2020-09-04 8:20 a.m., Gionatan Danti

Re: [Gluster-users] performance

2020-09-06 Thread Strahil Nikolov
Seems that this time your email went to Yahoo's spam; I am still too lazy to get my own domain... It is good to know that they fixed your issues. Best Regards, Strahil Nikolov. On Friday, 4 September 2020 at 02:00:32 GMT+3, Computerisms Corporation wrote: Hi Strahil, For the

Re: [Gluster-users] performance

2020-09-04 Thread Gionatan Danti
On 2020-09-04 01:00 Computerisms Corporation wrote: For the sake of completeness I am reporting back that your suspicions seem to have been validated. I talked to the data center, they made some changes. we talked again some days later, and they made some more changes, and for several

Re: [Gluster-users] performance

2020-09-03 Thread Computerisms Corporation
Hi Strahil, For the sake of completeness I am reporting back that your suspicions seem to have been validated. I talked to the data center, they made some changes. we talked again some days later, and they made some more changes, and for several days now load average on both machines is

Re: [Gluster-users] performance

2020-08-21 Thread Strahil Nikolov
On 20 August 2020 at 3:46:41 GMT+03:00, Computerisms Corporation wrote: >Hi Strahil, > >so over the last two weeks, the system has been relatively stable. I >have powered off both servers at least once, for about 5 minutes each >time. server came up, auto-healed what it needed to, so all

Re: [Gluster-users] performance

2020-08-21 Thread Computerisms Corporation
Hi Strahil, You can use 'virt-what' binary to find if and what type of Virtualization is used. cool, did not know about that. trouble server: root@moogle:/# virt-what hyperv kvm good server: root@mooglian:/# virt-what kvm I have a suspicion you are ontop of Openstack (which uses

Re: [Gluster-users] performance

2020-08-19 Thread Computerisms Corporation
Hi Strahil, so over the last two weeks, the system has been relatively stable. I have powered off both servers at least once, for about 5 minutes each time. server came up, auto-healed what it needed to, so all of that part is working as expected. will answer things inline and follow with

Re: [Gluster-users] performance

2020-08-12 Thread Artem Russakovskii
Hmm, in our case of running gluster across Linode block storage (which itself runs inside Ceph, as I found out), the only thing that helped with the hangs so far was defragmenting xfs. I tried changing many things, including the scheduler to "none" and this performance.write-behind-window-size

Re: [Gluster-users] performance

2020-08-07 Thread Computerisms Corporation
Hi Artem and others, Happy to report the system has been relatively stable for the remainder of the week. I have one wordpress site that seems to get hung processes when someone logs in with an incorrect password. Since it is only one, and reliably reproducible, I am not sure if the issue

Re: [Gluster-users] performance

2020-08-05 Thread Strahil Nikolov
On 5 August 2020 at 4:53:34 GMT+03:00, Computerisms Corporation wrote: >Hi Strahil, > >thanks again for sticking with me on this. >> Hm... OK. I guess you can try 7.7 whenever it's possible. > >Acknowledged. > >>> Perhaps I am not understanding it correctly. I tried these >suggestions >>>

Re: [Gluster-users] performance

2020-08-05 Thread Artem Russakovskii
I'm very curious whether these improvements hold up over the next few days. Please report back. Sincerely, Artem -- Founder, Android Police, APK Mirror, Illogical Robot LLC beerpla.net | @ArtemR On Wed, Aug

Re: [Gluster-users] performance

2020-08-05 Thread Computerisms Corporation
Hi List, So, we just moved into a quieter time of the day, but maybe I just stumbled onto something.  I was trying to figure out if/how I could throw more RAM at the problem.  gluster docs says write behind is not a cache unless flush-behind is on.  So seems that is a way to throw ram to it? 
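The docs point referenced above (write-behind acts as a cache only when flush-behind is on) translates into two volume options. A hedged sketch; "myvol" is a placeholder volume name and the window size is an illustrative value, not a recommendation:

```shell
# Let write-behind buffer more in RAM by enabling flush-behind, as the
# gluster docs describe. "myvol" and the 16MB window are placeholders.
gluster volume set myvol performance.flush-behind on
gluster volume set myvol performance.write-behind-window-size 16MB
gluster volume get myvol performance.flush-behind   # verify the setting took
```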

Re: [Gluster-users] performance

2020-08-04 Thread Strahil Nikolov
On 4 August 2020 at 22:47:44 GMT+03:00, Computerisms Corporation wrote: >Hi Strahil, thanks for your response. > >>> >>> I have compiled gluster 7.6 from sources on both servers. >> >> There is a 7.7 version which is fixing some stuff. Why do you have >to compile it from source ? > >Because

Re: [Gluster-users] performance

2020-08-04 Thread Computerisms Corporation
Hi Strahil, thanks again for sticking with me on this. Hm... OK. I guess you can try 7.7 whenever it's possible. Acknowledged. Perhaps I am not understanding it correctly. I tried these suggestions before and it got worse, not better. so I have been operating under the assumption that

Re: [Gluster-users] performance

2020-08-04 Thread Computerisms Corporation
Hi Artem, would also like this recipe. If you have any comments on my answer to Strahil, would love to hear them... On 2020-08-03 9:42 p.m., Artem Russakovskii wrote: I tried putting all web files (specifically WordPress php and static files as well as various cache files) on gluster

Re: [Gluster-users] performance

2020-08-04 Thread Computerisms Corporation
Hi Strahil, thanks for your response. I have compiled gluster 7.6 from sources on both servers. There is a 7.7 version which is fixing some stuff. Why do you have to compile it from source ? Because I have often found with other stuff in the past compiling from source makes a bunch of

Re: [Gluster-users] performance

2020-08-04 Thread Strahil Nikolov
On 4 August 2020 at 6:01:17 GMT+03:00, Computerisms Corporation wrote: >Hi Gurus, > >I have been trying to wrap my head around performance improvements on >my >gluster setup, and I don't seem to be making any progress. I mean >forward progress. making it worse takes practically no effort

Re: [Gluster-users] performance

2020-08-03 Thread Artem Russakovskii
I tried putting all web files (specifically WordPress php and static files as well as various cache files) on gluster before, and the results were miserable on a busy site - our usual ~8-10 load quickly turned into 100+ and killed everything. I had to go back to running just the user uploads

[Gluster-users] performance

2020-08-03 Thread Computerisms Corporation
Hi Gurus, I have been trying to wrap my head around performance improvements on my gluster setup, and I don't seem to be making any progress. I mean forward progress. making it worse takes practically no effort at all. My gluster is distributed-replicated across 6 bricks and 2 servers,

Re: [Gluster-users] Performance tuning suggestions for nvme on aws (Strahil)

2020-01-05 Thread Mohit Agrawal
Hi, Thanks for sharing the output. Maybe in the case of NVMe there is no performance difference in SSL vs non-SSL. Can you please share the output of the command "gluster v info" and what client you are using to mount the volume? If you are using fuse, have you used any specific argument at the

Re: [Gluster-users] Performance tuning suggestions for nvme on aws (Strahil)

2020-01-05 Thread Michael Richardson
Hi Mohit and Strahil, Thank you for taking the time to share your advice. Strahil, unfortunately your message didn't come to my inbox, so I'm combining my reply to both yourself and Mohit. Mohit, I mentioned there was no performance difference between using SSL and not using SSL. I did try

Re: [Gluster-users] Performance tuning suggestions for nvme on aws (Strahil)

2020-01-05 Thread Mohit Agrawal
Hi, Along with the previous tuning suggested by Strahil, please configure "ssl.cipher-list" to AES128 for the specific volume to improve performance. As you mentioned, you have configured SSL on a volume and performance drops in the case of SSL. To improve it, please configure the AES128
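The cipher tuning suggested above can be sketched as a single volume option; "myvol" is a placeholder volume name, and the exact cipher string is an assumption (any valid OpenSSL cipher-list expression selecting AES128 suites should do):

```shell
# Restrict the TLS cipher list to 128-bit AES suites to cut crypto CPU cost.
# "myvol" is a placeholder; 'AES128' is an OpenSSL cipher-list keyword.
gluster volume set myvol ssl.cipher-list 'AES128'
# Remount clients / restart bricks afterwards so the new list takes effect.
```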

[Gluster-users] Performance tuning suggestions for nvme on aws

2020-01-04 Thread Michael Richardson
Hi all! I'm experimenting with GFS for the first time and have built a simple three-node cluster using AWS 'i3en' type instances. These instances provide raw nvme devices that are incredibly fast. What I'm finding in these tests is that gluster is offering only a fraction of the raw nvme performance

Re: [Gluster-users] Performance is falling rapidly when updating from v5.5 to v7.0

2019-11-06 Thread David Spisla
I did another test with inode_size on xfs bricks=1024 bytes, but it also had no effect. Here is the measurement (all values in MiB/s): 64KiB: 0.16, 1MiB: 2.52, 10MiB: 76.58. Beside that, I was not able to set the xattr trusted.io-stats-dump. I am wondering why it is not

Re: [Gluster-users] Performance is falling rapidly when updating from v5.5 to v7.0

2019-11-06 Thread RAFI KC
On 11/6/19 3:42 PM, David Spisla wrote: Hello Rafi, I tried to set the xattr via setfattr -n trusted.io-stats-dump -v '/tmp/iostat.log' /gluster/repositories/repo1/ but it had no effect. There is no such xattr via getfattr and no logfile. The command setxattr is not available. What I am

Re: [Gluster-users] Performance is falling rapidly when updating from v5.5 to v7.0

2019-11-06 Thread David Spisla
Hello Rafi, I tried to set the xattr via setfattr -n trusted.io-stats-dump -v '/tmp/iostat.log' /gluster/repositories/repo1/ but it had no effect. There is no such xattr via getfattr and no logfile. The command setxattr is not available. What am I doing wrong? By the way, you mean to increase
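A likely cause of the failed attempt above: trusted.io-stats-dump is a virtual xattr interpreted by the io-stats translator on a glusterfs mount, so it is set on the client mount point rather than on a brick directory. A hedged sketch, with /mnt/glusterfs as a placeholder mount:

```shell
# trusted.io-stats-dump is virtual: no xattr is stored, setting it just
# triggers the io-stats translator to dump its counters to the given file.
# /mnt/glusterfs is a placeholder for a FUSE mount of the volume.
setfattr -n trusted.io-stats-dump -v /tmp/iostat.log /mnt/glusterfs
cat /tmp/iostat.log    # stats appear on the node running the mount
```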

Re: [Gluster-users] Performance drop when upgrading from 3.8 to 6.5

2019-11-06 Thread Riccardo Murri
Dear Rafi, all, please find attached two profile files; both are profiling the same command: ``` time rsync -a $SRC root@172.23.187.207:/glusterfs ``` In both cases, the target is a Ubuntu 16.04 VM mounting a pure distributed GlusterFS 7 filesystem on `/glusterfs`. The GlusterFS 7 cluster is

Re: [Gluster-users] Performance drop when upgrading from 3.8 to 6.5

2019-11-05 Thread RAFI KC
On 11/5/19 4:53 PM, Riccardo Murri wrote: Is it possible for you to repeat the test by disabling ctime or increasing the inode size to a higher value say 1024? Sure! How do I disable ctime or increase the inode size? Would this suffice to disable `ctime`? sudo gluster volume set

Re: [Gluster-users] Performance is falling rapidly when updating from v5.5 to v7.0

2019-11-05 Thread RAFI KC
I will take a look at the profile info shared. Since there is a huge difference in the performance numbers between fuse and samba, it would be great if we can get the profile info of fuse (on v7). This will help to compare the number of calls for each fops. There should be some fops that samba

Re: [Gluster-users] Performance is falling rapidly when updating from v5.5 to v7.0

2019-11-05 Thread David Spisla
I did the test with Gluster 7.0 ctime disabled. But it had no effect (all values in MiB/s): 64KiB: 0.16, 1MiB: 2.60, 10MiB: 54.74. Attached there is now the complete profile file also with the results from the last test. I will not repeat it with a higher inode size because I don't

Re: [Gluster-users] Performance is falling rapidly when updating from v5.5 to v7.0

2019-11-05 Thread David Spisla
On Tue, 5 Nov 2019 at 12:06, RAFI KC wrote: > > On 11/4/19 8:46 PM, David Spisla wrote: > > Dear Gluster Community, > > I also have an issue concerning performance. The last days I updated our > test cluster from GlusterFS v5.5 to v7.0. The setup in general: > > 2 HP DL380 Servers with

Re: [Gluster-users] Performance drop when upgrading from 3.8 to 6.5

2019-11-05 Thread Riccardo Murri
> > Is it possible for you to repeat the test by disabling ctime or increasing > > the inode size to a higher value say 1024? > > Sure! How do I disable ctime or increase the inode size? Would this suffice to disable `ctime`? sudo gluster volume set glusterfs ctime off Can it be done on a
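The two experiments asked about above can be sketched as follows; the volume name is a placeholder, and note that the XFS inode size is fixed at mkfs time, so changing it means reformatting the brick:

```shell
# 1) Disable ctime on the volume ("myvol" is a placeholder name):
gluster volume set myvol features.ctime off

# 2) A larger inode size requires re-creating the brick filesystem.
#    WARNING: destroys all data -- only for an empty test brick.
#    /dev/sdX1 is a placeholder device.
mkfs.xfs -f -i size=1024 /dev/sdX1
```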

Re: [Gluster-users] Performance drop when upgrading from 3.8 to 6.5

2019-11-05 Thread Riccardo Murri
Hello Rafi, many thanks for looking into this! > Is it possible for you to repeat the test by disabling ctime or increasing > the inode size to a higher value say 1024? Sure! How do I disable ctime or increase the inode size? Ciao, R

Re: [Gluster-users] Performance is falling rapidly when updating from v5.5 to v7.0

2019-11-05 Thread RAFI KC
On 11/4/19 8:46 PM, David Spisla wrote: Dear Gluster Community, I also have an issue concerning performance. The last days I updated our test cluster from GlusterFS v5.5 to v7.0. The setup in general: 2 HP DL380 Servers with 10Gbit NICs, 1 Distribute-Replica 2 Volume with 2 Replica Pairs.

Re: [Gluster-users] Performance drop when upgrading from 3.8 to 6.5

2019-11-05 Thread RAFI KC
On 11/4/19 2:41 PM, Riccardo Murri wrote: Hello Amar, Can you please check the profile info [1] ? That may give some hints. I am attaching the output of `sudo gluster volume profile info` as a text file to preserve formatting. This covers the time from Friday night to Monday morning;

[Gluster-users] Performance is falling rapidly when updating from v5.5 to v7.0

2019-11-04 Thread David Spisla
Dear Gluster Community, I also have an issue concerning performance. The last days I updated our test cluster from GlusterFS v5.5 to v7.0. The setup in general: 2 HP DL380 Servers with 10Gbit NICs, 1 Distribute-Replica 2 Volume with 2 Replica Pairs. Client is SMB Samba (access via vfs_glusterfs)

Re: [Gluster-users] Performance drop when upgrading from 3.8 to 6.5

2019-11-04 Thread Riccardo Murri
Hello Strahil, > You can set your mounts with 'noatime,nodiratime' options for better > performance. Thanks for the suggestion! I'll try that eventually, but I don't think `noatime` will make much difference on a write-mostly workload. Thanks, R

Re: [Gluster-users] Performance drop when upgrading from 3.8 to 6.5

2019-11-04 Thread Riccardo Murri
Hello Amar, > Can you please check the profile info [1] ? That may give some hints. I am attaching the output of `sudo gluster volume profile info` as a text file to preserve formatting. This covers the time from Friday night to Monday morning; during this time the cluster has been the target

Re: [Gluster-users] Performance drop when upgrading from 3.8 to 6.5

2019-11-02 Thread Strahil
Hm... This seems to be cluster-wide effect than a single brick. In order to make things faster, can you remount (mount -o remount,noatime,nodiratime /gluster_brick/) on all bricks in the same volume and take the test again ? I think I saw your gluster bricks are mounted without these options.
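The remount suggested above, sketched as commands; the brick path is a placeholder and the fstab line is only an example of how to persist the options across reboots:

```shell
# Remount a brick filesystem without atime updates (placeholder path):
mount -o remount,noatime,nodiratime /gluster_brick
grep gluster_brick /proc/mounts   # confirm the new options are active
# Example fstab entry to make it permanent (device is a placeholder):
#   /dev/sdb1  /gluster_brick  xfs  noatime,nodiratime  0 0
```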

Re: [Gluster-users] Performance drop when upgrading from 3.8 to 6.5

2019-11-01 Thread Riccardo Murri
Dear Strahil, > Have you noticed if slowness is only when accessing the files from > specific node ? I am copying a large set of image files into the GlusterFS volume -- slowness is on the aggregated performance (e.g., it takes ~300 minutes to copy 376GB worth of files). Given the high

Re: [Gluster-users] Performance drop when upgrading from 3.8 to 6.5

2019-11-01 Thread Riccardo Murri
Dear Amar, > Can you please check the profile info [1] ? That may give some hints. I have started profiling, will check what info has been collected on Monday. Many thanks for the suggestion! Riccardo

Re: [Gluster-users] Performance drop when upgrading from 3.8 to 6.5

2019-11-01 Thread Amar Tumballi
Hi Riccardo, Can you please check the profile info [1] ? That may give some hints. [1] - https://docs.gluster.org/en/latest/Administrator%20Guide/Monitoring%20Workload/ ? On Fri, 1 Nov, 2019, 9:55 AM Riccardo Murri, wrote: > Hello all, > > I have done some further testing and found out
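The profiling workflow behind the suggestion above: profiling must be started first, then "profile info" reports per-brick latency and fop counts. A sketch with a placeholder volume name; details are in the linked Monitoring Workload guide:

```shell
gluster volume profile myvol start   # begin collecting per-fop stats
# ... run the slow workload (e.g. the rsync copy) ...
gluster volume profile myvol info    # cumulative latency per fop, per brick
gluster volume profile myvol stop    # stop collecting when done
```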

Re: [Gluster-users] Performance drop when upgrading from 3.8 to 6.5

2019-11-01 Thread Strahil
I'm using replicated volumes. In your case, you got a distributed volume. Have you noticed if slowness is only when accessing the files from a specific node? Best Regards, Strahil Nikolov. On Nov 1, 2019 17:28, Riccardo Murri wrote: > > Hello Strahil > > > What options do you use in your

Re: [Gluster-users] Performance drop when upgrading from 3.8 to 6.5

2019-11-01 Thread Riccardo Murri
Hello Strahil > What options do you use in your cluster? I'm not sure what exact info you would like to see? Here's how clients mount the GlusterFS volume: ``` $ fgrep gluster /proc/mounts tp-glusterfs5:/glusterfs /net/glusterfs fuse.glusterfs

Re: [Gluster-users] Performance drop when upgrading from 3.8 to 6.5

2019-11-01 Thread Strahil
What options do you use in your cluster? Best Regards, Strahil Nikolov. On Nov 1, 2019 06:24, Riccardo Murri wrote: > > Hello all, > > I have done some further testing and found out that I get the bad > performance with a freshly-installed cluster running 6.6. Also the > performance drop is

Re: [Gluster-users] Performance drop when upgrading from 3.8 to 6.5

2019-10-31 Thread Riccardo Murri
Hello all, I have done some further testing and found out that I get the bad performance with a freshly-installed cluster running 6.6. Also the performance drop is there with plain `rsync` into the GlusterFS mountpoint, so SAMBA plays no role in it. In other words, for my installations,

Re: [Gluster-users] Performance drop when upgrading from 3.8 to 6.5

2019-10-29 Thread Riccardo Murri
> In previous discussions it was confirmed by others that v5.5 is a little bit > slower than v3.12 , but I think that most of those issues were fixed in v6 . > What was the exact version you have? 6.5 according to the package version; op-version is 6. Thanks, Riccardo Community

Re: [Gluster-users] Performance drop when upgrading from 3.8 to 6.5

2019-10-29 Thread Strahil
In previous discussions it was confirmed by others that v5.5 is a little bit slower than v3.12, but I think that most of those issues were fixed in v6. What was the exact version you have? Best Regards, Strahil Nikolov. On Oct 29, 2019 12:50, Riccardo Murri wrote: > > Hello Anoop, > > many

Re: [Gluster-users] Performance drop when upgrading from 3.8 to 6.5

2019-10-29 Thread Strahil
Hi Riccardo, You can set your mounts with 'noatime,nodiratime' options for better performance. Best Regards, Strahil Nikolov. On Oct 29, 2019 12:50, Riccardo Murri wrote: > > Hello Anoop, > > many thanks for your fast reply! My comments inline below: > > > > > [1]: I have tried both the

Re: [Gluster-users] Performance drop when upgrading from 3.8 to 6.5

2019-10-29 Thread Riccardo Murri
Hello Anoop, many thanks for your fast reply! My comments inline below: > > [1]: I have tried both the config where SAMBA 4.8 is using the > > vfs_glusterfs.so backend, and the one where `smbd` is just writing to > > a locally-mounted directory. Doesn't seem to make a difference. > > Samba

Re: [Gluster-users] Performance drop when upgrading from 3.8 to 6.5

2019-10-29 Thread Anoop C S
On Tue, 2019-10-29 at 10:59 +0100, Riccardo Murri wrote: > Hello, > > I recently upgraded[2] our servers from GlusterFS 3.8 (old GlusterFS > repo for Ubuntu 16.04) to 6.0 (gotten from the GlusterFS PPA for > Ubuntu 16.04 "xenial"). > > The sustained write performance nearly dropped to half it

[Gluster-users] Performance drop when upgrading from 3.8 to 6.5

2019-10-29 Thread Riccardo Murri
Hello, I recently upgraded[2] our servers from GlusterFS 3.8 (old GlusterFS repo for Ubuntu 16.04) to 6.0 (gotten from the GlusterFS PPA for Ubuntu 16.04 "xenial"). The sustained write performance nearly dropped to half what it was before. We copy a large number (a few 10'000s) of image files (each 2

Re: [Gluster-users] performance - what can I expect

2019-05-02 Thread Amar Tumballi Suryanarayan
On Thu, May 2, 2019 at 1:21 PM Pascal Suter wrote: > Hi Amar > > thanks for rolling this back up. Actually i have done some more > benchmarking and fiddled with the config to finally reach a performance > figure i could live with. I now can squeeze about 3GB/s out of that server > which seems to

Re: [Gluster-users] performance - what can I expect

2019-05-02 Thread Pascal Suter
Hi Amar thanks for rolling this back up. Actually i have done some more benchmarking and fiddled with the config to finally reach a performance figure i could live with. I now can squeeze about 3GB/s out of that server which seems to be close to what i can get out of its network uplink

Re: [Gluster-users] performance - what can I expect

2019-05-01 Thread Amar Tumballi Suryanarayan
Hi Pascal, Sorry for complete delay in this one. And thanks for testing out in different scenarios. Few questions before others can have a look and advice you. 1. What is the volume info output ? 2. Do you see any concerning logs in glusterfs log files? 3. Please use `gluster volume profile`

Re: [Gluster-users] performance - what can I expect

2019-04-10 Thread Pascal Suter
i continued my testing with 5 clients, all attached over 100Gbit/s omni-path via IP over IB. when i run the same iozone benchmark across all 5 clients where gluster is mounted using the glusterfs client, i get an aggregated write throughput of only about 400MB/s and an aggregated read

Re: [Gluster-users] performance - what can I expect

2019-04-04 Thread Pascal Suter
I just noticed i left the most important parameters out :) here's the write command with filesize and recordsize in it as well :) ./iozone -i 0 -t 1 -F /mnt/gluster/storage/thread1 -+n -c -C -e -I -w -+S 0 -s 200G -r 16384k also i ran the benchmark without direct_io which resulted in an even

[Gluster-users] performance - what can I expect

2019-04-03 Thread Pascal Suter
Hi all I am currently testing gluster on a single server. I have three bricks, each a hardware RAID6 volume with thin provisioned LVM that was aligned to the RAID and then formatted with xfs. i've created a distributed volume so that entire files get distributed across my three bricks.

Re: [Gluster-users] Performance issue, need guidance

2019-01-22 Thread Strahil
I have just checked the archive and it seems that the diagram is missing, so I'm adding a URL link to it: https://drive.google.com/file/d/1SiW21ASPXHRAEuE_jZ50R3FoO-NcnFqT/view?usp=sharing My version is 3.12.15. Best Regards, Strahil Nikolov

[Gluster-users] Performance question: Replicated with RAID0, or Distributed with RAID5?

2018-06-29 Thread Ernie Dunbar
Hi everyone. I have a question about performance, hoping that perhaps someone has already tested these scenarios so that I don't have to. In order to maximize a Gluster array's performance, which is faster: Gluster servers with 6 SAS disks each set up in a RAID0 configuration, letting Gluster

Re: [Gluster-users] Performance drop from 3.8 to 3.10

2017-09-22 Thread WK
Maybe next week we can all explore this. I'm on 3.10.5 and I don't have any complaints. Actually we are quite happy with the new clusters,  but these were green field installations that were built and then replaced our old 3.4 stuff. So we are still really enjoying the sharding and arbiter

Re: [Gluster-users] Performance drop from 3.8 to 3.10

2017-09-22 Thread Lindsay Mathieson
On 22/09/2017 1:21 PM, Krutika Dhananjay wrote: Could you disable cluster.eager-lock and try again? Thanks, but didn't seem to make any difference. Can't test anymore at the moment as down a server that hung on reboot :( -- Lindsay Mathieson

Re: [Gluster-users] Performance drop from 3.8 to 3.10

2017-09-22 Thread Krutika Dhananjay
Could you disable cluster.eager-lock and try again? -Krutika On Thu, Sep 21, 2017 at 6:31 PM, Lindsay Mathieson < lindsay.mathie...@gmail.com> wrote: > Upgraded recently from 3.8.15 to 3.10.5 and have seen a fairly substantial > drop in read/write perfomance > > env: > > - 3 node, replica 3

[Gluster-users] Performance drop from 3.8 to 3.10

2017-09-21 Thread Lindsay Mathieson
Upgraded recently from 3.8.15 to 3.10.5 and have seen a fairly substantial drop in read/write performance env: - 3 node, replica 3 cluster - Private dedicated Network: 1Gx3, bond: balance-alb - was able to down the volume for the upgrade and reboot each node - Usage: VM Hosting (qemu) -

[Gluster-users] Performance testing with sysbench...

2017-08-22 Thread Krist van Besien
Hi all, I'm doing some performance tests... If I test a simple sequential write using dd I get a throughput of about 550 Mb/s. When I do a sequential write test using sysbench this drops to about 200. Is this due to the way sysbench tests? Or has in this case the performance of sysbench itself

[Gluster-users] Performance Translators Documentation

2017-07-11 Thread Christopher Schmidt
Hi, I had some issues (org.apache.lucene.index.CorruptIndexException) with Lucene (resp. ElasticSearch) working on a GlusterFS volume and Kubernetes. For testing I switched off all performance translators... And I wonder if there is documentation somewhere on what they are and what they do?

[Gluster-users] Performance testing

2017-04-03 Thread Krist van Besien
Hi All, I built a Gluster 3.8.4 (RHGS 3.2) cluster for a customer, and I am having some issues demonstrating that it performs well. The customer compares it with his old NFS-based NAS, and runs FIO to test workloads. What I notice is that FIO throughput is only +-20Mb/s, which is not a lot.

[Gluster-users] Performance optimization

2017-03-17 Thread Gandalf Corvotempesta
Workload: VM hosting with sharding enabled, replica 3 (with or without distribution, see below) Which configuration will perform better: a) 1 ZFS disk per brick, 1 brick per server. 1 disk for each server. b) 1 ZFS mirror per brick, 1 brick per server. 1 disk for each server. c) 1 ZFS disk per

Re: [Gluster-users] Performance testing striped 4 volume

2017-01-05 Thread Cedric Lemarchand
It could be some extended attributes that still exist on the brick{1..4} folders; you could either remove them with attr or simply remove/recreate the folders. Cheers, > On 5 Jan 2017, at 01:23, Zack Boll wrote: > > In performance testing a striped 4 volume, I appeared to have

Re: [Gluster-users] Performance testing striped 4 volume

2017-01-04 Thread Karan Sandha
Hi Zack, As the bricks had already been used before, gluster doesn't allow creating a volume with the same brick path until you use "force" at the end of the command. As you are doing performance testing, I would recommend cleaning the bricks and issuing the same command. sudo gluster volume
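"Cleaning the bricks" usually means removing the gluster metadata that marks a directory as an ex-brick. A hedged sketch; the paths are placeholders and this assumes the old volume's data is disposable:

```shell
# Strip gluster's brick metadata so the paths can be reused without "force".
# Placeholder paths; destroys the old volume's bookkeeping on these bricks.
for b in /data/brick1 /data/brick2; do
  setfattr -x trusted.glusterfs.volume-id "$b" 2>/dev/null
  setfattr -x trusted.gfid "$b" 2>/dev/null
  rm -rf "$b/.glusterfs"
done
```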

[Gluster-users] Performance testing striped 4 volume

2017-01-04 Thread Zack Boll
In performance testing a striped 4 volume, I appeared to have crashed glusterfs using version 3.8.7 on Ubuntu 16.04. I then stopped the volume and deleted it. I am now having trouble creating a new volume, below is output sudo gluster volume create gluster1 transport tcp

Re: [Gluster-users] Performance

2016-10-31 Thread Lindsay Mathieson
On 26/10/2016 2:50 AM, Service Mail wrote: 3x zfs raidz2 servers with a single gluster 3.8 replicated volume across a 10G network SSD slog? They make a big difference for sync writes. If you have no slog, try with zfs "sync=disabled" on all three pools. Not recommended for production, but

Re: [Gluster-users] Performance

2016-10-31 Thread Alex Crow
I last used GlusterFS around early 3.6. I could get great results for streaming large files. I was seeing up to 700MB/s with a DD test. Small file/metadata access wasn't right for our use, but for VMs it should be fine with a bit of tuning. On 31 October 2016 15:33:06 GMT+00:00, Joe Julian

Re: [Gluster-users] Performance

2016-10-31 Thread Joe Julian
On 10/31/2016 08:29 AM, Alastair Neil wrote: What version of Gluster? Are you using glusterfs or nfs mount? Any other traffic on the network, is the cluster quiescent apart from your dd test? What type of volume? It does seem slow. I have a three server cluster, using straight xfs over

Re: [Gluster-users] Performance

2016-10-31 Thread Alastair Neil
What version of Gluster? Are you using glusterfs or nfs mount? Any other traffic on the network, is the cluster quiescent apart from your dd test? It does seem slow. I have a three server cluster, using straight xfs over 10G with Gluster 3.8 and glusterfs mounts and I see: [root@sb-c

[Gluster-users] Performance

2016-10-28 Thread Service Mail
Hello, I have the following setup: 3x zfs raidz2 servers with a single gluster 3.8 replicated volume across a 10G network Everything is working fine however performance looks very poor to me: root@Client:/test_mount# sync; dd if=/dev/zero of=nfsp2 bs=1M count=1024; sync 1024+0 records in
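The dd test in the report can be wrapped into a small repeatable script. TARGET is a placeholder: pointed at the gluster mount it measures the volume; the local default below only measures the local disk. conv=fsync makes dd flush before reporting, so the figure includes commit time:

```shell
# Repeatable sequential-write test, modeled on the dd run quoted above.
# TARGET is a placeholder; set it to a GlusterFS mount to test the volume.
TARGET=${TARGET:-/tmp/gluster-bench}
mkdir -p "$TARGET"
# Write 64 MiB in 1 MiB blocks; conv=fsync flushes data before dd reports.
dd if=/dev/zero of="$TARGET/ddtest" bs=1M count=64 conv=fsync 2>&1 | tail -n 1
rm -f "$TARGET/ddtest"
```

Running it several times and comparing against the same script on a raw brick path gives a rough idea of the gluster overhead.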

[Gluster-users] Performance gap between clients

2016-10-23 Thread Pavel Szalbot
Hello everybody, I am experiencing peculiar performance difference on my client nodes. One node is blank Ubuntu (Xenial), second is also Xenial with a web server (nginx) that serves media files stored on disk image that is on Gluster volume. Both clients are 3.8.5, 10Gbe NICs used for Gluster

Re: [Gluster-users] performance issue in gluster volume

2016-05-24 Thread Anuradha Talur
- Original Message - > From: "Ramavtar" <ramavtar.rath...@everdata.com> > To: gluster-users@gluster.org > Sent: Friday, May 20, 2016 11:12:43 PM > Subject: [Gluster-users] performance issue in gluster volume > > Hi Ravi, > > I am using gluster

[Gluster-users] performance issue in gluster volume

2016-05-23 Thread Ramavtar
Hi Ravi, I am using a gluster volume holding 2.7 TB of data (mp4 and jpeg files) served by an nginx webserver. I am facing a performance issue with the gluster volume; please help. Please find the gluster details: [root@webnode3 ~]# gluster --version glusterfs 3.7.11 built on Apr 27 2016

[Gluster-users] Performance tuning: How do I measure the performance of IMAP-on-Gluster?

2016-04-21 Thread Ernie Dunbar
Hi everyone. My Gluster cluster is finally behaving fairly well, CPU, disk and network performance has returned to a stable state, and I'd like to start doing some performance tuning. To do that though, we need to have some metrics to see if the changes we make, are making any difference at

[Gluster-users] Performance with Gluster+Fuse is 60x slower then Gluster+NFS ?

2016-02-17 Thread Van Renterghem Stijn
Hi, I have setup a server with a new installation of Gluster. The volume type is 'Replicate'. 1) I mounted the volume with Fuse IP1:/app /srv/data glusterfs defaults,_netdev,backupvolfile-server=IP2,fetch-attempts=2 0 0 When I start my application, it takes 2h

Re: [Gluster-users] Performance issues with one node

2015-07-30 Thread Mathieu Chateau
Hello, sorry, operating-version is a settable option like the others; you just need to find the right name: op-version. To read it: gluster volume get all cluster.op-version. Then to set the version (global to all volumes): gluster volume set all cluster.op-version 30702. Cordialement, Mathieu CHATEAU http://www.lotp.fr
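Put together, the two commands from the reply look like this; 30702 is the value from the thread (the 3.7.2-era op-version) and must match the oldest gluster version in your cluster:

```shell
# Query the cluster-wide operating version.
gluster volume get all cluster.op-version

# Raise it (applies globally, to all volumes). An op-version can be
# raised but not lowered, so confirm all peers support it first.
gluster volume set all cluster.op-version 30702
```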

Re: [Gluster-users] Performance issues with one node

2015-07-28 Thread Mathieu Chateau
Hello, thanks for this guidance, I wasn't aware of it! Is there any doc that describes all settings and their values? For example, I can't find documentation for cluster.lookup-optimize. Cordialement, Mathieu CHATEAU http://www.lotp.fr 2015-07-27 14:58 GMT+02:00 André Bauer aba...@magix.net: Some more infos:
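The closest thing to built-in documentation for option names such as cluster.lookup-optimize is the CLI's own help listing, which prints each settable option with its default and a short description (assuming a reasonably recent glusterfs CLI):

```shell
# List all settable volume options with defaults and descriptions.
gluster volume set help

# Narrow it down to the option in question.
gluster volume set help | grep -A 3 lookup-optimize
```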

Re: [Gluster-users] Performance issues with one node

2015-07-24 Thread Mathieu Chateau
Hello, gluster performance is not good with large numbers of small files. Recent versions do a better job with them, but not yet what I would like. As you are starting with gluster on an existing architecture, you should first set up a lab to learn about it; otherwise you will learn the hard way.

[Gluster-users] Performance issues with one node

2015-07-24 Thread John Kennedy
I am new to Gluster and have not found anything useful from my friend Google. I have not dealt with physical hardware in a few years (my last few jobs have been VM- and AWS-based). I inherited a 4-node gluster configuration. There are 2 bricks: one is 9TB, the other 11TB. The 11TB brick has a

Re: [Gluster-users] performance tuning - list of available options?

2015-05-05 Thread Vijay Bellur
On 05/05/2015 04:34 PM, Kingsley wrote: On Tue, 2015-05-05 at 15:08 +0530, Vijay Bellur wrote: [snip] I have seen this before and it primarily seems to be related to the readdir calls done by git clone. Turning on these options might help to some extent: gluster volume set volname

Re: [Gluster-users] performance tuning - list of available options?

2015-05-05 Thread Vijay Bellur
On 05/05/2015 06:39 PM, Mohammed Rafi K C wrote: On 05/05/2015 05:24 PM, Vijay Bellur wrote: On 05/05/2015 04:34 PM, Kingsley wrote: On Tue, 2015-05-05 at 15:08 +0530, Vijay Bellur wrote: [snip] I have seen this before and it primarily seems to be related to the readdir calls done by git

Re: [Gluster-users] performance tuning - list of available options?

2015-05-05 Thread Mohammed Rafi K C
On 05/05/2015 05:24 PM, Vijay Bellur wrote: On 05/05/2015 04:34 PM, Kingsley wrote: On Tue, 2015-05-05 at 15:08 +0530, Vijay Bellur wrote: [snip] I have seen this before and it primarily seems to be related to the readdir calls done by git clone. Turning on these options might help to

Re: [Gluster-users] performance tuning - list of available options?

2015-05-05 Thread Ben Turner
- Original Message - From: Mohammed Rafi K C rkavu...@redhat.com To: Vijay Bellur vbel...@redhat.com, Kingsley glus...@gluster.dogwind.com, gluster-users@gluster.org, Raghavendra Gowdappa rgowd...@redhat.com Sent: Tuesday, May 5, 2015 9:09:31 AM Subject: Re: [Gluster-users

[Gluster-users] performance tuning - list of available options?

2015-05-05 Thread Kingsley
On Tue, 2015-05-05 at 15:08 +0530, Vijay Bellur wrote: [snip] I have seen this before and it primarily seems to be related to the readdir calls done by git clone. Turning on these options might help to some extent: gluster volume set volname performance.readdir-ahead on gluster volume
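The first of the options suggested in the reply (the quote is truncated after it) is applied per volume like this; `volname` is the placeholder used in the thread itself:

```shell
# Enable readdir-ahead, which prefetches directory entries and helps
# readdir-heavy workloads such as git clone.
gluster volume set volname performance.readdir-ahead on

# Verify the setting took effect.
gluster volume get volname performance.readdir-ahead
```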

[Gluster-users] Performance and blocksize

2015-04-14 Thread Gregor Burck
Hi, I tested different blocksizes on a local mount. With dd and bs=1M the performance is much better than with bs=1k; I read here that a too-small blocksize is poison for performance... I plan to create a gluster cluster for virtual image files; what would be the best blocksize for the virtual
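The blocksize comparison described above can be run side by side; both commands below write the same 4 MiB total, so the reported rates differ only because of per-write syscall overhead (paths are throwaway temporaries):

```shell
# Same total data (4 MiB), different block sizes.
dd if=/dev/zero of=/tmp/bs_small bs=1k count=4096 conv=fsync   # 4096 writes of 1 KiB
dd if=/dev/zero of=/tmp/bs_large bs=1M count=4    conv=fsync   # 4 writes of 1 MiB
rm -f /tmp/bs_small /tmp/bs_large
```

On a network filesystem such as gluster the gap widens further, because each small write can incur a round trip; this is why large block sizes suit VM image workloads.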
