Well, if you are addressing me, that was the point of my post re the
original poster's complaint.
If his chosen test gets lousy or inconsistent results on non-Gluster
setups, then it's hard to complain about Gluster absent the known Gluster
issues (i.e. network bandwidth, FUSE context switching, …)
I don't know what you are trying to test, but I'm sure this test doesn't show
anything meaningful.
Have you tested with your apps' workload?
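For what it's worth, a single sequential dd stream rarely resembles an application's I/O pattern. Below is a crude random-small-write sketch (the file path and sizes are illustrative; a real benchmark tool such as fio measures this far better):

```shell
# Preallocate a 16 MB scratch file (illustrative path).
f=/tmp/randwrite.tmp
dd if=/dev/zero of="$f" bs=1M count=16 2>/dev/null
# 50 synced 4k writes at random offsets inside the file, the way many
# databases write; conv=notrunc keeps the file size unchanged.
for i in $(seq 1 50); do
    dd if=/dev/zero of="$f" bs=4k count=1 \
       seek=$((RANDOM % 4096)) conv=notrunc oflag=sync 2>/dev/null
done
echo "done: $(stat -c %s "$f") bytes"
rm -f "$f"
```

Each 4k write here pays the full sync round-trip, which is usually where a replica volume hurts most.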
I have done your test and I get approx. 20 MB/s, but I can assure you that the
performance is way better in my VMs.
Best Regards,
Strahil Nikolov

On Jul 5, …
On 7/4/2019 2:28 AM, Vladimir Melnik wrote:
So, the disk is OK and the network is OK, I'm 100% sure.
Seems to be a GlusterFS-related issue. Either something needs to be
tweaked, or it's normal performance for a replica-3 cluster.
There is more to it than Gluster on that particular test.
So your glusterfs is virtual...
I think Red Hat mentions tuning for VMs on Gluster
(https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.1/html/configuring_red_hat_enterprise_virtualization_with_red_hat_gluster_storage/optimizing_virtual_machines_on_red_hat_storage_volumes).
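If it helps, those VM-oriented tunings ship with Gluster as a predefined option group, so they can be applied in one step (a configuration sketch; VOLNAME is a placeholder for your volume name):

```shell
# Applies the "virt" profile, a bundle of volume options shipped with
# glusterd for hosting VM images; VOLNAME is a placeholder.
gluster volume set VOLNAME group virt
```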
All 4 virtual machines working as nodes of the cluster are located on
the same physical server. The server has 6 SSD-modules and a
RAID-controller with a BBU. RAID level is 10, write-back cache is
enabled. Moreover, each node of the GlusterFS cluster shows normal
performance when it writes to the …
I think it's related to the sync type of oflag.
Do you have a RAID controller on each brick, to immediately take the data into
the cache?
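To illustrate the point about the sync flag, a minimal side-by-side (paths illustrative): oflag=sync flushes after every block, while conv=fsync flushes once at the end, which is closer to how most applications actually write:

```shell
# oflag=sync: a synchronous flush after EVERY 1M block; on a replica
# volume each block must be acknowledged before the next one starts.
dd if=/dev/zero of=/tmp/sync.tmp bs=1M count=10 oflag=sync 2>&1 | grep copied
# conv=fsync: a single flush at the very end of the 10 MB write.
dd if=/dev/zero of=/tmp/fsync.tmp bs=1M count=10 conv=fsync 2>&1 | grep copied
rm -f /tmp/sync.tmp /tmp/fsync.tmp
```

On a replica volume the gap between the two reported rates is typically large, because only the first variant pays a network round-trip per block.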
Best Regards,
Strahil Nikolov

On Jul 3, 2019 23:15, Vladimir Melnik wrote:
>
> Indeed, I wouldn't be surprised if I had around 80-100 MB/s, but 10-15
> MB/s
OK, I tweaked the virtualization parameters and now I have ~10 Gbit/s
between all the nodes.
$ iperf3 -c 10.13.1.16
Connecting to host 10.13.1.16, port 5201
[ 4] local 10.13.1.17 port 47242 connected to 10.13.1.16 port 5201
[ ID] Interval Transfer Bandwidth Retr Cwnd
[ 4] …
Yeah, 10 Gbps is affordable these days, even 25 Gbps! Wouldn't go lower than
10 Gbps.
On Jul 3, 2019, 16:59, Marcus Schopen wrote:
Hi,
On Wednesday, 03.07.2019 at 15:16 -0400, Dmitry Filonov wrote:
> Well, if your network is limited to 100MB/s then it doesn't matter if
> storage is capable of doing 300+MB/s.
> But 15 MB/s is still way less than 100 MB/s
What network is recommended for the backend, 10 Gigabit or better?
Indeed, I wouldn't be surprised if I had around 80-100 MB/s, but 10-15
MB/s is really low. :-(
Even when I mount a filesystem on the same GlusterFS node, I have the
following result:
10485760 bytes (10 MB) copied, 0.409856 s, 25.6 MB/s
10485760 bytes (10 MB) copied, 0.38967 s, 26.9 MB/s
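As a sanity check, dd's reported rate is just bytes over seconds in decimal megabytes; for the first run above:

```shell
# 10485760 bytes in 0.409856 s, expressed in decimal MB/s as dd does:
awk 'BEGIN { printf "%.1f MB/s\n", 10485760 / 0.409856 / 1000000 }'
# → 25.6 MB/s
```

So the reported figures are internally consistent; the bottleneck is real, not a reporting artifact.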
Well, if your network is limited to 100MB/s then it doesn't matter if
storage is capable of doing 300+MB/s.
But 15 MB/s is still way less than 100 MB/s
P.S. I just tried on my Gluster and found out that I'm getting ~15 MB/s on a
replica 3 volume on SSDs and... 2 MB/s on a replica 3 volume on HDDs.
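Those numbers fit the rough arithmetic for a FUSE client on a replica 3 volume, which sends every write to all three bricks itself (the figures below are illustrative, assuming a 1 GbE client uplink and no arbiter):

```shell
uplink_mbit=1000                 # assumed client uplink: 1 GbE
replicas=3                       # replica 3: a full copy to every brick
wire_MBs=$((uplink_mbit / 8))    # ~125 MB/s of raw wire bandwidth
echo "rough replica-$replicas write ceiling: $((wire_MBs / replicas)) MB/s"
```

Sync-heavy tests like oflag=sync land well below even that ceiling, because each block additionally waits for a round-trip to every replica.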
Thank you, I tried to do that.
Created a new volume:
$ gluster volume create storage2 \
replica 3 \
arbiter 1 \
transport tcp \
gluster1.k8s.maitre-d.tucha.ua:/mnt/storage2/brick1 \
gluster2.k8s.maitre-d.tucha.ua:/mnt/storage2/brick2 \
…
Thank you, it helped a little:
$ for i in {1..5}; do { dd if=/dev/zero of=/mnt/glusterfs1/test.tmp bs=1M
count=10 oflag=sync; rm -f /mnt/glusterfs1/test.tmp; } done 2>&1 | grep copied
10485760 bytes (10 MB) copied, 0.738968 s, 14.2 MB/s
10485760 bytes (10 MB) copied, 0.725296 s, 14.5 MB/s
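One more knob worth trying: a larger block size amortizes the per-write sync round-trip. An illustrative variation of the same loop (written to /tmp here just to show the shape; point it at the Gluster mount for a real measurement):

```shell
for bs in 1M 4M; do
    # same test as above, only the block size varies; fewer, larger
    # blocks mean fewer synchronous flushes for the same total data
    dd if=/dev/zero of=/tmp/test.tmp bs=$bs count=4 oflag=sync 2>&1 | grep copied
    rm -f /tmp/test.tmp
done
```

On a Gluster mount the 4M run typically reports noticeably higher MB/s, since it pays a quarter as many sync round-trips.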