I think it's related to the synchronous writes forced by oflag=sync.
Do you have a RAID controller on each brick that can immediately take the data into
its write cache?
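To see why oflag=sync dominates the numbers in this thread, here is a minimal local sketch (not Gluster-specific): with oflag=sync, dd waits for each block to reach stable storage, so every 1 MiB write pays the full flush cost; on a replicated Gluster volume that cost also includes the network hop to every brick.

```shell
#!/bin/sh
# Compare buffered vs. synchronous writes with dd on a local file.
TMP=$(mktemp)

# Buffered write: data may land only in the page cache, so this is fast.
dd if=/dev/zero of="$TMP" bs=1M count=10 2>&1 | grep copied

# Synchronous write: each 1 MiB block is flushed before the next starts.
dd if=/dev/zero of="$TMP" bs=1M count=10 oflag=sync 2>&1 | grep copied

rm -f "$TMP"
```

A controller with a battery-backed write cache can acknowledge the flush from cache, which is why the question above matters.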
Best Regards,
Strahil Nikolov

On Jul 3, 2019 23:15, Vladimir Melnik wrote:
>
> Indeed, I wouldn't be surprised if I had around 80-100 MB/s, but 10-15
> MB/s
OK, I tweaked the virtualization parameters and now I have ~10 Gbit/s
between all the nodes.
$ iperf3 -c 10.13.1.16
Connecting to host 10.13.1.16, port 5201
[ 4] local 10.13.1.17 port 47242 connected to 10.13.1.16 port 5201
[ ID] Interval Transfer Bandwidth Retr Cwnd
[ 4]
Yeah, 10 Gbps is affordable these days, even 25 Gbps! Wouldn't go lower than
10 Gbps.
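For rough planning, converting link speed to raw throughput is simple arithmetic; a small sketch (TCP/IP overhead and the replica 3 fan-out, where the client writes to all bricks at once, are why real numbers come in lower):

```shell
#!/bin/sh
# Rough conversion: N Gbit/s = N * 1000 / 8 MB/s of raw bandwidth.
for gbps in 1 10 25; do
    echo "${gbps} Gbit/s = ~$((gbps * 1000 / 8)) MB/s raw"
done
```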
On Jul 3, 2019, 16:59, Marcus Schopen wrote:
Hi,
On Wednesday, 2019-07-03 at 15:16 -0400, Dmitry Filonov wrote:
> Well, if your network is limited to 100MB/s then it doesn't matter if
> storage is capable of doing 300+MB/s.
> But 15 MB/s is still way less than 100 MB/s
What network is recommended for the backend, 10 Gigabit or better?
Indeed, I wouldn't be surprised if I had around 80-100 MB/s, but 10-15
MB/s is really low. :-(
Even when I mount a filesystem on the same GlusterFS node, I have the
following result:
10485760 bytes (10 MB) copied, 0.409856 s, 25.6 MB/s
10485760 bytes (10 MB) copied, 0.38967 s, 26.9 MB/s
10485760
Very good, I will give this a try.
Thank you.
Carl
On 2019-07-03 3:56 p.m., John Strunk wrote:
Nope. Just:
* Ensure all volumes are fully healed so you don't run into split brain
* Go ahead and shutdown the server needing maintenance
** If you just want gluster down on that server node: stop glusterd and kill
the glusterfs bricks, then do what you need to do
** If you just want to power off:
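The steps above could be scripted roughly as follows. This is only a sketch: the volume name and the use of a systemd unit are assumptions, and the Gluster commands are guarded so the script is inert on a host without Gluster installed.

```shell
#!/bin/sh
# Sketch of the maintenance procedure described above.
VOLNAME=storage   # assumption: replace with your actual volume name
if command -v gluster >/dev/null 2>&1; then
    # 1. Make sure the volume is fully healed before touching the node.
    gluster volume heal "$VOLNAME" info

    # 2a. Gluster down only: stop the daemon and the brick processes.
    systemctl stop glusterd
    pkill glusterfs

    # 2b. Or simply power the node off once heal info shows no entries.
else
    echo "gluster not installed; sketch only"
fi
```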
Hello,
Is there a way to mount a GlusterFS volume using FUSE on a BSD machine such as
OpenBSD?
If not, what is the alternative, I guess NFS?
Regards,
M.
___
Gluster-users mailing list
Gluster-users@gluster.org
I have a replica 3 cluster: 3 nodes with bricks and 2 "client" nodes
that run the VMs through a mount of the data on the bricks.
Now, one of the bricks needs maintenance and I will need to shut it down
for about 15 minutes.
I didn't find any information on what I am supposed to do.
If I get
Well, if your network is limited to 100MB/s then it doesn't matter if
storage is capable of doing 300+MB/s.
But 15 MB/s is still way less than 100 MB/s
P.S. just tried on my gluster and found out that I am getting ~15MB/s on
replica 3 volume on SSDs and... 2MB/s on replica 3 volume on HDDs.
Thank you, I tried to do that.
Created a new volume:
$ gluster volume create storage2 \
replica 3 \
arbiter 1 \
transport tcp \
gluster1.k8s.maitre-d.tucha.ua:/mnt/storage2/brick1 \
gluster2.k8s.maitre-d.tucha.ua:/mnt/storage2/brick2 \
Thanks for the reply. I can show this usage with lsof on some 3.12.15 clients:
# ps -ef | awk '/[g]lusterfs/ {print $2}' | xargs -n 1 lsof -p | grep -w 4007
glusterfs 28011 root  7u  IPv4 205739452  0t0  TCP
traitNetVM1:49148->glusterVMa:4007 (SYN_SENT)
glusterfs 49535 root  7u
I am not aware of any port 4007 usage anywhere in Gluster, nor in any
dependent projects.
-Amar
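For anyone trying to pin down where the port 4007 traffic comes from, a quick check might look like this (the `ss` filter syntax assumes iproute2 is present; it falls back to lsof, as used earlier in the thread):

```shell
#!/bin/sh
# List TCP sockets that involve port 4007 on either end.
PORT=4007
if command -v ss >/dev/null 2>&1; then
    # -t TCP, -a all states, -n numeric output.
    ss -tan "( sport = :$PORT or dport = :$PORT )"
elif command -v lsof >/dev/null 2>&1; then
    lsof -iTCP:"$PORT"
fi
```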
On Wed, Jul 3, 2019 at 9:44 PM wrote:
> Does anybody have info about this port, is it mandatory, is there any way
> to disable it if so ?
>
> ----- Original message -----
> From: n...@furyweb.fr
> To:
Thank you, it helped a little:
$ for i in {1..5}; do { dd if=/dev/zero of=/mnt/glusterfs1/test.tmp bs=1M
count=10 oflag=sync; rm -f /mnt/glusterfs1/test.tmp; }; done 2>&1 | grep copied
10485760 bytes (10 MB) copied, 0.738968 s, 14.2 MB/s
10485760 bytes (10 MB) copied, 0.725296 s, 14.5 MB/s
Does anybody have info about this port, is it mandatory, is there any way to
disable it if so ?
----- Original message -----
From: n...@furyweb.fr
To: "gluster-users"
Sent: Thursday, June 20, 2019 19:01:40
Subject: [Gluster-users] What is TCP/4007 for ?
I have several Gluster clients behind firewalls and
Am I the only one having this problem?
----- Original message -----
From: n...@furyweb.fr
To: "gluster-users"
Sent: Friday, June 21, 2019 09:48:47
Subject: [Gluster-users] Parallel process hang on gluster volume
I encountered an issue on production servers using GlusterFS servers 5.1 and
clients 4.1.5 when
Just to be exact - here are the results of iperf3 measurements between the
client and one of servers:
$ iperf3 -c gluster1
Connecting to host gluster1, port 5201
[ 4] local 10.13.16.1 port 33156 connected to 10.13.1.16 port 5201
[ ID] Interval Transfer Bandwidth Retr Cwnd
[
Dear colleagues,
I have a lab with a bunch of virtual machines (the virtualization is
provided by KVM) running on the same physical host. 4 of these VMs are
working as a GlusterFS cluster and there's one more VM that works as a
client. I'll specify all the packages' versions at the end of this
Any idea?
- Kindest regards,
Milos Cuculovic
IT Manager
---
MDPI AG
Postfach, CH-4020 Basel, Switzerland
Office: St. Alban-Anlage 66, 4052 Basel, Switzerland
Tel. +41 61 683 77 35
Fax +41 61 302 89 18
Email: cuculo...@mdpi.com
Skype: milos.cuculovic.mdpi