Dear colleagues,
I have a lab with a bunch of virtual machines (the virtualization is
provided by KVM) running on the same physical host. 4 of these VMs are
working as a GlusterFS cluster and there's one more VM that works as a
client. I'll specify all the packages' versions at the end of this message.
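For context, a client VM typically attaches to such a cluster with a FUSE mount; a sketch only, where the server hostname "gluster1" and the volume name "myvol" are placeholders (the mount point matches the one used in the tests later in this thread):

```shell
# Hypothetical client-side mount of the Gluster volume.
# "gluster1" and "myvol" are placeholder names, not from this thread.
mount -t glusterfs gluster1:/myvol /mnt/glusterfs1
```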
> Have you tried with a fresh replica volume with the 'virt' group applied?
> Best Regards,
> Strahil Nikolov
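For reference, the suggestion above refers to the gluster CLI's predefined 'virt' option group, which applies the recommended settings for VM-image workloads in one step. A sketch with a placeholder volume name:

```shell
# Apply the predefined 'virt' option group to a volume.
# "myvol" is a placeholder, not the volume name from this thread.
gluster volume set myvol group virt
```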
> On Wednesday, July 3, 2019, 19:18:18 GMT+3, Vladimir Melnik wrote:
>
> Thank you, it helped a little:
>
> $ for i in {1..5}; do { dd if=/dev/zero of=/mnt/glusterfs1/test.tmp b
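The dd loop above is cut off; it was presumably along these lines. A sketch only: the block size, count, and the `oflag=sync` flag are assumptions (sync writes are discussed later in the thread), and a local temp file stands in for the Gluster mount so the snippet is self-contained:

```shell
# Five sequential write passes, reporting throughput each time.
# bs/count and oflag=sync are assumptions; the original wrote to
# /mnt/glusterfs1/test.tmp, a local temp file is used here instead.
target=$(mktemp)
for i in 1 2 3 4 5; do
  dd if=/dev/zero of="$target" bs=1M count=10 oflag=sync 2>&1 | tail -n 1
done
rm -f "$target"
```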
> ...SSDs and... 2MB/s on replica 3 volume on HDDs.
> Something to look at next week.
>
>
>
> --
> Dmitry Filonov
> Linux Administrator
> SBGrid Core | Harvard Medical School
> 250 Longwood Ave, SGM-114
> Boston, MA 02115
>
>
> On Wed, Jul 3, 2019 at 12:18 PM Vladimir Melnik wrote:
OK, I tweaked the virtualization parameters and now I have ~10 Gbit/s
between all the nodes.
$ iperf3 -c 10.13.1.16
Connecting to host 10.13.1.16, port 5201
[ 4] local 10.13.1.17 port 47242 connected to 10.13.1.16 port 5201
[ ID] Interval Transfer Bandwidth Retr Cwnd
[ 4]
> I think it's related to the sync type of oflag.
> Do you have a RAID controller on each brick, to immediately take the data into
> the cache?
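The effect behind this question is easy to reproduce locally: buffered writes are acknowledged by the page cache, while `oflag=sync` forces dd to wait for each block to reach stable storage, which is exactly the latency a battery-backed controller cache would absorb. A sketch with illustrative sizes, against a local temp file:

```shell
f=$(mktemp)
# Buffered: the page cache acknowledges writes immediately.
dd if=/dev/zero of="$f" bs=1M count=10 2>&1 | tail -n 1
# Synchronous: dd waits for every block to be durably written.
dd if=/dev/zero of="$f" bs=1M count=10 oflag=sync 2>&1 | tail -n 1
rm -f "$f"
```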
>
> Best Regards,
> Strahil Nikolov
>
> On Jul 3, 2019 23:15, Vladimir Melnik wrote:
> >
> > Indeed, I wouldn't be surprised