Re: [Gluster-users] Right way to use community Gluster on genuine RHEL?

2022-07-18 Thread Yaniv Kaul
On Mon, Jul 18, 2022 at 6:34 PM Thomas Cameron <
thomas.came...@camerontech.com> wrote:

> On 7/18/22 09:18, Péter Károly JUHÁSZ wrote:
> > The best would be officially pre built rpms for RHEL.
>
> Where are there official Red Hat Gluster 10 RPMs for RHEL?
>

There's no such thing. Let's not confuse the upstream Gluster project with the
Red Hat product - RHGS (Red Hat Gluster Storage) - which has a different
version[1] and lifecycle[2] than the project.
Red Hat does not build official upstream project RPMs for RHEL.

That being said, I'm somewhat surprised the CentOS RPMs don't work on RHEL
- is that indeed the case?
Y.

[1] https://access.redhat.com/solutions/543123
[2] https://access.redhat.com/support/policy/updates/rhs
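
Since Red Hat doesn't build upstream RPMs for RHEL, the usual route is to point
RHEL at the community packages published through the CentOS Storage SIG. A rough
sketch of that approach, assuming the SIG repository layout (the baseurl below is
a placeholder - verify the actual path and GPG key for your release before using it):

# hypothetical repo definition - adjust baseurl/version to match the Storage SIG
cat > /etc/yum.repos.d/centos-gluster10.repo <<'EOF'
[centos-gluster10]
name=CentOS Storage SIG - GlusterFS 10
baseurl=https://mirror.centos.org/centos/$releasever/storage/$basearch/gluster-10/
gpgcheck=0
enabled=1
EOF
dnf install glusterfs-server glusterfs-fuse
systemctl enable --now glusterd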

>
> Thomas
>
> 
>
>
>
> Community Meeting Calendar:
>
> Schedule -
> Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
> Bridge: https://meet.google.com/cpu-eiue-hvk
> Gluster-users mailing list
> Gluster-users@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-users
>




Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://meet.google.com/cpu-eiue-hvk
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Odd "Transport endpoint is not connected" when trying to gunzip a file

2022-06-15 Thread Yaniv Kaul
On Wed, Jun 15, 2022 at 6:28 PM Pat Haley  wrote:

>
> Hi,
>
> We have a cluster whose common storage is a gluster volume consisting of 5
> bricks residing on 3 servers.
>
>- Gluster volume machines
>   - mseas-data2:  CentOS release 6.8 (Final)
>   - mseas-data3:  CentOS release 6.10 (Final)
>   - mseas-data4:  CentOS Linux release 7.9.2009 (Core)
>- Client machines
>   - CentOS Linux release 7.9.2009 (Core)
>
> More details on the gluster volume are included below.
>
> We were recently trying to gunzip a file on the gluster volume and got a
> "Transport endpoint is not connected" error, even though every test we try
> shows that gluster is fully up and running fine. We traced the file to brick 3
> on the server mseas-data3. We have included the relevant portions of the
> various log files on the client (mseas) where we were running the gunzip
> command and on the server hosting the file (mseas-data3), below the gluster
> information.
>
> What can you suggest we do to further debug and/or solve this issue?
>
> Thanks
> Pat
>
> 
> Gluster volume information
> 
>
> ---
> gluster volume info
> -
>
> Volume Name: data-volume
> Type: Distribute
> Volume ID: c162161e-2a2d-4dac-b015-f31fd89ceb18
> Status: Started
> Number of Bricks: 5
> Transport-type: tcp
> Bricks:
> Brick1: mseas-data2:/mnt/brick1
> Brick2: mseas-data2:/mnt/brick2
> Brick3: mseas-data3:/export/sda/brick3
> Brick4: mseas-data3:/export/sdc/brick4
> Brick5: mseas-data4:/export/brick5
> Options Reconfigured:
> diagnostics.client-log-level: ERROR
> network.inode-lru-limit: 5
> performance.md-cache-timeout: 60
> performance.open-behind: off
> disperse.eager-lock: off
> auth.allow: *
> server.allow-insecure: on
> nfs.exports-auth-enable: on
> diagnostics.brick-sys-log-level: WARNING
> performance.readdir-ahead: on
> nfs.disable: on
> nfs.export-volumes: off
> cluster.min-free-disk: 1%
>
> ---
> gluster volume status
> 
>
> Status of volume: data-volume
> Gluster process TCP Port  RDMA Port  Online
> Pid
>
> --
> Brick mseas-data2:/mnt/brick1   49154 0  Y
> 15978
> Brick mseas-data2:/mnt/brick2   49155 0  Y
> 15997
> Brick mseas-data3:/export/sda/brick349153 0  Y
> 14221
> Brick mseas-data3:/export/sdc/brick449154 0  Y
> 14240
> Brick mseas-data4:/export/brick549152 0  Y
> 50569
>
>
> ---
> gluster peer status
> -
>
> Number of Peers: 2
>
> Hostname: mseas-data3
> Uuid: b39d4deb-c291-437e-8013-09050c1fa9e3
> State: Peer in Cluster (Connected)
>
> Hostname: mseas-data4
> Uuid: 5c4d06eb-df89-4e5c-92e4-441fb401a9ef
> State: Peer in Cluster (Connected)
>
> ---
> glusterfs --version
> 
>
> glusterfs 3.7.11 built on Apr 18 2016 13:20:46
>

This is a somewhat outdated version; I think it's best to upgrade (or
better - migrate?) to a newer version.
Y.
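
Independently of the upgrade, it may be worth confirming from the client side
that the brick which timed out is actually reachable, and checking which clients
the bricks see. A small sketch, using the host and port taken from the log
snippet below:

# from the client (mseas): basic reachability of the brick that stopped responding
ping -c 3 172.16.1.113
nc -zv 172.16.1.113 49153      # brick port reported in the log

# from any server: list the clients each brick currently has connected
gluster volume status data-volume clients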

> Repository revision: git://git.gluster.com/glusterfs.git
> Copyright (c) 2006-2013 Red Hat, Inc. 
> 
> GlusterFS comes with ABSOLUTELY NO WARRANTY.
> It is licensed to you under your choice of the GNU Lesser
> General Public License, version 3 or any later version (LGPLv3
> or later), or the GNU General Public License, version 2 (GPLv2),
> in all cases as published by the Free Software Foundation.
>
> 
> Relevant sections from log files
> 
>
> ---
> mseas: gdata.log
> -
>
> [2022-06-15 14:51:17.263858] C
> [rpc-clnt-ping.c:165:rpc_clnt_ping_timer_expired] 0-data-volume-client-2:
> server 172.16.1.113:49153 has not responded in the last 42 seconds,
> disconnecting.
> [2022-06-15 14:51:17.264522] E [rpc-clnt.c:362:saved_frames_unwind] (-->
> /usr/local/lib/libglusterfs.so.0(_gf_log_callingfn+0x172)[0x7f84886a0202]
> (-->
> /usr/local/lib/libgfrpc.so.0(saved_frames_unwind+0x1c2)[0x7f848846c3e2]
> (--> /usr/local/lib/libgfrpc.so.0(saved_frames_destroy+0xe)[0x7f848846c4de]
> (-->
> /usr/local/lib/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x7a)[0x7f848846dd2a]
> (--> /usr/local/lib/libgfrpc.so.0(rpc_clnt_notify+0x88)[0x7f848846e538]
> ) 0-data-volume-client-2: forced unwinding frame type(GlusterFS 3.3)
> op(READ(12)) called at 2022-06-15 14:49:52.113795 

Re: [Gluster-users] Announcing Gluster release 10.1

2022-02-15 Thread Yaniv Kaul
Responding to the original ask, but in a different way - we have been
experimenting a bit with building Gluster via containers.
We have a main branch built for EL7 @
https://github.com/gluster/Gluster-Builds/actions/runs/1847522621 - can
someone give it a spin and share some feedback on whether it's useful?

TIA,
Y.


On Fri, Feb 11, 2022 at 11:51 AM Alan Orth  wrote:

> Hi,
>
> I don't see the GlusterFS 10.x packages for CentOS 7. Normally they are
> available from the storage SIG via a metapackage. For example, I'm
> currently using centos-release-gluster9. I thought I might just need to
> wait a few days, but now I am curious and thought I'd send a message to the
> list to check...
>
> Thanks,
>
> P.S. the gluster.org website still lists gluster9 as the latest:
> https://www.gluster.org/install/
>
> On Tue, Feb 1, 2022 at 8:34 AM Shwetha Acharya 
> wrote:
>
>> Hi All,
>>
>> The Gluster community is pleased to announce the release of Gluster 10.1.
>> Packages are available at [1].
>> Release notes for the release can be found at [2].
>>
>>
>> *Highlights of Release:*
>> - Fix missing stripe count issue with upgrade from 9.x to 10.x
>> - Fix IO failure when shrinking distributed dispersed volume with ongoing IO
>> - Fix log spam introduced with glusterfs 10.0
>> - Enable ltcmalloc_minimal instead of ltcmalloc
>>
>> NOTE: Please expect the CentOS 9 Stream packages to land in the coming days.
>>
>> Thanks,
>> Shwetha
>>
>> References:
>>
>> [1] Packages for 10.1:
>> https://download.gluster.org/pub/gluster/glusterfs/10/10.1/
>>
>> [2] Release notes for 10.1:
>> https://docs.gluster.org/en/latest/release-notes/10.1/
>> 
>>
>>
>>
>> Community Meeting Calendar:
>>
>> Schedule -
>> Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
>> Bridge: https://meet.google.com/cpu-eiue-hvk
>> Gluster-users mailing list
>> Gluster-users@gluster.org
>> https://lists.gluster.org/mailman/listinfo/gluster-users
>>
>
>
> --
> Alan Orth
> alan.o...@gmail.com
> https://picturingjordan.com
> https://englishbulgaria.net
> https://mjanja.ch
> 
>
>
>
> Community Meeting Calendar:
>
> Schedule -
> Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
> Bridge: https://meet.google.com/cpu-eiue-hvk
> Gluster-users mailing list
> Gluster-users@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-users
>




Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://meet.google.com/cpu-eiue-hvk
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] [Gluster-devel] Announcing Gluster release 10.1

2022-02-13 Thread Yaniv Kaul
On Sun, 13 Feb 2022, 10:40 Strahil Nikolov  wrote:

> Not really.
> Debian takes care of packaging for Debian, Ubuntu for their debs, openSUSE
> for their rpms; CentOS is part of Red Hat and they can decide for it.
>

CentOS is not part of Red Hat - we support it, as other companies,
organizations and individuals do.

All are welcome to join the CentOS Storage SIG[1] which takes care of
Gluster packaging.

Y.

[1] https://wiki.centos.org/SpecialInterestGroup/Storage
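
For reference, consuming the Storage SIG builds on CentOS usually amounts to
installing the release metapackage and then the packages themselves - a minimal
sketch (the metapackage follows the centos-release-gluster<N> pattern mentioned
in this thread; check which versions exist for your CentOS release):

dnf install centos-release-gluster9
dnf install glusterfs-server glusterfs-fuse
systemctl enable --now glusterd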


> Best Regards,
> Strahil Nikolov
>
> On Sun, Feb 13, 2022 at 10:13, Zakhar Kirpichenko
>  wrote:
> ---
>
> Community Meeting Calendar:
> Schedule -
> Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
> Bridge: https://meet.google.com/cpu-eiue-hvk
>
> Gluster-devel mailing list
> gluster-de...@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-devel
>
>




Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://meet.google.com/cpu-eiue-hvk
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Announcing Gluster release 10.1

2022-02-13 Thread Yaniv Kaul
On Sun, Feb 13, 2022 at 10:13 AM Zakhar Kirpichenko 
wrote:

> A nice passive-aggressive comment from a @redhat.com. Kind of proves my
> point.
>

I will not argue with your interpretation; I'll simply state it was not my
intention.
I am unsure what point you are trying to get across, so let me reiterate my
ask: join the community - this is the best way to influence its direction.

BTW, it is not a specific issue with CentOS - we have a similar challenge
with 32-bit platform support[1] - without active maintenance work, it breaks.
Y.

[1] https://github.com/gluster/glusterfs/issues/2979


>
> /Z
>
> On Sun, Feb 13, 2022 at 10:08 AM Yaniv Kaul  wrote:
>
>>
>>
>> On Sun, Feb 13, 2022 at 9:58 AM Zakhar Kirpichenko 
>> wrote:
>>
>>> > Maintenance updates != new feature releases (and never has).
>>>
>>> Thanks for this, but what's your point exactly? Feature updates for
>>> CentOS 7 ended in August 2020, 1.5 years ago. This did not affect the
>>> release of 8.x updates, or 9.x release and updates for CentOS 7. Dropping
>>> CentOS 7 builds from 10.x onwards seems more in line with what RedHat/IBM
>>> did to CentOS rather than with any kind of CentOS 7 updates.
>>>
>>
>> Do you have the resources to maintain the builds for EL 7?
>> If so, I'm sure the community would appreciate the effort and dedication
>> and will be happy to see this continued support.
>>
>> Personally, I would like to see CentOS 9 Stream support soon.
>> Y.
>>
>>
>>> /Z
>>>
>>> On Sun, Feb 13, 2022 at 9:38 AM Eliyahu Rosenberg <
>>> erosenb...@lightricks.com> wrote:
>>>
>>>> Maintenance updates != new feature releases (and never has).
>>>>
>>>> On Fri, Feb 11, 2022 at 2:14 PM Zakhar Kirpichenko 
>>>> wrote:
>>>>
>>>>> An interesting decision not to support GlusterFS 10.x on CentOS 7,
>>>>> which I'm sure is in use by many and will be supported with maintenance
>>>>> updates for another 2 years.
>>>>>
>>>>> /Z
>>>>>
>>>>> On Fri, Feb 11, 2022 at 12:16 PM Shwetha Acharya 
>>>>> wrote:
>>>>>
>>>>>> Hi Alan,
>>>>>>
>>>>>> Please refer to [1]
>>>>>> As per community guidelines, we will not be supporting CentOS 7 from
>>>>>> GlusterFS 10 onwards.
>>>>>>
>>>>>> Also, thanks for letting us know about
>>>>>> https://www.gluster.org/install/. We will update it at the earliest.
>>>>>>
>>>>>> [1]
>>>>>> https://docs.gluster.org/en/latest/Install-Guide/Community-Packages/
>>>>>>
>>>>>> Regards,
>>>>>> Shwetha
>>>>>>
>>>>>> On Fri, Feb 11, 2022 at 3:21 PM Alan Orth 
>>>>>> wrote:
>>>>>>
>>>>>>> Hi,
>>>>>>>
>>>>>>> I don't see the GlusterFS 10.x packages for CentOS 7. Normally they
>>>>>>> are available from the storage SIG via a metapackage. For example, I'm
>>>>>>> currently using centos-release-gluster9. I thought I might just need to
>>>>>>> wait a few days, but now I am curious and thought I'd send a message to 
>>>>>>> the
>>>>>>> list to check...
>>>>>>>
>>>>>>> Thanks,
>>>>>>>
>>>>>>> P.S. the gluster.org website still lists gluster9 as the latest:
>>>>>>> https://www.gluster.org/install/
>>>>>>>
>>>>>>> On Tue, Feb 1, 2022 at 8:34 AM Shwetha Acharya 
>>>>>>> wrote:
>>>>>>>
>>>>>>>> Hi All,
>>>>>>>>
>>>>>>>> The Gluster community is pleased to announce the release of
>>>>>>>> Gluster10.1
>>>>>>>> Packages available at [1].
>>>>>>>> Release notes for the release can be found at [2].
>>>>>>>>
>>>>>>>>
>>>>>>>> *Highlights of Release:*- Fix missing stripe count issue with
>>>>>>>> upgrade from 9.x to 10.x
>>>>>>>> - Fix IO failure when shrinking distributed dispersed volume with
>>>>>>>> ongoing IO
>>>>>>>> - Fix log spam introduced with glusterfs 10.0

Re: [Gluster-users] Announcing Gluster release 10.1

2022-02-13 Thread Yaniv Kaul
On Sun, Feb 13, 2022 at 9:58 AM Zakhar Kirpichenko  wrote:

> > Maintenance updates != new feature releases (and never has).
>
> Thanks for this, but what's your point exactly? Feature updates for CentOS
> 7 ended in August 2020, 1.5 years ago. This did not affect the release of
> 8.x updates, or 9.x release and updates for CentOS 7. Dropping CentOS 7
> builds from 10.x onwards seems more in line with what RedHat/IBM did to
> CentOS rather than with any kind of CentOS 7 updates.
>

Do you have the resources to maintain the builds for EL 7?
If so, I'm sure the community would appreciate the effort and dedication
and will be happy to see this continued support.

Personally, I would like to see CentOS 9 Stream support soon.
Y.


> /Z
>
> On Sun, Feb 13, 2022 at 9:38 AM Eliyahu Rosenberg <
> erosenb...@lightricks.com> wrote:
>
>> Maintenance updates != new feature releases (and never has).
>>
>> On Fri, Feb 11, 2022 at 2:14 PM Zakhar Kirpichenko 
>> wrote:
>>
>>> An interesting decision not to support GlusterFS 10.x on CentOS 7, which
>>> I'm sure is in use by many and will be supported with maintenance updates
>>> for another 2 years.
>>>
>>> /Z
>>>
>>> On Fri, Feb 11, 2022 at 12:16 PM Shwetha Acharya 
>>> wrote:
>>>
 Hi Alan,

 Please refer to [1]
 As per community guidelines, we will not be supporting CentOS 7 from
 GlusterFS 10 onwards.

 Also, thanks for letting us know about https://www.gluster.org/install/. We
 will update it at the earliest.

 [1]
 https://docs.gluster.org/en/latest/Install-Guide/Community-Packages/

 Regards,
 Shwetha

 On Fri, Feb 11, 2022 at 3:21 PM Alan Orth  wrote:

> Hi,
>
> I don't see the GlusterFS 10.x packages for CentOS 7. Normally they
> are available from the storage SIG via a metapackage. For example, I'm
> currently using centos-release-gluster9. I thought I might just need to
> wait a few days, but now I am curious and thought I'd send a message to 
> the
> list to check...
>
> Thanks,
>
> P.S. the gluster.org website still lists gluster9 as the latest:
> https://www.gluster.org/install/
>
> On Tue, Feb 1, 2022 at 8:34 AM Shwetha Acharya 
> wrote:
>
>> Hi All,
>>
>> The Gluster community is pleased to announce the release of
>> Gluster10.1
>> Packages available at [1].
>> Release notes for the release can be found at [2].
>>
>>
>> *Highlights of Release:*- Fix missing stripe count issue with
>> upgrade from 9.x to 10.x
>> - Fix IO failure when shrinking distributed dispersed volume with
>> ongoing IO
>> - Fix log spam introduced with glusterfs 10.0
>> - Enable ltcmalloc_minimal instead of ltcmalloc
>>
>> NOTE: Please expect the CentOS 9 stream packages to land in the
>> coming days this week.
>>
>> Thanks,
>> Shwetha
>>
>> References:
>>
>> [1] Packages for 10.1:
>> https://download.gluster.org/pub/gluster/glusterfs/10/10.1/
>>
>> [2] Release notes for 10.1:
>> https://docs.gluster.org/en/latest/release-notes/10.1/
>> 
>>
>>
>>
>> Community Meeting Calendar:
>>
>> Schedule -
>> Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
>> Bridge: https://meet.google.com/cpu-eiue-hvk
>> Gluster-users mailing list
>> Gluster-users@gluster.org
>> https://lists.gluster.org/mailman/listinfo/gluster-users
>>
>
>
> --
> Alan Orth
> alan.o...@gmail.com
> https://picturingjordan.com
> https://englishbulgaria.net
> https://mjanja.ch
>
 



 Community Meeting Calendar:

 Schedule -
 Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
 Bridge: https://meet.google.com/cpu-eiue-hvk
 Gluster-users mailing list
 Gluster-users@gluster.org
 https://lists.gluster.org/mailman/listinfo/gluster-users

>>> 
>>>
>>>
>>>
>>> Community Meeting Calendar:
>>>
>>> Schedule -
>>> Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
>>> Bridge: https://meet.google.com/cpu-eiue-hvk
>>> Gluster-users mailing list
>>> Gluster-users@gluster.org
>>> https://lists.gluster.org/mailman/listinfo/gluster-users
>>>
>> 
>
>
>
> Community Meeting Calendar:
>
> Schedule -
> Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
> Bridge: https://meet.google.com/cpu-eiue-hvk
> Gluster-users mailing list
> Gluster-users@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-users
>




Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://meet.google.com/cpu-eiue-hvk
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Read from fastest node only

2021-07-28 Thread Yaniv Kaul
On Wed, Jul 28, 2021 at 5:50 AM David Cunningham 
wrote:

> Hi Yaniv,
>
> It may be my lack of knowledge, but I can't see how the fastest response
> time could differ from file to file. If that's true, then it would be enough
> to test periodically which node is fastest for this client, rather than
> having to do it for every single read.
>

In real life, the 'best' node is the one with the most overall free
resources across CPU, network and disk IO. So it could change, and it might
change all the time.
Network or disk saturation can be common, as can a disk performing garbage
collection, the CPU being hogged by something, a noisy neighbor, etc...

Our latency check is indeed not per file, AFAIK.
Y.
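
If you want to try the latency-based policy, it is a single volume option - a
sketch, with VOLNAME as a placeholder (option 4 corresponds to "least network
ping latency" in the help text quoted below):

gluster volume get VOLNAME cluster.read-hash-mode    # show the current value
gluster volume set VOLNAME cluster.read-hash-mode 4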


> Thanks for the tip about read-hash-mode. I see the help is as below. Value
> 4 may help, but not if the latency is tested for every file read. Value 0
> may help, but it depends how the children are ordered. Does anyone know
> more about how these work?
>
> Option: cluster.read-hash-mode
> Default Value: 1
> Description: inode-read fops happen only on one of the bricks in
> replicate. AFR will prefer the one computed using the method specified
> using this option.
> 0 = first readable child of AFR, starting from 1st child.
> 1 = hash by GFID of file (all clients use same subvolume).
> 2 = hash by GFID of file and client PID.
> 3 = brick having the least outstanding read requests.
> 4 = brick having the least network ping latency.
>
> Thanks again.
>
>
> On Tue, 27 Jul 2021 at 19:16, Yaniv Kaul  wrote:
>
>>
>>
>> On Tue, Jul 27, 2021 at 9:50 AM David Cunningham <
>> dcunning...@voisonics.com> wrote:
>>
>>> Hello,
>>>
>>> We have a replicated GlusterFS cluster, and my understanding is that the
>>> GlusterFS FUSE client will check the file with all nodes before doing a
>>> read.
>>>
>>> For our application it is not actually critical to be certain of having
>>> the latest version of a file, and it would be preferable to speed up the
>>> read by simply reading the file from the fastest node. This would be
>>> especially beneficial if some of the other nodes have higher latency from
>>> the client.
>>>
>>
>> How do you define, in real time, per file, which is the fastest node?
>> Maybe you are looking for read-hash-mode volume option?
>> Y.
>>
>>>
>>> Is it possible to do this? Thanks in advance for any assistance.
>>>
>>> --
>>> David Cunningham, Voisonics Limited
>>> http://voisonics.com/
>>> USA: +1 213 221 1092
>>> New Zealand: +64 (0)28 2558 3782
>>> 
>>>
>>>
>>>
>>> Community Meeting Calendar:
>>>
>>> Schedule -
>>> Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
>>> Bridge: https://meet.google.com/cpu-eiue-hvk
>>> Gluster-users mailing list
>>> Gluster-users@gluster.org
>>> https://lists.gluster.org/mailman/listinfo/gluster-users
>>>
>>
>
> --
> David Cunningham, Voisonics Limited
> http://voisonics.com/
> USA: +1 213 221 1092
> New Zealand: +64 (0)28 2558 3782
>




Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://meet.google.com/cpu-eiue-hvk
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Read from fastest node only

2021-07-27 Thread Yaniv Kaul
On Tue, Jul 27, 2021 at 9:50 AM David Cunningham 
wrote:

> Hello,
>
> We have a replicated GlusterFS cluster, and my understanding is that the
> GlusterFS FUSE client will check the file with all nodes before doing a
> read.
>
> For our application it is not actually critical to be certain of having
> the latest version of a file, and it would be preferable to speed up the
> read by simply reading the file from the fastest node. This would be
> especially beneficial if some of the other nodes have higher latency from
> the client.
>

How do you define, in real time, per file, which is the fastest node?
Maybe you are looking for read-hash-mode volume option?
Y.

>
> Is it possible to do this? Thanks in advance for any assistance.
>
> --
> David Cunningham, Voisonics Limited
> http://voisonics.com/
> USA: +1 213 221 1092
> New Zealand: +64 (0)28 2558 3782
> 
>
>
>
> Community Meeting Calendar:
>
> Schedule -
> Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
> Bridge: https://meet.google.com/cpu-eiue-hvk
> Gluster-users mailing list
> Gluster-users@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-users
>




Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://meet.google.com/cpu-eiue-hvk
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] distributed glusterfs volume of four ramdisks problems

2021-07-11 Thread Yaniv Kaul
On Sun, 11 Jul 2021, 23:59 Ewen Chan  wrote:

> Yaniv:
>
> I created a directory on a XFS formatted drive and that initially worked
> with tcp/inet.
>
> I then went to stop, delete, and tried to recreate the gluster volume with
> the option "transport tcp,rdma", it failed.
>

RDMA support was deprecated in recent releases.
Y.
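
As for reusing the brick directories of a deleted volume: the "already part of a
volume" error usually comes from the leftover xattrs and the .glusterfs directory
on the brick path. A rough sketch of cleaning that up and recreating with tcp
only (run on each node, and only after making sure nothing valuable is left
there):

setfattr -x trusted.glusterfs.volume-id /mnt/ramdisk/gv0
setfattr -x trusted.gfid /mnt/ramdisk/gv0
rm -rf /mnt/ramdisk/gv0/.glusterfs

gluster volume create gv0 transport tcp node{1..4}:/mnt/ramdisk/gv0
gluster volume start gv0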


> I had to use the force options for gluster to work.
>
> But then it failed when trying to mount the volume, but prior to this
> change, I was able to mount the glusterfs volume using tcp/inet only.
>
> But now when I try to re-create the volume with "transport tcp,rdma", it
> fails.
>
> When I try to recreate the volume without any arguments, it fails as well
> because it thinks that the mount point/folder/directory has already been
> associated with a previous gluster volume, which I don't know how to
> properly resolve and none of the official documentation on gluster.org
> explains how to deal with that.
>
> Thank you.
>
> Sincerely,
> Ewen
>
> --
> *From:* Yaniv Kaul 
> *Sent:* July 11, 2021 4:02 PM
> *To:* Ewen Chan 
> *Subject:* Re: [Gluster-users] distributed glusterfs volume of four
> ramdisks problems
>
> Can you try on a non tmpfs file system?
> Y.
>
> On Sun, 11 Jul 2021, 22:59 Ewen Chan  wrote:
>
> Strahil:
>
> I just tried to create an entirely new gluster volume, gv1, instead of
> trying to use gv0.
>
> Same error.
>
> # gluster volume create gv1 node{1..4}:/mnt/ramdisk/gv1
> volume create: gv1: success: please start the volume to access data
>
> When I tried to start the volume with:
>
> # gluster volume start gv1
>
> gluster responds with:
>
> volume start: gv1: failed: Commit failed on localhost. Please check log
> file for details.
>
> Attached are the updated glusterd.log and cli.log files.
>
> I checked and without specifying the options or the transport parameters,
> it defaults to using tcp/inet, but that still failed, so I am not really
> sure what's going on here.
>
> Thanks.
>
> Sincerely,
> Ewen
>
> --
> *From:* Strahil Nikolov 
> *Sent:* July 11, 2021 2:49 AM
> *To:* gluster-users@gluster.org ; Ewen Chan <
> alpha754...@hotmail.com>
> *Subject:* Re: [Gluster-users] distributed glusterfs volume of four
> ramdisks problems
>
> Does it crash with tcp ?
> What happens when you mount on one of the hosts ?
>
>
> Best Regards,
> Strahil Nikolov
>
>
>
>
>
>
> В събота, 10 юли 2021 г., 18:55:40 ч. Гринуич+3, Ewen Chan <
> alpha754...@hotmail.com> написа:
>
>
>
>
>
>
>
> Hello everybody.
>
> I have a cluster with four nodes and I am trying to create a distributed
> glusterfs volume consisting of four RAM drives, each being 115 GB in size.
>
>
>
>
> I am running CentOS 7.7.1908.
>
>
>
>
> I created the ramdrives on each of the four nodes with the following
> command:
>
> # mount -t tmpfs -o size=115g tmpfs /mnt/ramdisk
>
>
>
>
> I then create the mount point for the gluster volume on each of the nodes:
>
>
> # mkdir -p /mnt/ramdisk/gv0
>
>
>
>
> And then I tried to create the glusterfs distributed volume:
>
> # gluster volume create gv0 transport tcp,rdma node{1..4}:/mnt/ramdisk/gv0
>
> And that came back with:
>
> volume create: gv0: success: please start the volume to access data
>
>
>
>
> When I tried to start the volume with:
>
>
> # gluster volume start gv0
>
>
>
> gluster responds with:
>
>
>
>
> volume start: gv0: failed: Commit failed on localhost. Please check log
> file for details.
>
>
>
>
> So I tried forcing the start with:
>
> # gluster volume start gv0 force
>
>
>
>
> gluster responds with:
>
>
>
>
> volume start: gv0: success
>
>
>
>
> I then created the mount point for the gluster volume:
>
> # mkdir -p /home/gluster
>
>
>
>
> And tried to mount the gluster gv0 volume:
>
> # mount -t glusterfs -o transport=rdma,direct-io-mode=enable node1:/gv0
> /home/gluster
>
>
>
>
> and the system crashes.
>
>
>
>
> After rebooting the system and switching users back to root, I get this:
>
> ABRT has detected 1 problem(s). For more info run: abrt-cli list --since
> 1625929899
>
>
>
>
> # abrt-cli list --since 1625929899
> id 2a8ae7a1207acc48a6fc4a6cd8c3c88ffcf431be
>
> reason: glusterfsd killed by SIGSEGV
>
> time:   Sat 10 Jul 2021 10:56:13 AM EDT
>
> cmdline:/usr/sbin/glusterfsd -s aes1 --volfile-id
> gv0.aes1.mnt-ramdisk-gv0 -p

Re: [Gluster-users] Ansible Resources

2021-04-04 Thread Yaniv Kaul
On Sat, Apr 3, 2021 at 11:21 PM Patrick Nixon  wrote:

> I'm looking into setting up ansible to build and maintain a couple of my
> clusters.
>
> I see several resources online and was wondering which ansible role /
> collection is considered the most featureful and stable.
>
> Ansible has a built-in gluster_volume module, but that doesn't install the
> client or anything.
> https://github.com/geerlingguy/ansible-for-devops/tree/master/gluster
> seems to do everything but is 8 months since last update
> https://github.com/gluster/gluster-ansible doesn't seem to install the
> necessary packages
>

https://github.com/gluster/gluster-ansible-repositories should install the
packages.
Y.
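
If the missing piece is only package installation, an ad-hoc task in front of any
of those roles can cover it - a minimal sketch (the inventory file and the
"gluster" group name are assumptions, adjust to your setup):

ansible -i inventory gluster -b -m package -a "name=glusterfs-server state=present"
ansible -i inventory gluster -b -m package -a "name=glusterfs-fuse state=present"
ansible -i inventory gluster -b -m service -a "name=glusterd state=started enabled=yes"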

>
> My clusters are pretty simple: distribute setups with some basic options,
> mounted from a couple of servers.
>
> Thanks for any guidance!
> 
>
>
>
> Community Meeting Calendar:
>
> Schedule -
> Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
> Bridge: https://meet.google.com/cpu-eiue-hvk
> Gluster-users mailing list
> Gluster-users@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-users
>




Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://meet.google.com/cpu-eiue-hvk
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Gluster usage scenarios in HPC cluster management

2021-03-23 Thread Yaniv Kaul
On Tue, Mar 23, 2021 at 10:02 AM Diego Zuccato 
wrote:

> Il 22/03/21 16:54, Erik Jacobson ha scritto:
>
> > So if you had 24 leaders like HLRS, there would be 8 replica-3 at the
> > bottom layer, and then distributed across. (replicated/distributed
> > volumes)
> I still have to grasp the "leader node" concept.
> Weren't gluster nodes "peers"? Or by "leader" you mean that it's
> mentioned in the fstab entry like
> /l1,l2,l3:gv0 /mnt/gv0 glusterfs defaults 0 0
> while the peer list includes l1,l2,l3 and a bunch of other nodes?
>
> > So we would have 24 leader nodes, each leader would have a disk serving
> > 4 bricks (one of which is simply a lock FS for CTDB, one is sharded,
> > one is for logs, and one is heavily optimized for non-object expanded
> > tree NFS). The term "disk" is loose.
> That's a system way bigger than ours (3 nodes, replica3arbiter1, up to
> 36 bricks per node).
>
> > Specs of a leader node at a customer site:
> >  * 256G RAM
> Glip! 256G for 4 bricks... No wonder I have had troubles running 26
> bricks in 64GB RAM... :)
>

If you can recompile Gluster, you may want to experiment with disabling
memory pools - this should save you some memory.
Y.

>
> --
> Diego Zuccato
> DIFA - Dip. di Fisica e Astronomia
> Servizi Informatici
> Alma Mater Studiorum - Università di Bologna
> V.le Berti-Pichat 6/2 - 40127 Bologna - Italy
> tel.: +39 051 20 95786
> 
>
>
>
> Community Meeting Calendar:
>
> Schedule -
> Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
> Bridge: https://meet.google.com/cpu-eiue-hvk
> Gluster-users mailing list
> Gluster-users@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-users
>
>
>




Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://meet.google.com/cpu-eiue-hvk
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Poor performance on a server-class system vs. desktop

2020-11-26 Thread Yaniv Kaul
On Thu, Nov 26, 2020 at 2:31 PM Dmitry Antipov  wrote:

> On 11/26/20 12:49 PM, Yaniv Kaul wrote:
>
> > I run a slightly different command, which hides the kernel stuff and
> focuses on the user mode functions:
> > sudo perf record --call-graph dwarf -j any --buildid-all --all-user -p
> `pgrep -d\, gluster` -F 2000 -ag
>
> Thanks.
>
> BTW, how much overhead is there in passing data between xlators? Even if
> most of their features
> are disabled, just passing through all of the below is unlikely to have
> near-to-zero overhead:
>

Very good question. I was always suspicious of that flow, and I do believe
we could do some optimizations, but here's the response I received back
then:
Here's some data from some tests I was running last week - the avg
round-trip
time spent by fops in the brick stack from the top translator io-stats till
posix before it is executed on-disk is less than 20 microseconds. And this
stack includes both translators that are enabled and used in RHHI as well as
the do-nothing xls you mention. In contrast, the round-trip time spent by
these
fops between the client and server translator is of the order of a few
hundred
microseconds to sometimes even 1ms.


> Thread 14 (Thread 0x7f2c0e7fc640 (LWP 19482) "glfs_rpcrqhnd"):
> #0  data_unref (this=0x7f2bfc032e68) at dict.c:768
> #1  0x7f2c290b90b9 in dict_deln (keylen=,
> key=0x7f2c163d542e "glusterfs.inodelk-dom-count", this=0x7f2bfc0bb1c8) at
> dict.c:645
> #2  dict_deln (this=0x7f2bfc0bb1c8, key=0x7f2c163d542e
> "glusterfs.inodelk-dom-count", keylen=) at dict.c:614
> #3  0x7f2c163c87ee in pl_get_xdata_requests (local=0x7f2bfc0ea658,
> xdata=0x7f2bfc0bb1c8) at posix.c:238
> #4  0x7f2c163b3267 in pl_get_xdata_requests (xdata=0x7f2bfc0bb1c8,
> local=) at posix.c:213
>

For example, https://github.com/gluster/glusterfs/issues/1707
optimizes pl_get_xdata_requests()
a bit.
Y.

#5  pl_writev (frame=0x7f2bfc0d5348, this=0x7f2c08014830,
> fd=0x7f2bfc0bc768, vector=0x7f2bfc105478, count=1, offset=108306432,
> flags=0, iobref=0x7f2c080820d0, xdata=0x7f2bfc0bb1c8) at posix.c:2299
> #6  0x7f2c16395e31 in worm_writev (frame=0x7f2bfc0d5348,
> this=, fd=0x7f2bfc0bc768, vector=0x7f2bfc105478, count=1,
> offset=108306432, flags=0, iobref=0x7f2c080820d0,
> xdata=0x7f2bfc0bb1c8) at worm.c:429
> #7  0x7f2c1638a55f in ro_writev (frame=frame@entry=0x7f2bfc0d5348,
> this=, fd=fd@entry=0x7f2bfc0bc768, 
> vector=vector@entry=0x7f2bfc105478,
> count=count@entry=1,
> off=off@entry=108306432, flags=0, iobref=0x7f2c080820d0,
> xdata=0x7f2bfc0bb1c8) at read-only-common.c:374
> #8  0x7f2c163705ac in leases_writev (frame=frame@entry=0x7f2bfc0bf148,
> this=0x7f2c0801a230, fd=fd@entry=0x7f2bfc0bc768, 
> vector=vector@entry=0x7f2bfc105478,
> count=count@entry=1,
> off=off@entry=108306432, flags=0, iobref=0x7f2c080820d0,
> xdata=0x7f2bfc0bb1c8) at leases.c:132
> #9  0x7f2c1634f6a8 in up_writev (frame=0x7f2bfc067508,
> this=0x7f2c0801bf00, fd=0x7f2bfc0bc768, vector=0x7f2bfc105478, count=1,
> off=108306432, flags=0, iobref=0x7f2c080820d0, xdata=0x7f2bfc0bb1c8)
> at upcall.c:124
> #10 0x7f2c2913e6c2 in default_writev (frame=0x7f2bfc067508,
> this=, fd=0x7f2bfc0bc768, vector=0x7f2bfc105478, count=1,
> off=108306432, flags=0, iobref=0x7f2c080820d0,
> xdata=0x7f2bfc0bb1c8) at defaults.c:2550
> #11 0x7f2c2913e6c2 in default_writev (frame=frame@entry=0x7f2bfc067508,
> this=, fd=fd@entry=0x7f2bfc0bc768, 
> vector=vector@entry=0x7f2bfc105478,
> count=count@entry=1,
> off=off@entry=108306432, flags=0, iobref=0x7f2c080820d0,
> xdata=0x7f2bfc0bb1c8) at defaults.c:2550
> #12 0x7f2c16315eb7 in marker_writev (frame=frame@entry=0x7f2bfc119e48,
> this=this@entry=0x7f2c08021440, fd=fd@entry=0x7f2bfc0bc768,
> vector=vector@entry=0x7f2bfc105478, count=count@entry=1,
> offset=offset@entry=108306432, flags=0, iobref=0x7f2c080820d0,
> xdata=0x7f2bfc0bb1c8) at marker.c:940
> #13 0x7f2c162fc0ab in barrier_writev (frame=0x7f2bfc119e48,
> this=, fd=0x7f2bfc0bc768, vector=0x7f2bfc105478, count=1,
> off=108306432, flags=0, iobref=0x7f2c080820d0,
> xdata=0x7f2bfc0bb1c8) at barrier.c:248
> #14 0x7f2c2913e6c2 in default_writev (frame=0x7f2bfc119e48,
> this=, fd=0x7f2bfc0bc768, vector=0x7f2bfc105478, count=1,
> off=108306432, flags=0, iobref=0x7f2c080820d0,
> xdata=0x7f2bfc0bb1c8) at defaults.c:2550
> #15 0x7f2c162c5cda in quota_writev (frame=frame@entry=0x7f2bfc119e48,
> this=, fd=fd@entry=0x7f2bfc0bc768, 
> vector=vector@entry=0x7f2bfc105478,
> count=count@entry=1,
> off=off@entry=108306432, flags=0, iobref=0x7f2c080820d0,
> xdata=0x7f2bfc0bb1c8) at quota.c:1947
> #16 0x7f2c16299c89 in io_stats_writev (frame=frame@entry=0x7f2bfc0e4358,
> this=this@entry=0x7f2c08029df0, fd=0x7f2bf

Re: [Gluster-users] Poor performance on a server-class system vs. desktop

2020-11-26 Thread Yaniv Kaul
On Thu, Nov 26, 2020 at 11:44 AM Dmitry Antipov  wrote:

> BTW, did someone try to profile the brick process? I do, and got this
> for the default replica 3 volume ('perf record -F 2500 -g -p [PID]'):
>

I run a slightly different command, which hides the kernel stuff and
focuses on the user mode functions:
sudo perf record --call-graph dwarf -j any --buildid-all --all-user -p
`pgrep -d\, gluster` -F 2000 -ag
Y.
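
Afterwards, one way to browse the recording (a sketch, not the only option):

sudo perf report -i perf.data --no-children --sort comm,dso,symbol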


> +3.29% 0.02%  glfs_epoll001[kernel.kallsyms]  [k]
> entry_SYSCALL_64_after_hwframe
> +3.17% 0.01%  glfs_epoll001[kernel.kallsyms]  [k]
> do_syscall_64
> +3.17% 0.02%  glfs_epoll000[kernel.kallsyms]  [k]
> entry_SYSCALL_64_after_hwframe
> +3.06% 0.02%  glfs_epoll000[kernel.kallsyms]  [k]
> do_syscall_64
> +2.75% 0.01%  glfs_iotwr00f[kernel.kallsyms]  [k]
> entry_SYSCALL_64_after_hwframe
> +2.74% 0.01%  glfs_iotwr00b[kernel.kallsyms]  [k]
> entry_SYSCALL_64_after_hwframe
> +2.74% 0.01%  glfs_iotwr001[kernel.kallsyms]  [k]
> entry_SYSCALL_64_after_hwframe
> +2.73% 0.00%  glfs_iotwr003[kernel.kallsyms]  [k]
> entry_SYSCALL_64_after_hwframe
> +2.72% 0.00%  glfs_iotwr000[kernel.kallsyms]  [k]
> entry_SYSCALL_64_after_hwframe
> +2.72% 0.01%  glfs_iotwr00c[kernel.kallsyms]  [k]
> entry_SYSCALL_64_after_hwframe
> +2.70% 0.01%  glfs_iotwr003[kernel.kallsyms]  [k]
> do_syscall_64
> +2.69% 0.00%  glfs_iotwr001[kernel.kallsyms]  [k]
> do_syscall_64
> +2.69% 0.01%  glfs_iotwr008[kernel.kallsyms]  [k]
> entry_SYSCALL_64_after_hwframe
> +2.68% 0.00%  glfs_iotwr00b[kernel.kallsyms]  [k]
> do_syscall_64
> +2.68% 0.00%  glfs_iotwr00c[kernel.kallsyms]  [k]
> do_syscall_64
> +2.68% 0.00%  glfs_iotwr00f[kernel.kallsyms]  [k]
> do_syscall_64
> +2.68% 0.01%  glfs_iotwr000[kernel.kallsyms]  [k]
> do_syscall_64
> +2.67% 0.00%  glfs_iotwr00a[kernel.kallsyms]  [k]
> entry_SYSCALL_64_after_hwframe
> +2.65% 0.00%  glfs_iotwr008[kernel.kallsyms]  [k]
> do_syscall_64
> +2.64% 0.00%  glfs_iotwr00e[kernel.kallsyms]  [k]
> entry_SYSCALL_64_after_hwframe
> +2.64% 0.01%  glfs_iotwr00d[kernel.kallsyms]  [k]
> entry_SYSCALL_64_after_hwframe
> +2.63% 0.01%  glfs_iotwr00a[kernel.kallsyms]  [k]
> do_syscall_64
> +2.63% 0.01%  glfs_iotwr007[kernel.kallsyms]  [k]
> entry_SYSCALL_64_after_hwframe
> +2.63% 0.00%  glfs_iotwr005[kernel.kallsyms]  [k]
> entry_SYSCALL_64_after_hwframe
> +2.63% 0.01%  glfs_iotwr006[kernel.kallsyms]  [k]
> entry_SYSCALL_64_after_hwframe
> +2.63% 0.00%  glfs_iotwr009[kernel.kallsyms]  [k]
> entry_SYSCALL_64_after_hwframe
> +2.61% 0.01%  glfs_iotwr004[kernel.kallsyms]  [k]
> entry_SYSCALL_64_after_hwframe
> +2.61% 0.01%  glfs_iotwr00e[kernel.kallsyms]  [k]
> do_syscall_64
> +2.60% 0.00%  glfs_iotwr006[kernel.kallsyms]  [k]
> do_syscall_64
> +2.59% 0.00%  glfs_iotwr005[kernel.kallsyms]  [k]
> do_syscall_64
> +2.59% 0.00%  glfs_iotwr00d[kernel.kallsyms]  [k]
> do_syscall_64
> +2.58% 0.00%  glfs_iotwr002[kernel.kallsyms]  [k]
> entry_SYSCALL_64_after_hwframe
> +2.58% 0.01%  glfs_iotwr007[kernel.kallsyms]  [k]
> do_syscall_64
> +2.58% 0.00%  glfs_iotwr004[kernel.kallsyms]  [k]
> do_syscall_64
> +2.57% 0.00%  glfs_iotwr009[kernel.kallsyms]  [k]
> do_syscall_64
> +2.54% 0.00%  glfs_iotwr002[kernel.kallsyms]  [k]
> do_syscall_64
> +1.65% 0.00%  glfs_epoll000[unknown]  [k]
> 0x0001
> +1.65% 0.00%  glfs_epoll001[unknown]  [k]
> 0x0001
> +1.48% 0.01%  glfs_rpcrqhnd[kernel.kallsyms]  [k]
> entry_SYSCALL_64_after_hwframe
> +1.44% 0.08%  glfs_rpcrqhndlibpthread-2.32.so [.]
> pthread_cond_wait@@GLIBC_2.3.2
> +1.40% 0.01%  glfs_rpcrqhnd[kernel.kallsyms]  [k]
> do_syscall_64
> +1.36% 0.01%  glfs_rpcrqhnd[kernel.kallsyms]  [k]
> __x64_sys_futex
> +1.35% 0.03%  glfs_rpcrqhnd[kernel.kallsyms]  [k] do_futex
> +1.34% 0.01%  glfs_iotwr00alibpthread-2.32.so [.]
> __libc_pwrite64
> +1.32% 0.00%  glfs_iotwr00a[kernel.kallsyms]  [k]
> __x64_sys_pwrite64
> +1.32% 0.00%  glfs_iotwr001libpthread-2.32.so [.]
> __libc_pwrite64
> +1.31% 0.01%  glfs_iotwr002libpthread-2.32.so [.]
> __libc_pwrite64
> +1.31% 0.00%  glfs_iotwr00blibpthread-2.32.so [.]
> __libc_pwrite64
> +1.31% 0.01%  glfs_iotwr00a[kernel.kallsyms]  [k] vfs_write
> +1.30% 0.00%  glfs_iotwr001[kernel.kallsyms]  [k]
> __x64_sys_pwrite64
> +1.30% 0.00%  

Re: [Gluster-users] How to find out what GlusterFS is doing

2020-11-05 Thread Yaniv Kaul
On Thu, Nov 5, 2020 at 4:18 PM mabi  wrote:

> Below is the top output of running "top -bHd d" on one of the nodes, maybe
> that can help to see what that glusterfsd process is doing?
>
>   PID USER  PR  NIVIRTRESSHR S %CPU %MEM TIME+ COMMAND
>  4375 root  20   0 2856784 120492   8360 D 61.1  0.4 117:09.29
> glfs_iotwr001
>

Waiting for IO, just like the rest of those in D state.
You may have a slow storage subsystem. How many cores do you have, btw?
Y.
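
A few quick, non-Gluster-specific checks on the affected node can answer the
core-count question and show whether the disks are the bottleneck - for example:

nproc              # number of cores
iostat -x 5 3      # per-device utilization and await, three samples
vmstat 5 3         # overall run queue and IO wait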

 4385 root  20   0 2856784 120492   8360 R 61.1  0.4 117:12.92
> glfs_iotwr003
>  4387 root  20   0 2856784 120492   8360 R 61.1  0.4 117:32.19
> glfs_iotwr005
>  4388 root  20   0 2856784 120492   8360 R 61.1  0.4 117:28.87
> glfs_iotwr006
>  4391 root  20   0 2856784 120492   8360 D 61.1  0.4 117:20.71
> glfs_iotwr008
>  4395 root  20   0 2856784 120492   8360 D 61.1  0.4 117:17.22
> glfs_iotwr009
>  4405 root  20   0 2856784 120492   8360 R 61.1  0.4 117:19.52
> glfs_iotwr00d
>  4406 root  20   0 2856784 120492   8360 R 61.1  0.4 117:29.51
> glfs_iotwr00e
>  4366 root  20   0 2856784 120492   8360 D 55.6  0.4 117:27.58
> glfs_iotwr000
>  4386 root  20   0 2856784 120492   8360 D 55.6  0.4 117:22.77
> glfs_iotwr004
>  4390 root  20   0 2856784 120492   8360 D 55.6  0.4 117:26.49
> glfs_iotwr007
>  4396 root  20   0 2856784 120492   8360 R 55.6  0.4 117:23.68
> glfs_iotwr00a
>  4376 root  20   0 2856784 120492   8360 D 50.0  0.4 117:36.17
> glfs_iotwr002
>  4397 root  20   0 2856784 120492   8360 D 50.0  0.4 117:11.09
> glfs_iotwr00b
>  4403 root  20   0 2856784 120492   8360 R 50.0  0.4 117:26.34
> glfs_iotwr00c
>  4408 root  20   0 2856784 120492   8360 D 50.0  0.4 117:27.47
> glfs_iotwr00f
>  9814 root  20   0 2043684  75208   8424 D 22.2  0.2  50:15.20
> glfs_iotwr003
> 28131 root  20   0 2043684  75208   8424 R 22.2  0.2  50:07.46
> glfs_iotwr004
>  2208 root  20   0 2043684  75208   8424 R 22.2  0.2  49:32.70
> glfs_iotwr008
>  2372 root  20   0 2043684  75208   8424 R 22.2  0.2  49:52.60
> glfs_iotwr009
>  2375 root  20   0 2043684  75208   8424 D 22.2  0.2  49:54.08
> glfs_iotwr00c
>   767 root  39  19   0  0  0 R 16.7  0.0  67:50.83
> dbuf_evict
>  4132 onadmin   20   0   45292   4184   3176 R 16.7  0.0   0:00.04 top
> 28484 root  20   0 2043684  75208   8424 R 11.1  0.2  49:41.34
> glfs_iotwr005
>  2376 root  20   0 2043684  75208   8424 R 11.1  0.2  49:49.49
> glfs_iotwr00d
>  2719 root  20   0 2043684  75208   8424 R 11.1  0.2  49:58.61
> glfs_iotwr00e
>  4384 root  20   0 2856784 120492   8360 S  5.6  0.4   4:01.27
> glfs_rpcrqhnd
>  3842 root  20   0 2043684  75208   8424 S  5.6  0.2   0:30.12
> glfs_epoll001
> 1 root  20   0   57696   7340   5248 S  0.0  0.0   0:03.59 systemd
> 2 root  20   0   0  0  0 S  0.0  0.0   0:09.57 kthreadd
> 3 root  20   0   0  0  0 S  0.0  0.0   0:00.16
> ksoftirqd/0
> 5 root   0 -20   0  0  0 S  0.0  0.0   0:00.00
> kworker/0:0H
> 7 root  20   0   0  0  0 S  0.0  0.0   0:07.36
> rcu_sched
> 8 root  20   0   0  0  0 S  0.0  0.0   0:00.00 rcu_bh
> 9 root  rt   0   0  0  0 S  0.0  0.0   0:00.03
> migration/0
>10 root   0 -20   0  0  0 S  0.0  0.0   0:00.00
> lru-add-drain
>11 root  rt   0   0  0  0 S  0.0  0.0   0:00.01
> watchdog/0
>12 root  20   0   0  0  0 S  0.0  0.0   0:00.00 cpuhp/0
>13 root  20   0   0  0  0 S  0.0  0.0   0:00.00 cpuhp/1
>
> Any clues anyone?
>
> The load is really high around 20 now on the two nodes...
>
>
> ‐‐‐ Original Message ‐‐‐
> On Thursday, November 5, 2020 11:50 AM, mabi  wrote:
>
> > Hello,
> >
> > I have a 3 node replica including arbiter GlusterFS 7.8 server with 3
> volumes and the two nodes (not arbiter) seem to have a high load due to the
> glusterfsd brick process taking all CPU resources (12 cores).
> >
> > Checking these two servers with iostat command shows that the disks are
> not so busy and that they are mostly doing writes activity. On the FUSE
> clients there is not so much activity so I was wondering how to find out or
> explain why GlusterFS is currently generating such a high load on these two
> servers (the arbiter does not show any high load). There are no files
> currently healing either. This volume is the only volume which has the
> quota enabled if this might be a hint. So does anyone know how to see why
> GlusterFS is so busy on a specific volume?
> >
> > Here is a sample "vmstat 60" of one of the nodes:
> >
> > onadmin@gfs1b:~$ vmstat 60
> > procs ---memory-- ---swap-- -io -system--
> --cpu-
> > r b swpd free buff cache si so bi bo in cs us sy id wa st
> > 9 2 0 22296776 32004 260284 0 0 33 301 153 39 2 60 36 2 0
> > 13 0 0 22244540 32048 260456 0 0 343 2798 10898 367652 2 80 

Re: [Gluster-users] problems with heal.

2020-10-15 Thread Yaniv Kaul
On Thu, Oct 15, 2020 at 4:04 PM Alvin Starr  wrote:

> We are running glusterfs-server-3.8.9-1.el7.x86_64
>

This was released >3.5 years ago. Any plans to upgrade?
Y.
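
Regardless of the version, for a few hundred stuck entries it is usually worth
triggering a full heal and checking whether any of them are split-brain - a
sketch using the volume name from the output below:

gluster volume heal SYCLE-PROD-EDOCS full
gluster volume heal SYCLE-PROD-EDOCS info split-brain

Newer releases can also resolve genuine split-brain entries from the CLI (the
"heal ... split-brain latest-mtime/bigger-file/source-brick" family of commands).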

>
> If there is any more info you need I am happy to provide it.
>
>
> gluster v info SYCLE-PROD-EDOCS:
>
> Volume Name: SYCLE-PROD-EDOCS
> Type: Replicate
> Volume ID: ada836a4-1456-4d7a-a00f-934038669127
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 1 x 2 = 2
> Transport-type: tcp
> Bricks:
> Brick1: edocs3:/bricks/sycle-prod/data
> Brick2: edocs4:/bricks/sycle-prod/data
> Options Reconfigured:
> transport.address-family: inet
> performance.readdir-ahead: off
> nfs.disable: on
> client.event-threads: 8
> features.bitrot: on
> features.scrub: Active
> features.scrub-freq: weekly
> features.scrub-throttle: normal
>
>
> gluster v status SYCLE-PROD-EDOCS:
>
> Volume Name: SYCLE-PROD-EDOCS
> Type: Replicate
> Volume ID: ada836a4-1456-4d7a-a00f-934038669127
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 1 x 2 = 2
> Transport-type: tcp
> Bricks:
> Brick1: edocs3:/bricks/sycle-prod/data
> Brick2: edocs4:/bricks/sycle-prod/data
> Options Reconfigured:
> transport.address-family: inet
> performance.readdir-ahead: off
> nfs.disable: on
> client.event-threads: 8
> features.bitrot: on
> features.scrub: Active
> features.scrub-freq: weekly
> features.scrub-throttle: normal
> [root@edocs4 .glusterfs]# cat /tmp/gstatus
> Status of volume: SYCLE-PROD-EDOCS
> Gluster process TCP Port  RDMA Port  Online
> Pid
>
> --
> Brick edocs3:/bricks/sycle-prod/data49160 0  Y
> 51434
> Brick edocs4:/bricks/sycle-prod/data49178 0  Y
> 25053
> Self-heal Daemon on localhost   N/A   N/AY
> 25019
> Bitrot Daemon on localhost  N/A   N/AY
> 25024
> Scrubber Daemon on localhostN/A   N/AY
> 25039
> Self-heal Daemon on edocs3  N/A   N/AY
> 40404
> Bitrot Daemon on edocs3 N/A   N/AY
> 40415
> Scrubber Daemon on edocs3   N/A   N/AY
> 40426
>
> Task Status of Volume SYCLE-PROD-EDOCS
>
> --
> There are no active volume tasks
>
> gluster v heal SYCLE-PROD-EDOCS info:
>
> Brick edocs3:/bricks/sycle-prod/data
> 
> 
> 
> 
> 
> [sniped for brevity ]
> 
> 
> 
> 
> 
> Status: Connected
> Number of entries: 589
>
> Brick edocs4:/bricks/sycle-prod/data
> Status: Connected
> Number of entries: 0
>
>
>
> On 10/15/20 2:29 AM, Ashish Pandey wrote:
>
> It will require much more information than what you have provided to fix
> this issue.
>
> gluster v  info
> gluster v  status
> gluster v  heal info
>
> This is mainly to understand what is the volume type and what is current
> status of bricks.
> Knowing that, we can come up with next set of steps ti debug and fix the
> issue.
>
> Note: Please hide/mask hostname/Ip or any other confidential information
> in above output.
>
> ---
> Ashish
>
> --
> *From: *"Alvin Starr"  
> *To: *"gluster-user" 
> 
> *Sent: *Wednesday, October 14, 2020 10:45:10 PM
> *Subject: *[Gluster-users] problems with heal.
>
> We are running a simple 2-server gluster cluster with a large number of
> small files.
>
> We had a problem where the clients lost connection to one of the servers,
> which forced the system to constantly run self-heal.
> We have since fixed the problem, but now I have about 600 files that will
> not self-heal.
>
> Is there any way to manually correct the problem?
>
> --
> Alvin Starr   ||   land:  (647)478-6285
> Netvel Inc.   ||   Cell:  (416)806-0133
> al...@netvel.net  ||
>
> 
>
>
>
> Community Meeting Calendar:
>
> Schedule -
> Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
> Bridge: https://bluejeans.com/441850968
>
> Gluster-users mailing list
> Gluster-users@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-users
>
>
>
> --
> Alvin Starr   ||   land:  (647)478-6285
> Netvel Inc.   ||   Cell:  (416)806-0133al...@netvel.net   
>||
>
>
> 
>
>
>
> Community Meeting Calendar:
>
> Schedule -
> Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
> Bridge: https://bluejeans.com/441850968
>
> Gluster-users mailing list
> Gluster-users@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-users
>




Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://bluejeans.com/441850968

Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] GlusterFS performance for big files...

2020-08-18 Thread Yaniv Kaul
On Tue, Aug 18, 2020 at 3:57 PM Gilberto Nunes 
wrote:

> Hi friends...
>
> I have a 2-node GlusterFS setup, which has the following configuration:
> gluster vol info
>
> Volume Name: VMS
> Type: Replicate
> Volume ID: a4ec9cfb-1bba-405c-b249-8bd5467e0b91
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 1 x 2 = 2
> Transport-type: tcp
> Bricks:
> Brick1: server02:/DATA/vms
> Brick2: server01:/DATA/vms
> Options Reconfigured:
> performance.read-ahead: off
> performance.io-cache: on
> performance.cache-refresh-timeout: 1
> performance.cache-size: 1073741824
> performance.io-thread-count: 64
> performance.write-behind-window-size: 64MB
> cluster.granular-entry-heal: enable
> cluster.self-heal-daemon: enable
> performance.client-io-threads: on
> cluster.data-self-heal-algorithm: full
> cluster.favorite-child-policy: mtime
> network.ping-timeout: 2
> cluster.quorum-count: 1
> cluster.quorum-reads: false
> cluster.heal-timeout: 20
> storage.fips-mode-rchecksum: on
> transport.address-family: inet
> nfs.disable: on
>
> The disks are SSD and SAS.
> Network connections between the servers are dedicated 1 Gb (no switch!).
>

You can't get good performance on 1Gb.

> The files are 500G, 200G, 200G, 250G, 200G and 100G in size.
>
> Performance so far is OK...
>

What's your workload? Read? Write? sequential? random? many files?
With more bricks and nodes, you should probably use sharding.

What are your expectations, btw?
Y.
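
If these are VM images, sharding is worth a look - a sketch using your volume
name, with the caveat that only files written after the option is enabled get
sharded, so existing images would need to be copied back in (test on a scratch
volume first):

gluster volume set VMS features.shard on
gluster volume set VMS features.shard-block-size 64MB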


> Any other advice which could point me, let me know!
>
> Thanks
>
>
>
> ---
> Gilberto Nunes Ferreira
>
> 
>
>
>
> Community Meeting Calendar:
>
> Schedule -
> Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
> Bridge: https://bluejeans.com/441850968
>
> Gluster-users mailing list
> Gluster-users@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-users
>




Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://bluejeans.com/441850968

Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Gluster linear scale-out performance

2020-07-21 Thread Yaniv Kaul
On Tue, 21 Jul 2020, 21:43 Qing Wang  wrote:

> Hi Yaniv,
>
> Thanks for the quick response. I forgot to mention that I am testing write
> performance, not read. In this case, would the client cache hit
> rate still be a big issue?
>

It's not hitting the storage directly. Since it's also single threaded, it
may also not saturate it. I highly recommend testing properly.
Y.
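
A sketch of a fio run that avoids the client cache and uses several jobs (the
mount point and sizes are taken from your dd example and can be adjusted):

fio --name=seqwrite --directory=/mnt/glusterfs --rw=write --bs=1M \
    --size=4G --numjobs=8 --direct=1 --ioengine=libaio --iodepth=16 \
    --group_reporting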


> I'll use fio to run my test once again, thanks for the suggestion.
>
> Thanks,
> Qing
>
> On Tue, Jul 21, 2020 at 2:38 PM Yaniv Kaul  wrote:
>
>>
>>
>> On Tue, 21 Jul 2020, 21:30 Qing Wang  wrote:
>>
>>> Hi,
>>>
>>> I am trying to test Gluster linear scale-out performance by adding more
>>> storage server/bricks, and measure the storage I/O performance. To vary the
>>> storage server number, I create several "stripe" volumes that contain 2
>>> brick servers, 3 brick servers, 4 brick servers, and so on. On gluster
>>> client side, I used "dd if=/dev/zero of=/mnt/glusterfs/dns_test_data_26g
>>> bs=1M count=26000" to create 26G data (or larger size), and those data will
>>> be distributed to the corresponding gluster servers (each has gluster brick
>>> on it) and "dd" returns the final I/O throughput. The Internet is 40G
>>> infiniband, although I didn't do any specific configurations to use
>>> advanced features.
>>>
>>
>> Your dd command is inaccurate, as it'll hit the client cache. It is also
>> single threaded. I suggest switching to fio.
>> Y.
>>
>>
>>> What confuses me is that the storage I/O seems not to relate to the
>>> number of storage nodes, but Gluster documents said it should be linear
>>> scaling. For example, when "write-behind" is on, and when Infiniband "jumbo
>>> frame" (connected mode) is on, I can get ~800 MB/sec reported by "dd", no
>>> matter I have 2 brick servers or 8 brick servers -- for 2 server case, each
>>> server can have ~400 MB/sec; for 4 server case, each server can have
>>> ~200MB/sec. That said, each server I/O does aggregate to the final storage
>>> I/O (800 MB/sec), but this is not "linear scale-out".
>>>
>>> Can somebody help me to understand why this is the case? I certainly can
>>> have some misunderstanding/misconfiguration here. Please correct me if I
>>> do, thanks!
>>>
>>> Best,
>>> Qing
>>> 
>>>
>>>
>>>
>>> Community Meeting Calendar:
>>>
>>> Schedule -
>>> Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
>>> Bridge: https://bluejeans.com/441850968
>>>
>>> Gluster-users mailing list
>>> Gluster-users@gluster.org
>>> https://lists.gluster.org/mailman/listinfo/gluster-users
>>>
>>




Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://bluejeans.com/441850968

Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Gluster linear scale-out performance

2020-07-21 Thread Yaniv Kaul
On Tue, 21 Jul 2020, 21:30 Qing Wang  wrote:

> Hi,
>
> I am trying to test Gluster linear scale-out performance by adding more
> storage servers/bricks, and measure the storage I/O performance. To vary the
> storage server number, I create several "stripe" volumes that contain 2
> brick servers, 3 brick servers, 4 brick servers, and so on. On the gluster
> client side, I used "dd if=/dev/zero of=/mnt/glusterfs/dns_test_data_26g
> bs=1M count=26000" to create 26G of data (or a larger size), and that data
> will be distributed to the corresponding gluster servers (each has a gluster
> brick on it); "dd" returns the final I/O throughput. The interconnect is 40G
> InfiniBand, although I didn't do any specific configuration to use
> advanced features.
>

Your dd command is inaccurate, as it'll hit the client cache. It is also
single threaded. I suggest switching to fio.
Y.


> What confuses me is that the storage I/O does not seem to relate to the number
> of storage nodes, but the Gluster documentation says it should scale linearly.
> For example, when "write-behind" is on, and when Infiniband "jumbo frame"
> (connected mode) is on, I can get ~800 MB/sec reported by "dd", no matter I
> have 2 brick servers or 8 brick servers -- for 2 server case, each server
> can have ~400 MB/sec; for 4 server case, each server can have ~200MB/sec.
> That said, each server I/O does aggregate to the final storage I/O (800
> MB/sec), but this is not "linear scale-out".
>
> Can somebody help me to understand why this is the case? I certainly can
> have some misunderstanding/misconfiguration here. Please correct me if I
> do, thanks!
>
> Best,
> Qing
> 
>
>
>
> Community Meeting Calendar:
>
> Schedule -
> Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
> Bridge: https://bluejeans.com/441850968
>
> Gluster-users mailing list
> Gluster-users@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-users
>




Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://bluejeans.com/441850968

Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Readdirp (ls -l) Performance Improvement

2020-05-27 Thread Yaniv Kaul
On Wed, May 27, 2020 at 10:35 AM RAFI KC  wrote:

> Hi Felix,
>
> Thanks for your mail. I will test it more to make sure that it doesn't
> break anything. Also, I have added a configuration key for easier switching
> to the older code in case there is any problem. If you can help in
> any way with testing or performance numbers, please let me know.
>

A scratch build allowing the community to test this might be beneficial.
Y.
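
For anyone who wants to compare before/after on their own setup, even a simple
crawl of an already-mounted volume mirrors the "full filesystem crawl" numbers
below (the mount point is a placeholder; dropping caches between runs needs root):

time ls -lR /mnt/glustervol > /dev/null
echo 3 > /proc/sys/vm/drop_caches     # drop client-side page/dentry caches between runs
time ls -lR /mnt/glustervol > /dev/null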

>
> Regards
>
> Rafi KC
> On 27/05/20 12:37 pm, Felix Kölzow wrote:
>
> Dear Rafi,
>
>
> thanks for your effort. I think this is of great interest to many gluster
> users. Thus, I would really encourage you to
> test and further improve this feature. Maybe it would be beneficial to create
> a guideline for which things should be tested
> to make this feature really ready for production use.
>
>
> Thanks in advance.
>
> Felix
> On 27/05/2020 07:56, RAFI KC wrote:
>
> Hi All,
>
> I have been working on a POC to improve readdirp performance. At
> the end of the experiment, the results are promising:
> overall there is a 104% improvement for a full filesystem crawl
> compared to the existing solution. Here are the short test numbers. The
> tests were carried out on a 16*3 setup with 1.5 million dentries (both files
> and directories). The system also contains some empty directories. *In the result,
> the proposed solution is 287% faster than the plain volume and 104% faster
> than the parallel-readdir based solution.*
>
>
> Configuration               Plain volume    Parallel-readdir    Proposed Solution
> FS Crawl Time in Seconds    16497.523       8717.872            4261.401
>
> In short, the basic idea behind the proposal is efficient management of the
> readdir buffer in gluster, along with prefetching the dentries for an
> intelligent switch-over to the next buffer. The detailed problem
> description, design description and results are available in the doc.
> https://docs.google.com/document/d/10z4T5Sd_-wCFrmDrzyQtlWOGLang1_g17wO8VUxSiJ8/edit
>
>
> If anybody can help with testing on different kinds of workloads, I
> would be very happy to assist. If you want to test the patch and run a
> performance test on your setup, I could help with back-porting the patch to
> the version of your choice.
>
>
> https://review.gluster.org/24469
>
> https://review.gluster.org/24470
>
>
> Regards
>
> Rafi KC
>
> 
>
>
>
> Community Meeting Calendar:
>
> Schedule -
> Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
> Bridge: https://bluejeans.com/441850968
>
> Gluster-users mailing 
> listGluster-users@gluster.orghttps://lists.gluster.org/mailman/listinfo/gluster-users
>
>
> 
>
>
>
> Community Meeting Calendar:
>
> Schedule -
> Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
> Bridge: https://bluejeans.com/441850968
>
> Gluster-users mailing 
> listGluster-users@gluster.orghttps://lists.gluster.org/mailman/listinfo/gluster-users
>
> 
>
>
>
> Community Meeting Calendar:
>
> Schedule -
> Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
> Bridge: https://bluejeans.com/441850968
>
> Gluster-users mailing list
> Gluster-users@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-users
>




Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://bluejeans.com/441850968

Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] MTU 9000 question

2020-05-06 Thread Yaniv Kaul
On Wed, May 6, 2020 at 5:10 PM Erik Jacobson  wrote:

> It is inconvenient for us to use MTU 9K for our gluster servers for
> various reasons. We typically have bonded 10G interfaces.
>
> We use distribute/replicate and gluster NFS for compute nodes.
>
> My understanding is the negative to using 1500 MTU is just less
> efficient use of the network. Are there other concerns? We don't
> currently have network saturation problems.
>
> We are trying to make a decision on whether we need to do a bunch of extra
> work to switch to 9K MTU and whether it is worth the benefit.
>
> Does the community have any suggestions?
>

No worries, 1500 MTU is fine.
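If you do revisit 9K later, it's worth a quick end-to-end check that every hop
actually passes jumbo frames before switching (a rough check, assuming Linux
ping over IPv4 - 8972 bytes of payload plus 28 bytes of IP/ICMP headers equals
9000):

    ping -M do -s 8972 <gluster-peer>

If that reports "message too long" or needs fragmentation, something in the
path is still at 1500, and a mixed setup would behave worse than simply staying
at 1500 everywhere.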
Y.

>
> Erik
> 
>
>
>
> Community Meeting Calendar:
>
> Schedule -
> Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
> Bridge: https://bluejeans.com/441850968
>
> Gluster-users mailing list
> Gluster-users@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-users
>
>
>




Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://bluejeans.com/441850968

Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Volume parameters listed more than once

2020-04-20 Thread Yaniv Kaul
On Mon, Apr 20, 2020 at 5:38 PM Dmitry Antipov  wrote:

> # gluster volume info
>
> Volume Name: TEST0
> Type: Distributed-Replicate
> Volume ID: ca63095f-58dd-4ba8-82d6-7149a58c1423
> Status: Created
> Snapshot Count: 0
> Number of Bricks: 3 x 3 = 9
> Transport-type: tcp
> Bricks:
> Brick1: HOST-001:/mnt/SSD-0003
> Brick2: HOST-001:/mnt/SSD-0004
> Brick3: HOST-002:/mnt/SSD-0003
> Brick4: HOST-002:/mnt/SSD-0004
> Brick5: HOST-002:/mnt/SSD-0005
> Brick6: HOST-003:/mnt/SSD-0002
> Brick7: HOST-003:/mnt/SSD-0003
> Brick8: HOST-003:/mnt/SSD-0004
> Brick9: HOST-004:/mnt/SSD-0002
> Options Reconfigured:
> storage.fips-mode-rchecksum: on
> transport.address-family: inet
> nfs.disable: on
> performance.client-io-threads: off
>
> # gluster volume get TEST0 all | grep performance.cache-size
> performance.cache-size  32MB
> performance.cache-size  128MB
>

I suspect these are for different translators that regrettably have the same
name...
performance/io-cache and performance/quick-read.


>
> ???
>
> # gluster volume get TEST0 all | grep features.ctime
> features.ctime  on
> features.ctime  on
>

Same - storage/posix and features/utime translators.

Worth filing an issue about it, as it is indeed somewhat confusing.
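In the meantime, one way to see which translator a value belongs to is to look
at the generated client volfile rather than the flat option list (the exact
path varies by version and distro, so treat this as a rough pointer):

    grep -A3 'type performance/io-cache\|type performance/quick-read' \
        /var/lib/glusterd/vols/TEST0/trusted-TEST0.tcp-fuse.vol

Any option you have explicitly set shows up inside its owning translator block
there; values that aren't listed come from that translator's built-in defaults.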
Y.


> ???
>
> Dmitry
> 
>
>
>
> Community Meeting Calendar:
>
> Schedule -
> Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
> Bridge: https://bluejeans.com/441850968
>
> Gluster-users mailing list
> Gluster-users@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-users
>
>
>




Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://bluejeans.com/441850968

Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Red Hat Bugzilla closed for GlusterFS bugs?

2020-04-03 Thread Yaniv Kaul
We've moved reporting issues to Github - please use
https://github.com/gluster/glusterfs/issues
Y.

On Fri, Apr 3, 2020 at 11:55 AM Dmitry Antipov  wrote:

> Hmm
>
> https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS ==>
>
> Sorry, entering a bug into the product GlusterFS has been disabled.
>
> ???
>
> Dmitry
>
> 
>
>
>
> Community Meeting Calendar:
>
> Schedule -
> Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
> Bridge: https://bluejeans.com/441850968
>
> Gluster-users mailing list
> Gluster-users@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-users
>
>
>




Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://bluejeans.com/441850968

Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Reply: Re: not support so called “structured data”

2020-04-02 Thread Yaniv Kaul
On Thu, Apr 2, 2020 at 9:36 AM sz_cui...@163.com  wrote:

> thank you for your answer.
>
> I don't like the project's documentation. In fact, many open source projects
> do not have a good documentation system.
> For example, the version of the docs does not match the features, and some
> topics in the docs are outdated.
>

Contributions to update the documentation are, of course, welcome.
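The source lives in the gluster/glusterdocs repository, so the path is roughly
(assuming the docs are still plain Markdown files in that repo):

    git clone https://github.com/gluster/glusterdocs.git
    cd glusterdocs
    # edit the relevant .md page, commit, and open a pull request on GitHub

Even small fixes for outdated or mismatched pages are useful.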
Y.


> At this point, commercial software does much better.
>
> --
> sz_cui...@163.com
>
>
> *From:* Strahil Nikolov 
> *Sent:* 2020-04-02 12:27
> *To:* gluster-users ; sz_cui...@163.com
> *Subject:* Re: [Gluster-users] not support so called “structured data”
> On April 2, 2020 5:24:39 AM GMT+03:00, "sz_cui...@163.com" <
> sz_cui...@163.com> wrote:
> >Document point out:
> >Gluster does not support so called “structured data”, meaning live, SQL
> >databases. Of course, using Gluster to backup and restore the database
> >would be fine.
> >
> >What? Not,support!
> >I had a test to run Oracle database on KVM/Ovirt/Gluster,it works
> >well,in fact.
> >
> >But why docs says not support ? It measn not suggest or not to use ?
> >
> >
> >
> >
> >
> >sz_cui...@163.com
>
> I don't know why this is written, but when I checked this doc:
>
> https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.1/html/console_installation_guide/add_database_server_to_rhgs-c
>
>
> it seems a pretty legit workload (no matter whether it's postgres, mysql,
> mariadb, oracle, hana, etc.).
>
> The only thing that comes to my mind is that usually DBs are quite
> valuable and thus a 'replica 3' volume or a 'replica 3 arbiter 1' volume
> should be used, and a different set of options is needed (compared to
> other workloads).
>
> Best Regards,
> Strahil Nikolov
>
> 
>
>
>
> Community Meeting Calendar:
>
> Schedule -
> Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
> Bridge: https://bluejeans.com/441850968
>
> Gluster-users mailing list
> Gluster-users@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-users
>




Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://bluejeans.com/441850968

Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] just discovered that OpenShift/OKD dropped GlusterFS storage support...

2020-03-18 Thread Yaniv Kaul
On Wed, Mar 18, 2020 at 6:50 PM Arman Khalatyan  wrote:

> Hello everybody,
> any reason why  OpenShift/OKD dropped GlusterFS storage support?
> now in the documentation
>
> https://docs.openshift.com/container-platform/4.2/migration/migrating_3_4/planning-migration-3-to-4.html#migration-considerations
>
>  is only:
> UNSUPPORTED PERSISTENT STORAGE OPTIONS
> Support for the following persistent storage options from OpenShift
> Container Platform 3.11 has changed in OpenShift Container Platform 4.2:
>
> GlusterFS is no longer supported.
>
> CephFS is no longer supported.
>
> Ceph RBD is no longer supported.
>
> iSCSI is now Technology Preview.
>
> migrations to 4.x are becoming really painful
>

Migration should not be that difficult, see [1].
Ceph can be used via Rook and Ceph-CSI.
Y.

[1] https://blog.openshift.com/migrating-your-applications-to-openshift-4/

>
> thanks,
> Arman.
> 
>
>
>
> Community Meeting Calendar:
>
> Schedule -
> Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
> Bridge: https://bluejeans.com/441850968
>
> Gluster-users mailing list
> Gluster-users@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-users
>




Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://bluejeans.com/441850968

Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] No gluster NFS server on localhost

2020-01-06 Thread Yaniv Kaul
On Mon, Jan 6, 2020 at 2:27 PM Xie Changlong  wrote:

> Hi Birgit
>
>
>  Gnfs is not built in glusterfs 7.0 by default; you can build the
> source code with: ./autogen.sh; ./configure --enable-gnfs
>
> to enable it.
>

I'm not sure that explains it.
Gluster 7 was released in November[1].
My patch for this [2] was merged only to master, in late December.
It was never backported to 7.x.
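(As a quick sanity check on any particular build or package: gnfs is a separate
xlator, so its shared object has to be present for "nfs.disable off" to do
anything. The exact path varies by distro and version - this is just the common
RPM layout:

    ls /usr/lib64/glusterfs/7.0/xlator/nfs/server.so

If that file is missing, the build simply does not contain gnfs, and setting the
volume option alone will not start an NFS server.)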
Y.

[1] https://www.gluster.org/announcing-gluster-7-0/
[2] https://review.gluster.org/#/c/glusterfs/+/23799/


>
> Thanks
>
>  Xie
>
> On 2020/1/6 19:28, DUCARROZ Birgit wrote:
> > Hi all,
> >
> > I installed glusterfs 7.0 and wanted to ask if gluster NFS is still
> > available or if it is deprecated.
> >
> > I did several tests with NFS ganesha, but the results were not as
> > expected.
> >
> > On my server:
> >
> > Firewall is turned off to test.
> > There is actually no nfs logfile in /var/log/glusterfs
> > and no other NFS service installed.
> >
> > netstat -anp | grep 2049 --> no opened port.
> > gluster volume set vol-users nfs.disable off --> volume set: success
> >
> >
> > service rpcbind status
> > ● rpcbind.service - RPC bind portmap service
> >Loaded: loaded (/lib/systemd/system/rpcbind.service; indirect;
> > vendor preset: enabled)
> >   Drop-In: /run/systemd/generator/rpcbind.service.d
> >└─50-rpcbind-$portmap.conf
> >Active: active (running) since Fri 2020-01-03 17:36:31 CET; 2 days ago
> >  Main PID: 18849 (rpcbind)
> > Tasks: 1
> >Memory: 116.0K
> >   CPU: 281ms
> >CGroup: /system.slice/rpcbind.service
> >└─18849 /sbin/rpcbind -f -w
> >
> >
> > netstat -anp | grep rpc
> > tcp0  0 0.0.0.0:111 0.0.0.0:* LISTEN
> > 18849/rpcbind
> > tcp6   0  0 :::111  :::* LISTEN 18849/rpcbind
> > udp0  0 0.0.0.0:111 0.0.0.0:* 18849/rpcbind
> > udp0  0 0.0.0.0:793 0.0.0.0:* 18849/rpcbind
> > udp6   0  0 :::111  :::* 18849/rpcbind
> > udp6   0  0 :::793  :::* 18849/rpcbind
> > unix  2  [ ACC ] STREAM LISTENING 140251/init
> >   /run/rpcbind.sock
> > unix  3  [ ] STREAM CONNECTED 11087142 18849/rpcbind
> >
> >
> > Why is gluster NFS not running?
> > Thank you for any suggestions.
> > Kind regards,
> > Birgit
> > 
> >
> > Community Meeting Calendar:
> >
> > APAC Schedule -
> > Every 2nd and 4th Tuesday at 11:30 AM IST
> > Bridge: https://bluejeans.com/441850968
> >
> > NA/EMEA Schedule -
> > Every 1st and 3rd Tuesday at 01:00 PM EDT
> > Bridge: https://bluejeans.com/441850968
> >
> > Gluster-users mailing list
> > Gluster-users@gluster.org
> > https://lists.gluster.org/mailman/listinfo/gluster-users
>
> 
>
> Community Meeting Calendar:
>
> APAC Schedule -
> Every 2nd and 4th Tuesday at 11:30 AM IST
> Bridge: https://bluejeans.com/441850968
>
> NA/EMEA Schedule -
> Every 1st and 3rd Tuesday at 01:00 PM EDT
> Bridge: https://bluejeans.com/441850968
>
> Gluster-users mailing list
> Gluster-users@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-users
>


Community Meeting Calendar:

APAC Schedule -
Every 2nd and 4th Tuesday at 11:30 AM IST
Bridge: https://bluejeans.com/441850968

NA/EMEA Schedule -
Every 1st and 3rd Tuesday at 01:00 PM EDT
Bridge: https://bluejeans.com/441850968

Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] [Gluster-devel] Modifying gluster's logging mechanism

2019-11-22 Thread Yaniv Kaul
On Fri, Nov 22, 2019 at 11:45 AM Barak Sason Rofman 
wrote:

> Thank you for your input Atin and Xie Changlong.
>
> Regarding log ordering - my initial thought was to do it offline using a
> dedicated tool. It should be straightforward, as the logs have timestamps
> composed of seconds and microseconds, so ordering them using this value is
> definitely possible.
> This is actually one of the main reasons I wanted to bring this up for
> discussion - will it be fine with the community to run a dedicated tool to
> reorder the logs offline?
> Reordering the logs offline will allow us to gain the most performance
> improvement, as keeping the logs ordered online will have some cost (probably
> through stronger synchronization).
> Moreover, we can take log viewing one step further and maybe create some
> GUI system (JAVA based?) to view and handle logs (e.g. one window to show
> the combined, ordered logs, other windows to show logs per thread, etc.).
>

If you need an external tool (please not Java - let's not add another
language to the project), you might as well move to binary logging.
I believe we need to be able to do this sorting using Linux command-line tools
only.
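Since every gluster log line already starts with a bracketed timestamp, a plain
merge of per-thread (or per-file) logs should get most of the way there - a
sketch, assuming the new framework keeps the usual "[YYYY-MM-DD HH:MM:SS.usec]"
prefix, and where "thread-*.log" stands in for whatever per-thread naming gets
chosen:

    sort --merge --stable -k1,2 /var/log/glusterfs/thread-*.log > combined.log

i.e. nothing fancier than sort(1), as long as each line begins with a lexically
sortable timestamp.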

>
> Regarding the test method - my initial testing was done by removing all
> logging from regression. Modifying the method "skip_logging" to return
> 'true' in all cases seems to remove most of the logs (though not all, "to
> be on the safe side", really removing all logging related methods is
> probably even better).
>

This is not a fair comparison:
1. The regression tests are running with debug log
2. Not logging at all != replacing the logging framework - the new one will
have its own overhead as well.
3. I believe there's a lot of code that is not real life scenarios there -
such as dumping a lot of data there (which causes a lot of calls to
inode_find() and friends - for example).
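(For a more representative run it would also help to pin the log levels
explicitly on the test volume rather than compiling the logging calls out -
something like:

    gluster volume set <vol> diagnostics.client-log-level INFO
    gluster volume set <vol> diagnostics.brick-log-level INFO

so that the old and the new framework are measured at the same verbosity.
<vol> is a placeholder for the test volume name.)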
Y.

As regression tests use mostly single-node tests, some additional testing
> was needed. I've written a couple of very basic scripts to create large
> number of files / big files, read / write to / from them, move them around
> and perform some other basic functionality.
> I'd actually be glad to test this in some 'real world' use cases - if you
> have specific use cases that you use frequently, we can model them and
> benchmark against - this will likely offer an even more accurate benchmark.
>
> On Fri, Nov 22, 2019 at 7:27 AM Xie Changlong  wrote:
>
> >
> > On 2019/11/21 21:04, Barak Sason Rofman wrote:
> >
> > I see two design / implementation problems with that mechanism:
> >
> >1.
> >
> >The mutex that guards the log file is likely under constant contention


Community Meeting Calendar:

APAC Schedule -
Every 2nd and 4th Tuesday at 11:30 AM IST
Bridge: https://bluejeans.com/441850968

NA/EMEA Schedule -
Every 1st and 3rd Tuesday at 01:00 PM EDT
Bridge: https://bluejeans.com/441850968

Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] [Gluster-devel] [Gluster-Maintainers] Proposal to change gNFSstatus

2019-11-21 Thread Yaniv Kaul
On Fri, 22 Nov 2019, 5:03 Xie Changlong  wrote:

>
> On 2019/11/22 5:14, Kaleb Keithley wrote:
>
> I personally wouldn't call three years ago — when we started to deprecate
> it, in glusterfs-3.9 — a recent change.
>
> As a community the decision was made to move to NFS-Ganesha as the
> preferred NFS solution, but it was agreed to keep the old code in the tree
> for those who wanted it. There have been plans to drop it from the
> community packages for most of those three years, but we didn't follow
> through across the board until fairly recently. Perhaps the most telling
> piece of data is that it's been gone from the packages in the CentOS
> Storage SIG in glusterfs-4.0, -4.1, -5, -6, and -7 with no complaints ever,
> that I can recall.
>
> Ganesha is a preferable solution because it supports NFSv4, NFSv4.1,
> NFSv4.2, and pNFS, in addition to legacy NFSv3. More importantly, it is
> actively developed, maintained, and supported, both in the community and
> commercially. There are several vendors selling it, or support for it; and
> there are community packages for it for all the same distributions that
> Gluster packages are available for.
>
> Out in the world, the default these days is NFSv4. Specifically v4.2 or
> v4.1 depending on how recent your linux kernel is. In the linux kernel,
> client mounts start negotiating for v4.2 and work down to v4.1, v4.0, and
> only as a last resort v3. NFSv3 client support in the linux kernel largely
> exists at this point only because of the large number of legacy servers
> still running that can't do anything higher than v3. The linux NFS
> developers would drop the v3 support in a heartbeat if they could.
>
> IMO, providing it, and calling it maintained, only encourages people to
> keep using a dead end solution. Anyone in favor of bringing back NFSv2,
> SSHv1, or X10R4? No? I didn't think so.
>
> The recent issue[1] where someone built gnfs in glusterfs-7.0 on CentOS7
> strongly suggests to me that gnfs is not actually working well. Three years
> of no maintenance seems to have taken its toll.
>
> Other people are more than welcome to build their own packages from the
> src.rpms and/or tarballs that are available from gluster — and support
> them. It's still in the source and there are no plans to remove it. (Unlike
> most of the other deprecated features which were recently removed in
> glusterfs-7.)
>
>
>
> [1] https://github.com/gluster/glusterfs/issues/764
>
>
> It seems https://bugzilla.redhat.com/show_bug.cgi?id=1727248 has resolved
> this issue.
>
> Here I'll talk about something from a commercial company's point of view.
> For security reasons most government procurement projects only allow
> universal storage protocols (nfs, cifs etc.), which means fuse will be
> excluded.  Considering performance requirements, the only option is nfs.
>

I don't see how NFSv3 is more secure than newer NFS versions.

> NFSv4 is a stateful protocol, but I see no performance improvement. Trust me,
> nfs-ganesha (v3, v4) shows ~30% performance degradation versus gnfs for
> either small or big file r/w in practice.  Further, many customers prefer
> the nfs client over cifs on windows, because of the poor cifs performance;
> AFAIK nfs-ganesha does not play well with the windows nfs client.
>

Interesting - we've seen far better performance with Ganesha v4.1 vs. gnfs.
Would be great if you could share the details.
Same for NFS Ganesha and Windows support.
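(For the numbers to be comparable it helps to pin the protocol version
explicitly and run the same job against each mount - something along these
lines, with server, volume and mount points as placeholders:

    mount -t nfs -o vers=3 server:/vol /mnt/gnfs       # gnfs export
    mount -t nfs -o vers=4.1 server:/vol /mnt/ganesha  # NFS-Ganesha export

together with the volume options and the Ganesha export block in use -
otherwise it's hard to tell where the 30% goes.)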

It's difficult to counter without referring to specific issues. It's
even harder to fix them ;-)


> Gnfs is stable enough: we have about ~1000 servers, 4~24 servers per
> gluster cluster, and about ~2000 nfs clients; everything has worked fine over
> the last two years except for some memleak issues.
>

Nice! Would be great for the Gluster community to learn more about the use
case!
Y.

> Thanks
>
> -Xie
>
> On Thu, Nov 21, 2019 at 5:31 AM Amar Tumballi  wrote:
>
>> Hi All,
>>
>> As per the discussion on https://review.gluster.org/23645, recently we
>> changed the status of gNFS (gluster's native NFSv3 support) feature to
>> 'Deprecated / Orphan' state. (ref:
>> https://github.com/gluster/glusterfs/blob/master/MAINTAINERS#L185..L189).
>> With this email, I am proposing to change the status again to 'Odd Fixes'
>> (ref: https://github.com/gluster/glusterfs/blob/master/MAINTAINERS#L22)
>>
>> TL;DR;
>>
>> I understand the current maintainers are not able to focus on maintaining
>> it as the focus of the project, as earlier described, is keeping
>> NFS-Ganesha based integration with glusterfs. But, I am volunteering along
>> with Xie Changlong (currently working at Chinamobile), to keep the feature
>> running as it used to in previous versions. Hence the status of 'Odd
>> Fixes'.
>>
>> Before sending the patch to make these changes, I am proposing it here
>> now, as gNFS is not even shipped with latest glusterfs-7.0 releases. I have
>> heard from some users that it was working great for them with earlier
>> releases, as all they wanted was NFS v3 support, and not much of 

Re: [Gluster-users] [Gluster-devel] Proposal to change gNFS status

2019-11-21 Thread Yaniv Kaul
On Thu, Nov 21, 2019 at 12:31 PM Amar Tumballi  wrote:

> Hi All,
>
> As per the discussion on https://review.gluster.org/23645, recently we
> changed the status of gNFS (gluster's native NFSv3 support) feature to
> 'Deprecated / Orphan' state. (ref:
> https://github.com/gluster/glusterfs/blob/master/MAINTAINERS#L185..L189).
> With this email, I am proposing to change the status again to 'Odd Fixes'
> (ref: https://github.com/gluster/glusterfs/blob/master/MAINTAINERS#L22)
>
> TL;DR;
>
> I understand the current maintainers are not able to focus on maintaining
> it as the focus of the project, as earlier described, is keeping
> NFS-Ganesha based integration with glusterfs. But, I am volunteering along
> with Xie Changlong (currently working at Chinamobile), to keep the feature
> running as it used to in previous versions. Hence the status of 'Odd
> Fixes'.
>

As long as there are volunteers to keep maintaining it, I think it's a
great idea to have it.
I personally do not see the value, as I believe it's somewhat of a
redundant effort in several ways (having more than one implementation does not
help with stabilization, it doesn't help improve things from a feature
perspective - pNFS, NFS v4.2, IPv6, performance... - and it 'costs' more in
CI and testing), but indeed if there's interest from both the users and the
development community - then sure!


> Before sending the patch to make these changes, I am proposing it here
> now, as gNFS is not even shipped with latest glusterfs-7.0 releases. I have
> heard from some users that it was working great for them with earlier
> releases, as all they wanted was NFS v3 support, and not much of features
> from gNFS. Also note that, even though the packages are not built, none of
> the regression tests using gNFS are stopped with latest master, so it is
> working same from at least last 2 years.
>
> I request the package maintainers to please add '--with gnfs' (or
> --enable-gnfs) back to their release script through this email, so those
> users wanting to use gNFS happily can continue to use it. Also points to
> users/admins is that, the status is 'Odd Fixes', so don't expect any
> 'enhancements' on the features provided by gNFS.
>

'Odd fixes' sounds odd to me. Better to come up with better terminology. ;-)
Y.

>
> Happy to hear feedback, if any.
>
> Regards,
> Amar
>
> ___
>
> Community Meeting Calendar:
>
> APAC Schedule -
> Every 2nd and 4th Tuesday at 11:30 AM IST
> Bridge: https://bluejeans.com/441850968
>
>
> NA/EMEA Schedule -
> Every 1st and 3rd Tuesday at 01:00 PM EDT
> Bridge: https://bluejeans.com/441850968
>
> Gluster-devel mailing list
> gluster-de...@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-devel
>
>


Community Meeting Calendar:

APAC Schedule -
Every 2nd and 4th Tuesday at 11:30 AM IST
Bridge: https://bluejeans.com/118564314

NA/EMEA Schedule -
Every 1st and 3rd Tuesday at 01:00 PM EDT
Bridge: https://bluejeans.com/118564314

Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] [Gluster-devel] Prioritise local bricks for IO?

2019-04-02 Thread Yaniv Kaul
On Tue, Apr 2, 2019 at 6:37 PM Nux!  wrote:

> Ok, cool, thanks. So.. no go.
>
> Any other ideas on how to accomplish the task, then?
>

While not a solution, I believe
https://review.gluster.org/#/c/glusterfs/+/21333/ - read selection based on
latency - is an interesting path towards this.
(Of course, you'd later also need to add write support...)
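If you do want to try NUFA in the meantime - with the 'not actively supported'
caveat above firmly in mind - it has historically been a plain volume toggle,
roughly:

    gluster volume set <volname> cluster.nufa on

but please verify the exact key and value against the nufa.md document Vlad
linked for the version you run; this code path hasn't seen work in a long time,
so treat it as an experiment.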
Y.


> --
> Sent from the Delta quadrant using Borg technology!
>
> Nux!
> www.nux.ro
>
> - Original Message -
> > From: "Nithya Balachandran" 
> > To: "Poornima Gurusiddaiah" 
> > Cc: "Nux!" , "gluster-users" ,
> "Gluster Devel" 
> > Sent: Thursday, 28 March, 2019 09:38:16
> > Subject: Re: [Gluster-users] Prioritise local bricks for IO?
>
> > On Wed, 27 Mar 2019 at 20:27, Poornima Gurusiddaiah  >
> > wrote:
> >
> >> This feature is not under active development as it was not used widely.
> >> AFAIK it's not a supported feature.
> >> +Nithya +Raghavendra for further clarifications.
> >>
> >
> > This is not actively supported  - there has been no work done on this
> > feature for a long time.
> >
> > Regards,
> > Nithya
> >
> >>
> >> Regards,
> >> Poornima
> >>
> >> On Wed, Mar 27, 2019 at 12:33 PM Lucian  wrote:
> >>
> >>> Oh, that's just what the doctor ordered!
> >>> Hope it works, thanks
> >>>
> >>> On 27 March 2019 03:15:57 GMT, Vlad Kopylov 
> wrote:
> 
>  I don't remember if it still in works
>  NUFA
> 
> 
> https://github.com/gluster/glusterfs-specs/blob/master/done/Features/nufa.md
> 
>  v
> 
>  On Tue, Mar 26, 2019 at 7:27 AM Nux!  wrote:
> 
> > Hello,
> >
> > I'm trying to set up a distributed backup storage (no replicas), but
> > I'd like to prioritise the local bricks for any IO done on the volume.
> > This will be a backup store, so in other words, I'd like the files to be
> > written locally if there is space, so as to save the NICs for other
> > traffic.
> >
> > Anyone knows how this might be achievable, if at all?
> >
> > --
> > Sent from the Delta quadrant using Borg technology!
> >
> > Nux!
> > www.nux.ro
> > ___
> > Gluster-users mailing list
> > Gluster-users@gluster.org
> > https://lists.gluster.org/mailman/listinfo/gluster-users
> >
> 
> >>> --
> >>> Sent from my Android device with K-9 Mail. Please excuse my brevity.
> >>> ___
> >>> Gluster-users mailing list
> >>> Gluster-users@gluster.org
> >>> https://lists.gluster.org/mailman/listinfo/gluster-users
> >>
> >> ___
> >> Gluster-users mailing list
> >> Gluster-users@gluster.org
> > > https://lists.gluster.org/mailman/listinfo/gluster-users
> ___
> Gluster-devel mailing list
> gluster-de...@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-devel
>
___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] [Gluster-devel] Maintainer meeting minutes : 1st Oct, 2018

2018-10-03 Thread Yaniv Kaul
On Tue, Oct 2, 2018, 8:21 AM Amar Tumballi  wrote:

> BJ Link
>
>- Bridge: https://bluejeans.com/217609845
>- Watch: https://bluejeans.com/s/eNNfZ
>
> Attendance
>
>- Jonathan (loadtheacc), Vijay Baskar, Amar Tumballi, Deepshikha,
>Raghavendra Bhat, Shyam, Kaleb, Akarsha Rai,
>
> Agenda
>
>-
>
>AI from previous week
>- Status check on bi-weekly Bugzilla/Github issues tracker ? Any
>   progress?
>   - Glusto Status and update?
>-
>
>[added by sankarshan; possibly requires Glusto and Glusto-test
>maintainers] Status of Test Automation
>-
>   - Glusto and tests updates:
>- Glusto focus in Py3 port, alpha/beta branch for py3 possibly by next
>   week, around 75% complete (ball park)
>  - client and server need to be the same version of py (2 or 3)
>  - Documentation and minor tweaks/polishing work
>   - New feature in py3 would be the log format, would be configurable
>   - Couple of issues:
>  - scandir needing gcc, fixed upstream for this issue
>  - cartplex
>  
> 
>   - Glusto tests:
>  - Priority is fixing the test cases, that are failing to get
>  Glusto passing upstream
>  - When running the tests as a suite, some tests are failing due
>  to cleanup not correct in all test cases
>  - Working on the above on a per component basis
>  - Next up is py2->3, currently blocked on Glusto for some parts,
>  certain other parts like IO modules are ported to py3
>  - Post Glusto, porting of tests would take 2 months
>  - Next: GD2 libraries, target Dec. to complete the work
>  - Testing Glusto against release-5:
> - Can we run against RC0 for release-5?
> - Requires a setup, can we use the same setup used by the
> Glusto-tests team?
>
I'd expect it to move to an Ansible-based setup.
Y.


>-
>   -
>  -
> - AI: Glusto-tests team to run against release-5 and help
> provide results to the lists
>  - Some components are being deprecated, so prioritization of
>  correcting tests per component can leverage the same
> - AI: Vijay to sync with Amar on the list
>  - Can we port Glusto and tests in parallel?
> - Interface to Gluto remains the same, and hence the port can
> start in parallel, but cannot run till Glusto port is ready, but 
> first cut
> Glusto should be available in week, so work can start.
>  -
>
>Release 5:
>- Py3 patches need to be accomodated
>  - Needs backports from master
>   - noatime needs to be merged
>   - gbench initial runs complete (using CentOS packages)
>   - glusto, upgrade tests, release notes pending
>   - Regression nightly issue handled on Friday, need to check health
>   - Mid-Oct release looks feasible as of now, with an RC1 by end of
>   this week or Monday next week
>-
>
>GD2 release:
>- Can there be a production release, which just supports volume and
>   peer life-cycle, but no features?
>   - This may get us more users trying out, and giving feedback.
>   - Not every users are using all the glusterd features like geo-rep,
>   snapshot or quota.
>   - Can we take it 1 step at a time, and treat it as a new project,
>   and not a replacement ?
>   - Ref: gluster-user email on gd2 status
>   
> 
>   .
>   - AI: Take the discussion to mailing list
>   - [Vijay] If we make it as separate releases, will it impact
>   release of GCS?
>  - [Amar] I don’t think so, it would be more milestones for the
>  project, instead of just 1 big goal.
>   -
>
>Distributed-regression testing:
>- A few of the tests are taking more time than expected.
>   - One of them, tests/bugs/index/bug-1559004-EMLINK-handling.t, is
>   taking around 14 mins, which adds to the overall time of running the test
>   suite (not just in the distributed environment but also in
>   centos7-regression).
>   - Author of test or maintainer needs to look at it.
>-
>
>New Peer/Maintainers proposals ?
>-
>
>Round Table
>- [Kaleb] CVE in golang packages, so we need to update the bundle of
>   GD2.
>
>
> --
> Amar Tumballi (amarts)
> ___
> Gluster-devel mailing list
> gluster-de...@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-devel
___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users