Re: [Gluster-devel] [Gluster-Maintainers] Release 11: Revisiting our proposed timeline and features

2022-10-17 Thread Yaniv Kaul
On Mon, Oct 17, 2022 at 8:41 AM Xavi Hernandez wrote: > On Mon, Oct 17, 2022 at 4:03 AM Amar Tumballi wrote: > >> Here is my honest take on this one. >> >> On Tue, Oct 11, 2022 at 3:06 PM Shwetha Acharya >> wrote: >> >>> It is time to evaluate the fulfillment of our committed >>>

Re: [Gluster-devel] New logging interface

2022-03-24 Thread Yaniv Kaul
On Thu, 24 Mar 2022, 22:16 Xavi Hernandez wrote: > Hi Strahil, > > On Thu, Mar 24, 2022 at 8:26 PM Strahil Nikolov > wrote: > >> Hey Xavi, >> >> Did anyone measure performance behavior before and after the changes? >> > > I haven't tested performance for this change, but I don't expect any >

Re: [Gluster-devel] [Gluster-users] Announcing Gluster release 10.1

2022-02-15 Thread Yaniv Kaul
Responding to the original ask, but in a different way - we have been experimenting a bit with building Gluster via Containers. We have a main branch built for EL7 @ https://github.com/gluster/Gluster-Builds/actions/runs/1847522621 - can someone give it a spin, give some feedback if it's useful,

Re: [Gluster-devel] [Gluster-users] Announcing Gluster release 10.1

2022-02-13 Thread Yaniv Kaul
ity of RH or even the CentOS community. > > On Sun, Feb 13, 2022 at 12:05 PM Yaniv Kaul wrote: > >> >> >> On Sun, 13 Feb 2022, 10:40 Strahil Nikolov wrote: >> >>> Not really. >>> Debian take care for packaging on Debian, Ubuntu for their d

Re: [Gluster-devel] [Gluster-users] Announcing Gluster release 10.1

2022-02-13 Thread Yaniv Kaul
On Sun, 13 Feb 2022, 10:40 Strahil Nikolov wrote: > Not really. > Debian take care for packaging on Debian, Ubuntu for their debs, OpenSuSE > for their rpms, CentOS is part of RedHat and they can decide for it. > CentOS is not part of Red Hat - we support it, as other companies, organizations

Re: [Gluster-devel] [Gluster-users] Announcing Gluster release 10.1

2022-02-13 Thread Yaniv Kaul
sues/2979 > > /Z > > On Sun, Feb 13, 2022 at 10:08 AM Yaniv Kaul wrote: > >> >> >> On Sun, Feb 13, 2022 at 9:58 AM Zakhar Kirpichenko >> wrote: >> >>> > Maintenance updates != new feature releases (and never has). >>> >>> Th

Re: [Gluster-devel] [Gluster-users] Announcing Gluster release 10.1

2022-02-13 Thread Yaniv Kaul
On Sun, Feb 13, 2022 at 9:58 AM Zakhar Kirpichenko wrote: > > Maintenance updates != new feature releases (and never has). > > Thanks for this, but what's your point exactly? Feature updates for CentOS > 7 ended in August 2020, 1.5 years ago. This did not affect the release of > 8.x updates, or

Re: [Gluster-devel] Problem during reproducing smallfile experiment on Gluster 10

2022-01-20 Thread Yaniv Kaul
On Thu, Jan 20, 2022 at 8:54 AM 박현승 wrote: > Dear Gluster developers, > > > > This is Hyunseung Park at Gluesys, South Korea. > > > > We are trying to replicate the test in > https://github.com/gluster/glusterfs/issues/2771 but to no avail. > > In our experiments, Gluster version 10

Re: [Gluster-devel] [erik.jacob...@hpe.com: [Gluster-users] gluster forcing IPV6 on our IPV4 servers, glusterd fails (was gluster update question regarding new DNS resolution requirement)]

2021-09-21 Thread Yaniv Kaul
Perhaps part of the problem is that if we DISABLE the IPv6 support at ./configure time, I don't expect to see lines such as (from the af_inet_client_get_remote_sockaddr() function): if (inet_pton(AF_INET6, remote_host, )) { sockaddr->sa_family = AF_INET6; } /* TODO:
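The third argument of inet_pton() appears to have been stripped from this archived snippet. Below is a minimal, self-contained sketch (plain C, not the actual GlusterFS code) of how a configure-time switch could gate the AF_INET6 probe so that an IPv4-only build never selects AF_INET6; the HAVE_IPV6 macro and the helper name are assumptions, not real project symbols:

    #include <arpa/inet.h>
    #include <sys/socket.h>

    /* Sketch only: pick the address family for a literal host string.
     * HAVE_IPV6 is a hypothetical macro assumed to be defined by
     * ./configure only when IPv6 support is enabled. */
    int
    pick_family(const char *remote_host, struct sockaddr_storage *ss)
    {
    #ifdef HAVE_IPV6
        struct in6_addr a6;

        if (inet_pton(AF_INET6, remote_host, &a6) == 1) {
            ss->ss_family = AF_INET6;
            return 0;
        }
    #endif
        struct in_addr a4;

        if (inet_pton(AF_INET, remote_host, &a4) == 1) {
            ss->ss_family = AF_INET;
            return 0;
        }
        return -1; /* not a literal address; caller falls back to DNS */
    }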

[Gluster-devel] Fwd: New Defects reported by Coverity Scan for gluster/glusterfs

2021-04-28 Thread Yaniv Kaul
2 new Coverity issues after yesterday's merge. Y. -- Forwarded message - From: Date: Wed, 28 Apr 2021, 8:57 Subject: New Defects reported by Coverity Scan for gluster/glusterfs To: Hi, Please find the latest report on new defect(s) introduced to gluster/glusterfs found with

Re: [Gluster-devel] Automatic clang-format for GitHub PRs

2021-02-11 Thread Yaniv Kaul
On Thu, Feb 11, 2021 at 5:54 PM Amar Tumballi wrote: > > > On Thu, 11 Feb, 2021, 9:19 pm Xavi Hernandez, > wrote: > >> On Wed, Feb 10, 2021 at 1:33 PM Amar Tumballi wrote: >> >>> >>> >>> On Wed, Feb 10, 2021 at 3:29 PM Xavi Hernandez >>> wrote: >>> Hi all, I'm wondering if

Re: [Gluster-devel] Update on georep failure

2021-02-02 Thread Yaniv Kaul
On Tue, Feb 2, 2021 at 8:14 PM Michael Scherer wrote: > Hi, > > so we finally found the cause of the georep failure, after several days > of work from Deepshika and I. > > Short story: > > > side effect of adding libtirpc-devel on EL 7: >

Re: [Gluster-devel] NFS Ganesha fails to export a volume

2020-11-16 Thread Yaniv Kaul
On Mon, Nov 16, 2020 at 10:26 AM Ravishankar N wrote: > > On 15/11/20 8:24 pm, Strahil Nikolov wrote: > > Hello All, > > > > did anyone get a chance to look at > https://github.com/gluster/glusterfs/issues/1778 ? > > A look at > >

Re: [Gluster-devel] [Action required] Jobs running under centos ci

2020-07-23 Thread Yaniv Kaul
On Thu, Jul 23, 2020 at 1:04 PM Deepshikha Khandelwal wrote: > > FYI, we have the list of jobs running under > https://ci.centos.org/view/Gluster/ > 1. Delete those that neither passed nor failed in the last year. No one is using them. 2. I think you can also delete those that did not pass in the

Re: [Gluster-devel] Removing problematic language in geo-replication

2020-07-22 Thread Yaniv Kaul
On Wed, Jul 22, 2020 at 12:37 PM Shwetha Acharya wrote: > > 1. Can I replace master:slave with primary:secondary everywhere in the >> code and the CLI? Are there any suggestions for more appropriate >> terminology? >> > Primary and Secondary sounds good. > ACK, though others use leader and

Re: [Gluster-devel] Introducing me, questions on general improvements in gluster re. latency and throughput

2020-06-15 Thread Yaniv Kaul
Welcome first of all, glad to see interest in improving Gluster performance. I believe there's a huge potential for improvements in different areas you can look into. Certainly zero-copy (or just eliminating extra copies we have today in the code) would be most welcome. We seem to have some lock

[Gluster-devel] Fwd: [netdata/netdata] Gluster monitoring (#4824)

2020-02-22 Thread Yaniv Kaul
If anyone is interested in picking this up. -- Forwarded message - From: Chris Akritidis Date: Sat, 22 Feb 2020, 19:30 Subject: Re: [netdata/netdata] Gluster monitoring (#4824) To: netdata/netdata Cc: Yaniv Kaul , Comment A lot of interest here, but no sponsor. Can someone

[Gluster-devel] Do we still support the "GlusterFS 3.3" RPC client?

2020-01-22 Thread Yaniv Kaul
Or can we remove this code? TIA, Y. ___ Community Meeting Calendar: APAC Schedule - Every 2nd and 4th Tuesday at 11:30 AM IST Bridge: https://bluejeans.com/441850968 NA/EMEA Schedule - Every 1st and 3rd Tuesday at 01:00 PM EDT Bridge:

Re: [Gluster-devel] What do extra_free and extrastd_free params do in the dictionary object?

2020-01-13 Thread Yaniv Kaul
dez wrote: > On Thu, Jan 9, 2020 at 11:11 AM Yaniv Kaul wrote: > >> >> >> On Thu, Jan 9, 2020 at 11:35 AM Xavi Hernandez >> wrote: >> >>> On Thu, Jan 9, 2020 at 10:22 AM Amar Tumballi wrote: >>> >>>> >>>> >>>&

Re: [Gluster-devel] What do extra_free and extrastd_free params do in the dictionary object?

2020-01-09 Thread Yaniv Kaul
>>>> >>>> >>>> On Thu, Jan 9, 2020 at 1:38 PM Xavi Hernandez >>>> wrote: >>>>> On Sun, Dec 22, 2019 at 4:56 PM Yaniv Kaul wrote: >>>>> >>>>>> I could not find a relevant use for them

[Gluster-devel] Detect is quota enabled or disabled - in DHT code?

2020-01-08 Thread Yaniv Kaul
I'd like to add to the DHT conf something like conf->quota_enabled, so that when it's not set I can skip quite a bit of work done today in DHT. I'm just unsure where, in the init and reconfigure of DHT, I can detect and introduce this. Any ideas? TIA, Y. ___
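Not the actual DHT code, but a self-contained sketch of the idea: read the quota state once at init()/reconfigure() time, cache it as a boolean in the translator's private configuration, and branch on it in the hot path. The struct, field, and option names below are placeholders:

    #include <stdbool.h>
    #include <stdio.h>
    #include <string.h>

    struct dht_conf_sketch {
        bool quota_enabled; /* hypothetical field, mirrors conf->quota_enabled */
    };

    /* Stand-in for re-reading volume options in init()/reconfigure(). */
    static void
    conf_reconfigure(struct dht_conf_sketch *conf, const char *quota_option)
    {
        conf->quota_enabled = (quota_option && strcmp(quota_option, "on") == 0);
    }

    static void
    lookup_hot_path(const struct dht_conf_sketch *conf)
    {
        if (!conf->quota_enabled) {
            puts("quota disabled: skipping quota-related bookkeeping");
            return;
        }
        puts("quota enabled: doing the extra work");
    }

    int
    main(void)
    {
        struct dht_conf_sketch conf;

        conf_reconfigure(&conf, "off");
        lookup_hot_path(&conf);
        return 0;
    }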

Re: [Gluster-devel] Infra & Tests: Few requests

2020-01-03 Thread Yaniv Kaul
On Fri, Jan 3, 2020 at 4:07 PM Amar Tumballi wrote: > Hi Team, > > First thing first - Happy 2020 !! Hope this year will be great for all of > us :-) > > Few requests to begin the new year! > > 1. Lets please move all the fedora builders to F31. >- There can be some warnings with F31, so we

[Gluster-devel] What do extra_free and extrastd_free params do in the dictionary object?

2019-12-22 Thread Yaniv Kaul
I could not find a relevant use for them. Can anyone enlighten me? TIA, Y. ___ Community Meeting Calendar: APAC Schedule - Every 2nd and 4th Tuesday at 11:30 AM IST Bridge: https://bluejeans.com/441850968 NA/EMEA Schedule - Every 1st and 3rd Tuesday

[Gluster-devel] Potential impact of Cloudsync on posix performance

2019-12-17 Thread Yaniv Kaul
I'm looking at the code, and I'm seeing calls everywhere to posix_cs_maintenance(). Perhaps we should add a boolean to the volume configuration indicating whether the cloudsync feature is even enabled for that volume? https://review.gluster.org/#/c/glusterfs/+/23576/ is a very modest effort to reduce the impact,

[Gluster-devel] cache-swift-metadata should be switched to default OFF #775

2019-12-01 Thread Yaniv Kaul
I've opened an issue[1] upstream on the above, as part of my effort to reduce unused attributes. The plan is not to remove it (vs. other attributes that really are unused) but rather to change the default so that the majority of users will enjoy the best experience. I'd be happy to receive comments (how

Re: [Gluster-devel] Glusterfs crash when enable quota on Arm aarch 64platform.

2019-11-29 Thread Yaniv Kaul
Does it happen on master? On Fri, 29 Nov 2019, 12:06 Xie Changlong wrote: > Hi, PSC > > We encountered the same issue a few months ago, and git bisect says the first > bad commit is 2fb445ba. This patch is not quota related, but it addressed > the quota issue! > > Maybe it's a gcc issue?? > > commit

Re: [Gluster-devel] Modifying gluster's logging mechanism

2019-11-22 Thread Yaniv Kaul
On Fri, Nov 22, 2019 at 11:45 AM Barak Sason Rofman wrote: > Thank you for your input Atin and Xie Changlong. > > Regarding log ordering - my initial thought was to do it offline using a > dedicated tool. Should be straightforward, as the logs have a time stamp > composed of seconds and

Re: [Gluster-devel] [Gluster-Maintainers] Proposal to change gNFS status

2019-11-21 Thread Yaniv Kaul
On Fri, 22 Nov 2019, 5:03 Xie Changlong wrote: > > On 2019/11/22 5:14, Kaleb Keithley wrote: > > I personally wouldn't call three years ago — when we started to deprecate > it, in glusterfs-3.9 — a recent change. > > As a community the decision was made to move to NFS-Ganesha as the > preferred NFS

Re: [Gluster-devel] Proposal to change gNFS status

2019-11-21 Thread Yaniv Kaul
On Thu, Nov 21, 2019 at 12:31 PM Amar Tumballi wrote: > Hi All, > > As per the discussion on https://review.gluster.org/23645, recently we > changed the status of gNFS (gluster's native NFSv3 support) feature to > 'Deprecated / Orphan' state. (ref: >

[Gluster-devel] Why is sys_lgetxattr() done under LOCK() in posix_skip_non_linkto_unlink() ?

2019-10-18 Thread Yaniv Kaul
Can anyone see why we are doing a disk metadata IO under lock here? LOCK(>inode->lock); xattr_size = sys_lgetxattr(real_path, linkto_xattr, NULL, 0); if (xattr_size <= 0) skip_unlink = _gf_true; UNLOCK(>inode->lock); TIA, Y.
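If the lock only protects the skip_unlink flag, the metadata I/O could in principle be hoisted out of the critical section. Below is a generic POSIX sketch (a pthread mutex standing in for gluster's LOCK/UNLOCK, all names illustrative); whether this reordering is actually safe depends on what else the inode lock is meant to serialize:

    #include <pthread.h>
    #include <stdbool.h>
    #include <sys/types.h>
    #include <sys/xattr.h>

    static pthread_mutex_t inode_lock = PTHREAD_MUTEX_INITIALIZER;
    static bool skip_unlink;

    void
    check_linkto(const char *real_path, const char *linkto_xattr)
    {
        /* disk metadata I/O performed without holding the lock */
        ssize_t xattr_size = lgetxattr(real_path, linkto_xattr, NULL, 0);

        /* only the shared flag update is done under the lock */
        pthread_mutex_lock(&inode_lock);
        if (xattr_size <= 0)
            skip_unlink = true;
        pthread_mutex_unlock(&inode_lock);
    }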

Re: [Gluster-devel] [Gluster-Maintainers] Proposal: move glusterfs development to github workflow, completely

2019-10-14 Thread Yaniv Kaul
On Mon, 14 Oct 2019, 17:01 Amar Tumballi wrote: > > > On Mon, 14 Oct, 2019, 5:37 PM Niels de Vos, wrote: > >> On Mon, Oct 14, 2019 at 03:52:30PM +0530, Amar Tumballi wrote: >> > Any thoughts on this? >> > >> > I tried a basic .travis.yml for the unified glusterfs repo I am >> > maintaining, and

[Gluster-devel] Question on dht_filter_loc_subvol_key() function (in dht-helper.c)

2019-10-04 Thread Yaniv Kaul
I'm reading the function and am unsure what's going on there; perhaps I'm missing something: name_len = strlen(loc->name) - keylen; new_name = GF_MALLOC(name_len + 1, gf_common_mt_char); snprintf(new_name, name_len + 1, "%s", loc->name); How exactly is there enough space to snprintf loc->name into
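One plausible reading, assuming the key is a suffix of loc->name: the snprintf() size argument of name_len + 1 deliberately truncates, so exactly the first name_len bytes (the name minus the key) plus a NUL terminator are written, and the buffer is exactly big enough. A self-contained illustration with made-up values:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int
    main(void)
    {
        const char *name = "file.txt@subvol-key"; /* hypothetical loc->name */
        const char *key = "@subvol-key";          /* hypothetical key suffix */
        size_t keylen = strlen(key);
        size_t name_len = strlen(name) - keylen;  /* 8 */
        char *new_name = malloc(name_len + 1);

        if (!new_name)
            return 1;

        /* snprintf writes at most name_len bytes plus the terminator, so
         * the "truncation" is exactly the suffix-stripping intended. */
        snprintf(new_name, name_len + 1, "%s", name);
        printf("%s\n", new_name);                 /* prints "file.txt" */

        free(new_name);
        return 0;
    }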

Re: [Gluster-devel] Proposal: move glusterfs development to github workflow, completely

2019-08-23 Thread Yaniv Kaul
On Fri, 23 Aug 2019, 9:13 Amar Tumballi wrote: > Hi developers, > > With this email, I want to understand what is the general feeling around > this topic. > > We from gluster org (in github.com/gluster) have many projects which > follow complete github workflow, where as there are few, specially

Re: [Gluster-devel] Assistance setting up Gluster

2019-07-21 Thread Yaniv Kaul
On Mon, Jul 22, 2019 at 1:20 AM Barak Sason wrote: > Hello everyone, > > My name is Barak and I'll soon be joining the Gluster development team as > a part of Red Hat. > Hello and welcome to the Gluster community. > > As a preparation for my upcoming employment I've been trying to get >

Re: [Gluster-devel] glusterfsd memory leak issue found after enable ssl

2019-06-10 Thread Yaniv Kaul
On Mon, Jun 10, 2019 at 10:43 AM Zhou, Cynthia (NSB - CN/Hangzhou) < cynthia.z...@nokia-sbell.com> wrote: > Hi, > > How about this patch? I see there is a failed test, is that related to my > change? > Quite likely. Have you looked at the failure? It produces a stack which looks close to where

[Gluster-devel] Test failed - due to out of memory on builder201?

2019-06-10 Thread Yaniv Kaul
From [1], we can see that non-root-unlink-stale-linkto.t failed on: useradd: /etc/passwd.30380: Cannot allocate memory useradd: cannot lock /etc/passwd; try again later. My patch[2] only removed include statements that were not needed. I'm not sure how it can cause a memory issue. So it's either

[Gluster-devel] CI failure - NameError: name 'unicode' is not defined (related to changelogparser.py)

2019-06-06 Thread Yaniv Kaul
From [1]. I think it's a Python2/3 thing, so perhaps a CI issue additionally (though if our code is not Python 3 ready, let's ensure we use Python 2 explicitly until we fix this). 00:47:05.207 ok 14 [ 13/386] < 34> 'gluster --mode=script --wignore volume start patchy' 00:47:05.207

Re: [Gluster-devel] [Gluster-infra] rebal-all-nodes-migrate.t always fails now

2019-06-04 Thread Yaniv Kaul
wrote: > > On Fri, 5 Apr 2019 at 12:16, Michael Scherer > > wrote: > > > > > On Thursday, April 4, 2019 at 18:24 +0200, Michael Scherer wrote: > > > > On Thursday, April 4, 2019 at 19:10 +0300, Yaniv Kaul wrote: > > > > > I'm not convinced this

Re: [Gluster-devel] [Gluster-infra] is_nfs_export_available from nfs.rc failing too often?

2019-04-24 Thread Yaniv Kaul
On Tue, Apr 23, 2019 at 5:15 PM Michael Scherer wrote: > On Monday, April 22, 2019 at 22:57 +0530, Atin Mukherjee wrote: > > Is this back again? The recent patches are failing regression :-\ . > > So, on builder206, it took me a while to find that the issue is that > nfs (the service) was

Re: [Gluster-devel] [Gluster-infra] rebal-all-nodes-migrate.t always fails now

2019-04-05 Thread Yaniv Kaul
On Fri, Apr 5, 2019 at 9:55 AM Deepshikha Khandelwal wrote: > > > On Fri, Apr 5, 2019 at 12:16 PM Michael Scherer > wrote: > >> On Thursday, April 4, 2019 at 18:24 +0200, Michael Scherer wrote: >> > On Thursday, April 4, 2019 at 19:10 +0300, Yaniv Kaul wrote: >>

Re: [Gluster-devel] [Gluster-infra] rebal-all-nodes-migrate.t always fails now

2019-04-04 Thread Yaniv Kaul
I'm not convinced this is solved. Just had what I believe is a similar failure: 00:12:02.532 A dependency job for rpc-statd.service failed. See 'journalctl -xe' for details. 00:12:02.532 mount.nfs: rpc.statd is not running but is required for remote locking. 00:12:02.532 mount.nfs: Either use

Re: [Gluster-devel] [Gluster-users] Prioritise local bricks for IO?

2019-04-02 Thread Yaniv Kaul
On Tue, Apr 2, 2019 at 6:37 PM Nux! wrote: > Ok, cool, thanks. So.. no go. > > Any other ideas on how to accomplish task then? > While not a solution, I believe https://review.gluster.org/#/c/glusterfs/+/21333/ - read selection based on latency, is an interesting path towards this. (Of course,

Re: [Gluster-devel] [Gluster-Maintainers] GF_CALLOC to GF_MALLOC conversion - is it safe?

2019-03-21 Thread Yaniv Kaul
On Thu, Mar 21, 2019 at 5:43 PM Yaniv Kaul wrote: > > > On Thu, Mar 21, 2019 at 5:23 PM Nithya Balachandran > wrote: > >> >> >> On Thu, 21 Mar 2019 at 16:16, Atin Mukherjee wrote: >> >>> All, >>> >>> In the last few releases o

Re: [Gluster-devel] [Gluster-Maintainers] GF_CALLOC to GF_MALLOC conversion - is it safe?

2019-03-21 Thread Yaniv Kaul
On Thu, Mar 21, 2019 at 5:23 PM Nithya Balachandran wrote: > > > On Thu, 21 Mar 2019 at 16:16, Atin Mukherjee wrote: > >> All, >> >> In the last few releases of glusterfs, with stability as a primary theme >> of the releases, there has been lots of changes done on the code >> optimization with

Re: [Gluster-devel] GF_CALLOC to GF_MALLOC conversion - is it safe?

2019-03-21 Thread Yaniv Kaul
On Thu, Mar 21, 2019 at 12:45 PM Atin Mukherjee wrote: > All, > > In the last few releases of glusterfs, with stability as a primary theme > of the releases, there has been lots of changes done on the code > optimization with an expectation that such changes will have gluster to > provide better
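For context on why such conversions need review: GF_CALLOC zero-fills the allocation while GF_MALLOC does not, so any later code that relies on fields starting out as 0/NULL silently breaks. A generic C illustration (not GlusterFS code) of the pattern that makes the conversion unsafe without explicit initialization:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    struct ctx {
        char *name;   /* later code assumes NULL means "not set" */
        int refcount; /* later code assumes it starts at 0       */
    };

    int
    main(void)
    {
        struct ctx *c = malloc(sizeof(*c)); /* was calloc(1, sizeof(*c)) */

        if (!c)
            return 1;

        /* Without this memset (or per-field initialization), the NULL
         * check below reads indeterminate memory after a calloc -> malloc
         * switch. */
        memset(c, 0, sizeof(*c));

        if (c->name == NULL)
            printf("name not set, refcount=%d\n", c->refcount);

        free(c);
        return 0;
    }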

[Gluster-devel] Coverity scan is back

2019-03-19 Thread Yaniv Kaul
After a long shutdown, the upstream Coverity scan for Gluster is back[1]. When last measured (January) we were at 61 issues, and we have now gone up to 93 items. Overall, we are still at a very good 0.16 defect density, but we've regressed a bit. Y. [1]

[Gluster-devel] What happened to 'gluster-ansible' RPM?

2019-03-03 Thread Yaniv Kaul
I've used[1] it to deploy Gluster (on CentOS 7) and now it seems to be missing. I'm not seeing this meta package @ [2] - am I supposed to install the specific packages? TIA, Y. [1] https://github.com/mykaul/vg/blob/d36ad9948a1be49be5b7f7d95a2007aa8c540d95/ansible/machine_config.yml#L75 [2]

Re: [Gluster-devel] Regression health for release-5.next and release-6

2019-01-15 Thread Yaniv Kaul
On Tue, Jan 15, 2019 at 10:35 AM Xavi Hernandez wrote: > On Mon, Jan 14, 2019 at 11:08 AM Ashish Pandey > wrote: > >> >> I downloaded logs of regression runs 1077 and 1073 and tried to >> investigate it. >> In both regression ec/bug-1236065.t is hanging on TEST 70 which is >> trying to get the

[Gluster-devel] Missing UNLOCK in gid_cache_lookup() ?

2018-12-01 Thread Yaniv Kaul
I'm looking at the code and I'm not sure if there's a missing UNLOCK there (location marked in bold red) for cache->gc_lock: LOCK(&cache->gc_lock); bucket = id % cache->gc_nbuckets; agl = BUCKET_START(cache->gc_cache, bucket); for (i = 0; i < AUX_GID_CACHE_ASSOC; i++, agl++) {
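A generic sketch of the pattern in question (a pthread mutex standing in for gluster's LOCK/UNLOCK, structure and names made up): every path out of the critical section must release the lock, which is easiest to guarantee by breaking out of the loop and keeping a single unlock point:

    #include <pthread.h>
    #include <stddef.h>

    #define ASSOC 4

    struct entry { unsigned int id; int valid; };

    static pthread_mutex_t gc_lock = PTHREAD_MUTEX_INITIALIZER;
    static struct entry cache[ASSOC];

    struct entry *
    lookup(unsigned int id)
    {
        struct entry *found = NULL;
        int i;

        pthread_mutex_lock(&gc_lock);
        for (i = 0; i < ASSOC; i++) {
            if (cache[i].valid && cache[i].id == id) {
                found = &cache[i];
                break; /* falls through to the single unlock below */
            }
        }
        pthread_mutex_unlock(&gc_lock);

        return found;
    }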

Re: [Gluster-devel] GD2 & glusterfs smoke issue

2018-11-08 Thread Yaniv Kaul
On Tue, Nov 6, 2018 at 11:34 AM Atin Mukherjee wrote: > We have enabled GD2 smoke results as a mandatory vote in glusterfs smoke > since yesterday through BZ [1], however we just started seeing GD2 smoke > failing which means glusterfs smoke on all the patches will not go through > at this

Re: [Gluster-devel] Fwd: New Defects reported by Coverity Scan for gluster/glusterfs

2018-11-06 Thread Yaniv Kaul
On Tue, Nov 6, 2018, 5:51 PM Atin Mukherjee wrote: > ... new defects introduced in posix xlator. > The other one, in iobuf.c, is mine. Sent a patch to fix. Y. > -- Forwarded message - > From: > Date: Tue, Nov 6, 2018 at 8:44 PM > Subject: New Defects reported by Coverity Scan for

Re: [Gluster-devel] Building GD2 from glusterd2-v5.0-0-vendor.tar.xz fails on CentOS-7

2018-10-30 Thread Yaniv Kaul
On Tue, Oct 30, 2018, 9:15 AM Kaushal M wrote: > On Tue, Oct 30, 2018 at 11:50 AM Kaushal M wrote: > > > > On Tue, Oct 30, 2018 at 2:20 AM Niels de Vos wrote: > > > > > > Hi, > > > > > > not sure what is going wrong when building GD2 for the CentOS Storage > > > SIG, but it seems to fail with

Re: [Gluster-devel] Maintainer meeting minutes : 1st Oct, 2018

2018-10-01 Thread Yaniv Kaul
On Tue, Oct 2, 2018, 8:21 AM Amar Tumballi wrote: > BJ Link > >- Bridge: https://bluejeans.com/217609845 >- Watch: https://bluejeans.com/s/eNNfZ > > Attendance > >- Jonathan (loadtheacc), Vijay Baskar, Amar Tumballi,

Re: [Gluster-devel] gluster-ansible: status of the project

2018-10-01 Thread Yaniv Kaul
On Mon, Oct 1, 2018, 10:45 PM Vijay Bellur wrote: > Thank you for this report! > > On Fri, Sep 28, 2018 at 4:34 AM Sachidananda URS wrote: > >> Hi, >> >> gluster-ansible project is aimed at automating the deployment and >> maintenance of GlusterFS cluster. >> >> The project can be found at: >>

Re: [Gluster-devel] gluster-ansible: status of the project

2018-09-30 Thread Yaniv Kaul
On Fri, Sep 28, 2018 at 2:33 PM Sachidananda URS wrote: > Hi, > > gluster-ansible project is aimed at automating the deployment and > maintenance of GlusterFS cluster. > > The project can be found at: > > * https://github.com/gluster/gluster-ansible > *

Re: [Gluster-devel] [ovirt-devel] Re: Status update on "Hyperconverged Gluster oVirt support"

2018-09-29 Thread Yaniv Kaul
was not aware it's related to formatting. Y. > > On Sat, Sep 29, 2018 at 12:57 PM Yaniv Kaul wrote: > >> >> >> On Fri, Sep 28, 2018, 7:16 PM Hetz Ben Hamo wrote: >> >>> Hi, >>> >>> Gobinda, great work! >>> >>> One thing though - t

Re: [Gluster-devel] [ovirt-devel] Re: Status update on "Hyperconverged Gluster oVirt support"

2018-09-29 Thread Yaniv Kaul
On Fri, Sep 28, 2018, 7:16 PM Hetz Ben Hamo wrote: > Hi, > > Gobinda, great work! > > One thing though - the device names (sda, sdb etc..) > > On many servers, it's hard to know which disk is which. Let's say I have 10 > spinning disks + 2 SSDs. Which is sda? what about NVME? worse - sometimes >

Re: [Gluster-devel] Proposal to change Gerrit -> Bugzilla updates

2018-09-10 Thread Yaniv Kaul
On Mon, Sep 10, 2018 at 3:37 PM, Nigel Babu wrote: > Hello folks, > > We now have review.gluster.org as an external tracker on Bugzilla. Our > current automation when there is a bugzilla attached to a patch is as > follows: > > 1. When a new patchset has "Fixes: bz#1234" or "Updates: bz#1234",

Re: [Gluster-devel] [Gluster-infra] Reboot policy for the infra

2018-08-23 Thread Yaniv Kaul
On Thu, Aug 23, 2018 at 10:49 AM, Michael Scherer wrote: > On Thursday, August 23, 2018 at 11:21 +0530, Nigel Babu wrote: > > One more piece that's missing is when we'll restart the physical > > servers. > > That seems to be entirely missing. The rest looks good to me and I'm > > happy > > to add an

Re: [Gluster-devel] [Gluster-infra] Fwd: Gerrit downtime on Aug 8, 2016

2018-08-08 Thread Yaniv Kaul
On Wed, Aug 8, 2018 at 1:28 PM, Deepshikha Khandelwal wrote: > Gerrit is now upgraded to the newer version and is back online. > Nice, thanks! I'm trying out the new UI. Needs getting used to, I guess. Have we upgraded to NotesDB? Y. > > Please file a bug if you face any issue. > On Tue, Aug

Re: [Gluster-devel] Release 5: Master branch health report (Week of 30th July)

2018-08-07 Thread Yaniv Kaul
On Tue, Aug 7, 2018, 10:46 PM Shyam Ranganathan wrote: > On 08/07/2018 02:58 PM, Yaniv Kaul wrote: > > The intention is to stabilize master and not add more patches that may > > destabilize it. > > > > > > https://review.gluster.org/#/c/20603/ has bee

Re: [Gluster-devel] Release 5: Master branch health report (Week of 30th July)

2018-08-07 Thread Yaniv Kaul
On Mon, Aug 6, 2018 at 1:24 AM, Shyam Ranganathan wrote: > On 07/31/2018 07:16 AM, Shyam Ranganathan wrote: > > On 07/30/2018 03:21 PM, Shyam Ranganathan wrote: > >> On 07/24/2018 03:12 PM, Shyam Ranganathan wrote: > >>> 1) master branch health checks (weekly, till branching) > >>> - Expect

Re: [Gluster-devel] tests/bugs/distribute/bug-1122443.t - spurious failure

2018-08-03 Thread Yaniv Kaul
Why not revert, fix and resubmit (unless you can quickly fix it)? Y. On Fri, Aug 3, 2018, 5:04 PM Raghavendra Gowdappa wrote: > Will take a look. > > On Fri, Aug 3, 2018 at 3:08 PM, Krutika Dhananjay > wrote: > >> Adding Raghavendra G who actually restored and reworked on this after it >> was

Re: [Gluster-devel] How long should metrics collection on a cluster take?

2018-07-25 Thread Yaniv Kaul
On Tue, Jul 24, 2018, 7:20 PM Pranith Kumar Karampuri wrote: > hi, > Quite a few commands to monitor gluster at the moment take almost a > second to give output. > Some categories of these commands: > 1) Any command that needs to do some sort of mount/glfs_init. > Examples: 1) heal

Re: [Gluster-devel] [Gluster-infra] bug-1432542-mpx-restart-crash.t failing

2018-07-09 Thread Yaniv Kaul
On Mon, Jul 9, 2018, 5:41 PM Nithya Balachandran wrote: > We discussed reducing the number of volumes in the maintainers' > meeting. Should we still go ahead and do that? > Do we know how much it will save us? There is value in some moderate number of volumes (especially if we can ensure they

Re: [Gluster-devel] compilation failure: rpcsvc.c:1003:9: error: implicit declaration of function 'xdr_sizeof' [-Werror=implicit-function-declaration]

2018-07-01 Thread Yaniv Kaul
On Fri, Jun 29, 2018 at 10:32 PM, Vijay Bellur wrote: > > > On Fri, Jun 29, 2018 at 11:59 AM Yaniv Kaul wrote: > >> Hello, >> >> First time trying to compile from source. >> >> The compilation part of 'make install' fails with: >> rpcsvc.c:528:17

[Gluster-devel] compilation failure: rpcsvc.c:1003:9: error: implicit declaration of function 'xdr_sizeof' [-Werror=implicit-function-declaration]

2018-06-29 Thread Yaniv Kaul
Hello, First time trying to compile from source. The compilation part of 'make install' fails with: rpcsvc.c:528:17: warning: format '%lu' expects argument of type 'long unsigned int', but argument 2 has type 'u_int32_t' [-Wformat=] gf_log (GF_RPCSVC, GF_LOG_ERROR, "auth failed
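The usual fix for this class of -Wformat warning is to stop printing a u_int32_t with %lu and use the PRIu32 macro (or an explicit cast) instead; a minimal standalone example, with printf standing in for the GlusterFS gf_log() call:

    #include <inttypes.h>
    #include <stdio.h>

    int
    main(void)
    {
        uint32_t verf_len = 12345; /* illustrative value, not from rpcsvc.c */

        /* wrong on LP64 platforms: printf("len %lu\n", verf_len);   */
        printf("len %" PRIu32 "\n", verf_len);        /* portable        */
        printf("len %lu\n", (unsigned long)verf_len); /* or explicit cast */
        return 0;
    }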

Re: [Gluster-devel] [rhhi-dev] Performance drops between Hypervisors in Gluster backed RHEV / RHS.

2018-01-17 Thread Yaniv Kaul
On Wed, Jan 17, 2018 at 8:19 PM, Ben Turner wrote: > Hi all. I am seeing the strangest thing and I have no explanation and I > was hoping I could get your input here. I have a RHEV / RHS setup(non > RHHI, just traditional) where we have two hypervisors with the exact same >