[Gluster-devel] gluster_gd2-nightly-rpms - do we need to continue to build for this?

2020-02-17 Thread Sankarshan Mukhopadhyay
There is no practical work being done on gd2; do we need to continue
to have a build job?

On Tue, 18 Feb 2020 at 05:46,  wrote:
>
> See <https://ci.centos.org/job/gluster_gd2-nightly-rpms/643/display/redirect>
>



-- 
sankarshan mukhopadhyay
<https://about.me/sankarshan.mukhopadhyay>
___

Community Meeting Calendar:

APAC Schedule -
Every 2nd and 4th Tuesday at 11:30 AM IST
Bridge: https://bluejeans.com/441850968


NA/EMEA Schedule -
Every 1st and 3rd Tuesday at 01:00 PM EDT
Bridge: https://bluejeans.com/441850968

Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel



Re: [Gluster-devel] Community Meeting: Make it more reachable

2020-01-29 Thread Sankarshan Mukhopadhyay
On Wed, Jan 29, 2020, 17:14 Sunny Kumar  wrote:

> On Wed, Jan 29, 2020 at 9:50 AM Sankarshan Mukhopadhyay
>  wrote:
> >
> > On Wed, 29 Jan 2020 at 15:03, Sunny Kumar  wrote:
> > >
> > > Hello folks,
> > >
> > > I would like to propose moving the community meeting to a time which is
> > > more suitable for the EMEA/NA zones, that is, merging both of the
> > > zone-specific meetings into a single one.
> > >
> >
> > I am aware that there are 2 sets of meetings now - APAC and EMEA/NA.
> > This came about to ensure that users and community members in these
> > time zones have the opportunity to participate in time slices that
> > are more comfortable. I have never managed to attend a NA/EMEA
> > instance - is that not seeing enough participation? There is only a very thin time
>
> I usually join, but no one turns up in the meeting. I guess there is
> some sort of problem - most likely no one is aware of it, no one is
> hosting, or there is a communication gap.
>

There certainly seems to be. Thanks for highlighting this situation.


> > slice that overlaps APAC, EMEA and NA. If we want to do this, we would
> > need to have a doodle/whenisgood set of options to see how this pans
> > out.
>
> Yes, we have to come up with a time which can cover most of TZs.
>

Would you be sending out a possible set of time slices so that we can see
how it is received by the regulars?

In the meantime, we can use the hackpad notes and the Slack channels to
ensure that communication improves.



Re: [Gluster-devel] Community Meeting: Make it more reachable

2020-01-29 Thread Sankarshan Mukhopadhyay
On Wed, 29 Jan 2020 at 15:03, Sunny Kumar  wrote:
>
> Hello folks,
>
> I would like to propose moving the community meeting to a time which is
> more suitable for the EMEA/NA zones, that is, merging both of the
> zone-specific meetings into a single one.
>

I am aware that there are 2 sets of meetings now - APAC and EMEA/NA.
This came about to ensure that users and community members in these
time zones have the opportunity to participate in time slices that are
more comfortable. I have never managed to attend a NA/EMEA instance - is
that not seeing enough participation? There is only a very thin time
slice that overlaps APAC, EMEA and NA. If we want to do this, we would
need to have a doodle/whenisgood set of options to see how this pans
out.

> It will be really helpful for people who wish to join these
> meetings from other time zones, and will help users collaborate with
> developers in the APAC region.
>
> Please share your thoughts.
>
> /sunny
>



[Gluster-devel] following up on the work underway for improvements in ls-l - looking for the data on the test runs

2020-01-28 Thread Sankarshan Mukhopadhyay
We talked about a set of planned work underway which brings about
substantial improvements in ls -l and similar workloads.

Do we have (a) the data from the test runs, to be shared more widely,
and (b) the patchsets and issues to track this work?



[Gluster-devel] reminder: release 8 release notes tracker

2020-01-28 Thread Sankarshan Mukhopadhyay
This note is a reminder to add the topics being proposed for inclusion
in the release notes. As an outcome of a previous meeting, there is now
an issue which tracks this:
https://github.com/gluster/glusterfs/issues/813



[Gluster-devel] [cross posted] Gluster Community Meeting; component updates and untriaged bugs

2020-01-13 Thread Sankarshan Mukhopadhyay
In a previous instance of the APAC meeting of the Gluster community I
had mentioned looking at the component update section and considering
providing a report on the untriaged bug load. The rationale is to
ensure that, instead of a large mess of "unknown" bugs, the
maintainers (and thus the project) take a look at the weekly report
that is generated and begin to take steps to control the growing number
of untriaged ones.

In the meeting today (14 Jan), Hari pointed out that the request should
be more widely circulated than an entry in the meeting minutes; hence
this note. The triage activity would also help frame the upcoming
release(s) in the context of how many reported bugs were addressed in
the release, along with related metadata, e.g. how long it took the
project to get a fix into a release.



[Gluster-devel] Following up on the Gluster 8 conversation from the community meeting

2020-01-09 Thread Sankarshan Mukhopadhyay
I had mentioned that I'll reach out to Amar to urge progress on
Gluster 8. We did have a conversation, and upon reviewing the present
state of the list of issues/features with the release label, Amar will
be setting up a meeting to arrive at a more pragmatic and reality-based
plan. It is fairly obvious that a number of the items listed are not
feasible within the current release timeline. Please wait for the
meeting request from Amar.



Re: [Gluster-devel] [Gluster-Maintainers] Modifying gluster's logging mechanism

2019-11-26 Thread Sankarshan Mukhopadhyay
On Wed, Nov 27, 2019 at 7:44 AM Amar Tumballi  wrote:

> Hi Barak,
>
> My replies inline.
>
> On Thu, Nov 21, 2019 at 6:34 PM Barak Sason Rofman 
> wrote:
>
>>
>>
[snip]


>
>> Initial tests, done by *removing logging from the regression testing,
>> show an improvement of about 20% in run time*. This indicates we’re
>> taking a pretty heavy performance hit just because of the logging activity.
>>
>>
> That is an interesting observation. For this alone, can we have an option to
> disable all logging during regression? That would speed things up for
> normal runs immediately.
>

If having quicker regression runs is the objective, then perhaps we should
not look to turning off logging to accomplish that. Instead, there ought to
be other aspects which can be reviewed first, with turning off logging
being the last available option.



Re: [Gluster-devel] [Gluster-Maintainers] Modifying gluster's logging mechanism

2019-11-24 Thread Sankarshan Mukhopadhyay
The original proposal from Barak has merit; it is worth planning
towards making an alpha form available. Has the project moved away from
writing specifications and reviewing those proposals? Earlier, we used
to see those rather than discussing in multi-threaded list emails.
While the recorded gains in performance are notable, it would be
prudent to cleanly assess the switch-over cost should the project want
to move over to the new patterns of logging. This seems like a good
time to plan well for a change that has both utility and value.

That said, I'd like to point out some relevant aspects. Logging is
just one part (although an important one) of what is being talked
about as o11y (observability). A log (or, a structured log) is a
record of events. Debugging a distributed system requires
understanding a unit of work which flowed through the system, firing
off events which in turn were recorded. Logs are thus often portions
of events which are part of this unit of work (think of it as a
"transaction" if that helps). In other words, logs are portions of the
abstraction, i.e. events. The key aspect I'd like to highlight is that
(re)designing structured logging in isolation from o11y principles
will not work, as we see more customers adopt o11y patterns and tools
within their SRE and other emerging sub-teams. Focusing just on logs
keeps us rooted to the visual display of information via ELK/EFK
models, rather than considering behavior-centric diagnosis of the
whole system.
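[Editorial aside: the "logs are portions of events in a unit of work" point can be made concrete with a tiny sketch. This is plain Python with invented field names, not the proposed gluster format - structured records carry a unit-of-work id so a whole transaction can be reassembled from interleaved logs:]

```python
import json
import time
import uuid

def make_event(uow_id, component, msg, **fields):
    """One structured log record; 'uow' ties together every event
    fired while a single unit of work flows through the system."""
    evt = {"ts": time.time(), "uow": uow_id,
           "component": component, "msg": msg}
    evt.update(fields)
    return json.dumps(evt, sort_keys=True)

uow = str(uuid.uuid4())
# Events from different layers, as they might land in different logs.
lines = [
    make_event(uow, "fuse", "READV received", size=4096),
    make_event(str(uuid.uuid4()), "fuse", "unrelated request"),
    make_event(uow, "client", "request forwarded to brick", brick="b0"),
    make_event(uow, "brick", "READV served", latency_us=87),
]

# Filtering on the unit-of-work id reconstructs the transaction,
# the behavior-centric view that plain message logs cannot give.
transaction = [json.loads(l) for l in lines if json.loads(l)["uow"] == uow]
print([e["component"] for e in transaction])   # ['fuse', 'client', 'brick']
```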



On Fri, Nov 22, 2019 at 3:49 PM Ravishankar N  wrote:
>
>
> On 22/11/19 3:13 pm, Barak Sason Rofman wrote:
> > This is actually one of the main reasons I wanted to bring this up for
> > discussion - will it be fine with the community to run a dedicated
> > tool to reorder the logs offline?
>
> I think it is a bad idea to log without ordering and later rely on an
> external tool to sort it. This is definitely not something I would want
> to do while doing test and development or debugging field issues.
> Structured logging is definitely useful for gathering statistics and
> post-processing to make reports and charts and whatnot, but from a
> debugging point of view, maintaining causality of messages and working
> with command-line text editors and tools on log files is important IMO.
>
> I had similar concerns when the brick multiplexing feature was developed,
> where a single log file was used for logging all multiplexed bricks'
> logs. So much extra work to weed out the messages of 99 processes to read
> the log of the 1 process you are interested in.
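[Editorial aside: the "dedicated tool to reorder the logs offline" under discussion can be as small as a timestamp merge over per-process files. A hedged Python sketch, assuming sortable ISO-style timestamps at the start of each line (which real gluster log lines approximate):]

```python
from heapq import merge

# Already-sorted per-process logs, e.g. one file per multiplexed brick.
brick0 = [
    "2019-11-22 09:00:01.120 brick0: lookup /a",
    "2019-11-22 09:00:03.500 brick0: writev /a",
]
brick1 = [
    "2019-11-22 09:00:02.010 brick1: lookup /b",
    "2019-11-22 09:00:04.250 brick1: readv /b",
]

def timestamp(line):
    """ISO-style 'YYYY-MM-DD HH:MM:SS.mmm' prefixes sort lexicographically."""
    return line[:23]

# heapq.merge streams the inputs in timestamp order without loading
# everything into memory, restoring causality across processes.
combined = list(merge(brick0, brick1, key=timestamp))
print(combined[0])   # the 09:00:01.120 brick0 entry comes first
```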



Re: [Gluster-devel] [Gluster-users] Announcing Gluster release 5.10

2019-10-29 Thread Sankarshan Mukhopadhyay
Would it be necessary to update the 5.10 release notes with this being
a "known issue"?

On Tue, Oct 29, 2019 at 11:07 AM Hari Gowtham  wrote:
>
> Hi,
>
> Hubert, I can see the 5.9 folder contains 5.9 packages.
>
> Alan, we just made a release for 5.10, so we can take this in for 5.11.
> Can you please backport the patch to the release-5 branch so that we
> can review and take it in for 5.11?
>
> On Mon, Oct 28, 2019 at 4:45 PM Alan Orth  wrote:
> >
> > Dear list,
> >
> > I hope that this readdirp issue that causes sporadic "permission denied" 
> > errors can be backported to release-5, as it's already in master and 
> > backported to release-6. There's a perfect reproducer for this issue in the 
> > bugzilla that currently works on Gluster 5.10 (been having it for a few 
> > months and happy to find the cause!).
> >
> > See: https://github.com/gluster/glusterfs/issues/711
> > See: https://bugzilla.redhat.com/show_bug.cgi?id=1668286
> >
> > Thank you!
> >
___

Community Meeting Calendar:

APAC Schedule -
Every 2nd and 4th Tuesday at 11:30 AM IST
Bridge: https://bluejeans.com/118564314

NA/EMEA Schedule -
Every 1st and 3rd Tuesday at 01:00 PM EDT
Bridge: https://bluejeans.com/118564314

Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel



Re: [Gluster-devel] [Gluster-Maintainers] Proposal: move glusterfs development to github workflow, completely

2019-08-24 Thread Sankarshan Mukhopadhyay
On Fri, Aug 23, 2019 at 6:42 PM Amar Tumballi  wrote:
>
> Hi developers,
>
> With this email, I want to understand what is the general feeling around this 
> topic.
>
> We from gluster org (in github.com/gluster) have many projects which follow 
> complete github workflow, whereas there are a few, especially the main one 
> 'glusterfs', which uses 'Gerrit'.
>
> While this has worked all these years, currently, there is a huge set of 
> brain-share on github workflow as many other top projects, and similar 
> projects use only github as the place to develop, track and run tests etc. As 
> it is possible to have all of the tools required for this project in github 
> itself (code, PR, issues, CI/CD, docs), lets look at how we are structured 
> today:
>
> Gerrit - glusterfs code + Review system
> Bugzilla - For bugs
> Github - For feature requests
> Trello - (not very much used) for tracking project development.
> CI/CD - CentOS-ci / Jenkins, etc but maintained from different repo.
> Docs - glusterdocs - different repo.
> Metrics - Nothing (other than github itself tracking contributors).
>
> While it may cause a minor glitch for many long time developers who are used 
> to the flow, moving to github would bring all these in single place, makes 
> getting new users easy, and uniform development practices for all gluster org 
> repositories.
>
> As it is just the proposal, I would like to hear people's thought on this, 
> and conclude on this another month, so by glusterfs-8 development time, we 
> are clear about this.
>

I'd like to propose that a decision be arrived at much earlier, say
within a fortnight, i.e. mid-September. I do not see why this would
need a whole month of consideration. Such a timeline would also allow
us to manage changes after a proper assessment of the sub-tasks.

> Can we decide on this before September 30th? Please voice your concerns.
>
> Regards,
> Amar
___

Community Meeting Calendar:

APAC Schedule -
Every 2nd and 4th Tuesday at 11:30 AM IST
Bridge: https://bluejeans.com/836554017

NA/EMEA Schedule -
Every 1st and 3rd Tuesday at 01:00 PM EDT
Bridge: https://bluejeans.com/486278655

Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel



Re: [Gluster-devel] More intelligent file distribution across subvols of DHT when file size is known

2019-06-05 Thread Sankarshan Mukhopadhyay
On Wed, May 22, 2019 at 6:53 PM Krutika Dhananjay  wrote:
>
> Hi,
>
> I've proposed a solution to the problem of space running out in some children 
> of DHT even when its other children have free space available, here - 
> https://github.com/gluster/glusterfs/issues/675.
>
> The proposal aims to solve a very specific instance of this generic class of 
> problems where fortunately the size of the file that is getting created is 
> known beforehand.
>
> Requesting feedback on the proposal or even alternate solutions, if you have 
> any.

There has not been much commentary on this issue in the last 10-odd
days. What is the next step?
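[Editorial aside: the core idea in the proposal - when the file size is known up front, place it on a child that actually has room - can be sketched in a few lines. This is illustrative Python, not DHT code; the real design lives in the linked issue #675:]

```python
def pick_subvol(subvols, file_size):
    """Pick a child that can hold the whole file, preferring the one
    with the most free space (a simplification of the proposal)."""
    candidates = [s for s in subvols if s["free"] >= file_size]
    if not candidates:
        # Mirrors the failure mode described: space runs out on some
        # children even though other children still have room.
        raise OSError("no subvolume has enough free space")
    return max(candidates, key=lambda s: s["free"])["name"]

subvols = [
    {"name": "dht-0", "free": 10 * 2**20},    # 10 MiB free
    {"name": "dht-1", "free": 500 * 2**20},   # 500 MiB free
]
print(pick_subvol(subvols, 100 * 2**20))   # dht-1
```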



Re: [Gluster-devel] Following up on the "Github teams/repo cleanup"

2019-03-28 Thread Sankarshan Mukhopadhyay
On Fri 29 Mar, 2019, 01:04 Vijay Bellur,  wrote:

>
>
> On Thu, Mar 28, 2019 at 11:39 AM Sankarshan Mukhopadhyay <
> sankarshan.mukhopadh...@gmail.com> wrote:
>
>> On Thu, Mar 28, 2019 at 11:34 PM John Strunk  wrote:
>> >
>> > Thanks for bringing this to the list.
>> >
>> > I think this is a good set of guidelines, and we should publicly post
>> and enforce them once agreement is reached.
>> > The integrity of the gluster github org is important for the future of
>> the project.
>> >
>>
>> I agree. And so, I am looking forward to additional
>> individuals/maintainers agreeing to this so that we can codify it
>> under the Gluster.org Github org too.
>>
>
>
> Looks good to me.
>
> While at this, I would also like us to think about evolving some
> guidelines for creating a new repository in the gluster github
> organization. Right now, a bug report does get a new repository created and
> I feel that the process could be a bit more involved to ensure that we
> host projects with the right content in github.
>

The bug ensures that there is a recorded trail for the request. What
additional detail might be required to support a more involved process?
At this point, I don't see many fly-by-night projects; just inactive
ones, and those too for myriad reasons.


Re: [Gluster-devel] Following up on the "Github teams/repo cleanup"

2019-03-28 Thread Sankarshan Mukhopadhyay
On Thu, Mar 28, 2019 at 11:34 PM John Strunk  wrote:
>
> Thanks for bringing this to the list.
>
> I think this is a good set of guidelines, and we should publicly post and 
> enforce them once agreement is reached.
> The integrity of the gluster github org is important for the future of the 
> project.
>

I agree. And so, I am looking forward to additional
individuals/maintainers agreeing to this so that we can codify it
under the Gluster.org Github org too.

>
> On Wed, Mar 27, 2019 at 10:21 PM Sankarshan Mukhopadhyay 
>  wrote:
>>
>> The one at 
>> <https://lists.gluster.org/pipermail/gluster-infra/2018-June/004589.html>
>> I am not sure if the proposal from Michael was agreed to separately
>> and it was done. Also, do we want to do this periodically?
>>
>> Feedback is appreciated.


[Gluster-devel] Following up on the "Github teams/repo cleanup"

2019-03-27 Thread Sankarshan Mukhopadhyay
The one at
<https://lists.gluster.org/pipermail/gluster-infra/2018-June/004589.html>

I am not sure if the proposal from Michael was agreed to separately
and it was done. Also, do we want to do this periodically?

Feedback is appreciated.


Re: [Gluster-devel] requesting review available gluster* plugins in sos

2019-03-22 Thread Sankarshan Mukhopadhyay
On Wed, Mar 20, 2019 at 10:00 AM Atin Mukherjee  wrote:
>
> From a glusterd perspective, a couple of enhancements I'd propose to be added: (a) 
> capture the get-state dump and make it part of sosreport. Of late, we have 
> seen the get-state dump being very helpful in debugging a few cases, apart from 
> its original purpose of providing a source of cluster/volume information for 
> tendrl; (b) capture the glusterd statedump
>

How large can these payloads be? One of the challenges I've heard is
that users often struggle when attempting to push large (> 5 GB)
payloads, making the total size of the sos archive fairly big.
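[Editorial aside: suggestion (a) above largely amounts to capturing command output into the archive, which is most of what an sos plugin does. A self-contained sketch - plain Python, not the actual sos plugin API; the command list is an assumption for illustration:]

```python
import shlex
import subprocess

def capture(cmd, timeout=30):
    """Run one diagnostic command and return (cmd, output), roughly
    what an sos plugin's command-collection hook does for the report."""
    try:
        proc = subprocess.run(shlex.split(cmd), capture_output=True,
                              text=True, timeout=timeout)
        return cmd, proc.stdout
    except FileNotFoundError:
        return cmd, ""          # tool not installed on this host

# Hypothetical collection list for suggestion (a); 'gluster get-state'
# is the CLI for the glusterd state dump, the rest is illustrative.
commands = ["gluster get-state", "uname -r"]
report = dict(capture(c) for c in commands)
print(sorted(report))
```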


Re: [Gluster-devel] [Gluster-Maintainers] GF_CALLOC to GF_MALLOC conversion - is it safe?

2019-03-21 Thread Sankarshan Mukhopadhyay
On Thu, Mar 21, 2019 at 9:24 PM Yaniv Kaul  wrote:

>> Smallfile is part of CI? I am happy to see it documented @ 
>> https://docs.gluster.org/en/latest/Administrator%20Guide/Performance%20Testing/#smallfile-distributed-io-benchmark
>>  , so at least one can know how to execute it manually.
>
>
> Following up the above link to the smallfile repo leads to 404 (I'm assuming 
> we don't have a link checker running on our documentation, so it can break 
> from time to time?)

Hmm... that needs to be addressed.

> I assume it's https://github.com/distributed-system-analysis/smallfile ?

Yes.


[Gluster-devel] requesting review available gluster* plugins in sos

2019-03-18 Thread Sankarshan Mukhopadhyay
sos is (as might just be widely known) an extensible, portable support
data collection tool primarily aimed at Linux distributions and other
UNIX-like operating systems.

At present there are 2 plugins

and 
I'd like to request that the maintainers do a quick review to confirm
that these sufficiently cover the topics needed to help diagnose issues.

This is a lead up to requesting more usage of the sos tool to diagnose
issues we see reported.


[Gluster-devel] Github#268 Compatibility with Alpine Linux

2019-03-11 Thread Sankarshan Mukhopadhyay
Saw some recent activity on issue #268 - is there a plan to
address this or should the interested users be informed about other
plans?

/s


Re: [Gluster-devel] [Gluster-infra] 8/10 AWS jenkins builders disconnected

2019-03-06 Thread Sankarshan Mukhopadhyay
On Wed, Mar 6, 2019 at 5:38 PM Deepshikha Khandelwal
 wrote:
>
> Hello,
>
> Today while debugging the centos7-regression failed builds I saw most of the 
> builders did not pass the instance status check on AWS and were unreachable.
>
> Misc investigated this and came to know about the patch[1] which seems to 
> break the builder one after the other. They all ran the regression test for 
> this specific change before going offline.
> We suspect that this change does result in an infinite loop of processes, as we did 
> not see any trace of an error in the system logs.
>
> We did reboot all those builders and they all seem to be running fine now.
>

The question, though, is: what do we do about the patch if the patch
itself is the root cause? Is this assigned to anyone to look into?

> Please let us know if you see any such issues again.
>
> [1] https://review.gluster.org/#/c/glusterfs/+/22290/


-- 
sankarshan mukhopadhyay
<https://about.me/sankarshan.mukhopadhyay>


Re: [Gluster-devel] Latency analysis of GlusterFS' network layer for pgbench

2018-12-31 Thread Sankarshan Mukhopadhyay
On Fri 28 Dec, 2018, 12:44 Raghavendra Gowdappa 
>
> On Mon, Dec 24, 2018 at 6:05 PM Raghavendra Gowdappa 
> wrote:
>
>>
>>
>> On Mon, Dec 24, 2018 at 3:40 PM Sankarshan Mukhopadhyay <
>> sankarshan.mukhopadh...@gmail.com> wrote:
>>
>>> [pulling the conclusions up to enable better in-line]
>>>
>>> > Conclusions:
>>> >
>>> > We should never have a volume with caching-related xlators disabled.
>>> The price we pay for it is too high. We need to make them work consistently
>>> and aggressively to avoid as many requests as we can.
>>>
>>> Are there current issues in terms of behavior which are known/observed
>>> when these are enabled?
>>>
>>
>> We did have issues with pgbench in the past, but they've been fixed.
>> Please refer to bz [1] for details. On 5.1, it runs successfully with all
>> caching related xlators enabled. Having said that the only performance
>> xlators which gave improved performance were open-behind and write-behind
>> [2] (write-behind had some issues, which will be fixed by [3] and we'll
>> have to measure performance again with fix to [3]).
>>
>
> One quick update. Enabling write-behind and md-cache with fix for [3]
> reduced the total time taken for pgbench init phase roughly by 20%-25%
> (from 12.5 min to 9.75 min for a scale of 100). Though this is still a huge
> time (around 12hrs for a db of scale 8000). I'll follow up with a detailed
> report once my experiments are complete. Currently trying to optimize the
> read path.
>
>
>> For some reason, read-side caching didn't improve transactions per
>> second. I am working on this problem currently. Note that these bugs
>> measure transaction phase of pgbench, but what xavi measured in his mail is
>> init phase. Nevertheless, evaluation of read caching (metadata/data) will
>> still be relevant for init phase too.
>>
>> [1] https://bugzilla.redhat.com/show_bug.cgi?id=1512691
>> [2] https://bugzilla.redhat.com/show_bug.cgi?id=1629589#c4
>> [3] https://bugzilla.redhat.com/show_bug.cgi?id=1648781
>>
>
I think that what I am looking forward to is a well-defined set of next
steps and periodic updates to this list that eventually result in a
formal, recorded procedure to ensure that Gluster performs best for
these application workloads.


>>
>>
>>> > We need to analyze client/server xlators deeper to see if we can avoid
>>> some delays. However optimizing something that is already at the
>>> microsecond level can be very hard.
>>>
>>> That is true - are there any significant gains which can be accrued by
>>> putting efforts here or, should this be a lower priority?
>>>
>>
>> The problem identified by xavi is also the one we (Manoj, Krutika, me and
>> Milind) had encountered in the past [4]. The solution we used was to have
>> multiple rpc connections between single brick and client. The solution
>> indeed fixed the bottleneck. So, there is definitely work involved here -
>> either to fix the single connection model or go with multiple connection
>> model. Its preferred to improve single connection and resort to multiple
>> connections only if bottlenecks in single connection are not fixable.
>> Personally I think this is high priority along with having appropriate
>> client side caching.
>>
>> [4] https://bugzilla.redhat.com/show_bug.cgi?id=1467614#c52
>>
>>
>>> > We need to determine what causes the fluctuations in brick side and
>>> avoid them.
>>> > This scenario is very similar to a smallfile/metadata workload, so
>>> this is probably one important cause of its bad performance.
>>>
>>> What kind of instrumentation is required to enable the determination?
>>>
>>> On Fri, Dec 21, 2018 at 1:48 PM Xavi Hernandez 
>>> wrote:
>>> >
>>> > Hi,
>>> >
>>> > I've done some tracing of the latency that network layer introduces in
>>> gluster. I've made the analysis as part of the pgbench performance issue
>>> (in particular the initialization and scaling phase), so I decided to look
>>> at READV for this particular workload, but I think the results can be
>>> extrapolated to other operations that also have small latency (cached data
>>> from FS for example).
>>> >
>>> > Note that measuring latencies introduces some latency. It consists in
>>> a call to clock_get_time() for each probe point, so the real latency will
>>> be a bit lower, but still proportional to these numbers.
>>> >
>>>
>>> [snip]

Re: [Gluster-devel] Implementing multiplexing for self heal client.

2018-12-24 Thread Sankarshan Mukhopadhyay
On Fri, Dec 21, 2018 at 6:30 PM RAFI KC  wrote:
>
> Hi All,
>
> What is the problem?
> As of now the self-heal client runs as one daemon per node; this means
> even if there are multiple volumes, there will only be one self-heal
> daemon. So to take effect of each configuration change in the cluster,
> the self-heal daemon has to be reconfigured, but it doesn't have the
> ability to dynamically reconfigure. Which means when you have a lot of
> volumes in the cluster, every management operation that involves
> configuration changes, like volume start/stop, add/remove brick etc.,
> will result in a self-heal daemon restart. If such operations are
> executed often, it not only slows down self-heal for a volume, but
> also increases the self-heal logs substantially.

What is the value of the number of volumes when you write "lot of
volumes" - 1000 volumes, or more?

>
>
> How to fix it?
>
> We are planning to follow a similar procedure as attach/detach graphs
> dynamically which is similar to brick multiplex. The detailed steps is
> as below,
>
>
>
>
> 1) First step is to make shd per volume daemon, to generate/reconfigure
> volfiles per volume basis .
>
>1.1) This will help to attach the volfiles easily to existing shd daemon
>
>1.2) This will help to send notification to shd daemon as each
> volinfo keeps the daemon object
>
>1.3) reconfiguring a particular subvolume is easier as we can check
> the topology better
>
>1.4) With this change the volfiles will be moved to workdir/vols/
> directory.
>
> 2) Writing new rpc requests like attach/detach_client_graph function to
> support clients attach/detach
>
>2.1) Also functions like graph reconfigure, mgmt_getspec_cbk has to
> be modified
>
> 3) Safely detaching a subvolume when there are pending frames to unwind.
>
>3.1) We can mark the client disconnected and make all the frames to
> unwind with ENOTCONN
>
>3.2) We can wait all the i/o to unwind until the new updated subvol
> attaches
>
> 4) Handle scenarios like glusterd restart, node reboot, etc
>
>
>
> At the moment we are not planning to limit the number of heal subvolumes
> per process, because with the current approach, heal for every volume
> was already being done from a single process. We have not heard any
> major complaints about this.

Is the plan to never limit it, or to have a throttle set to a default
high(er) value? How would system resources be impacted if the proposed
design is implemented?
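[Editorial aside: a toy model of steps 1) and 2) above - one shd process holding per-volume graphs that can be attached and detached without a restart - might look like this. Illustrative Python; the names are invented, not the glusterfs RPCs:]

```python
class SelfHealDaemon:
    """Toy model of one shd process multiplexing per-volume heal
    graphs, mirroring the attach/detach design sketched above."""
    def __init__(self):
        self.graphs = {}            # volume name -> volfile/graph object

    def attach(self, volume, volfile):
        # Reconfigure by swapping in a new graph; no daemon restart.
        self.graphs[volume] = volfile

    def detach(self, volume):
        # Step 3 caveat: a real implementation must first unwind any
        # pending frames (e.g. with ENOTCONN) before dropping the graph.
        self.graphs.pop(volume, None)

shd = SelfHealDaemon()
shd.attach("vol0", "volfile-v0")
shd.attach("vol1", "volfile-v1")
shd.attach("vol0", "volfile-v0-updated")   # volume reconfigured in place
shd.detach("vol0")
print(sorted(shd.graphs))   # only vol1 remains attached
```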


Re: [Gluster-devel] Latency analysis of GlusterFS' network layer for pgbench

2018-12-24 Thread Sankarshan Mukhopadhyay
[pulling the conclusions up to enable better in-line]

> Conclusions:
>
> We should never have a volume with caching-related xlators disabled. The 
> price we pay for it is too high. We need to make them work consistently and 
> aggressively to avoid as many requests as we can.

Are there current issues in terms of behavior which are known/observed
when these are enabled?

> We need to analyze client/server xlators deeper to see if we can avoid some 
> delays. However optimizing something that is already at the microsecond level 
> can be very hard.

That is true - are there any significant gains which can be accrued by
putting efforts here or, should this be a lower priority?

> We need to determine what causes the fluctuations in brick side and avoid 
> them.
> This scenario is very similar to a smallfile/metadata workload, so this is 
> probably one important cause of its bad performance.

What kind of instrumentation is required to enable the determination?

On Fri, Dec 21, 2018 at 1:48 PM Xavi Hernandez  wrote:
>
> Hi,
>
> I've done some tracing of the latency that network layer introduces in 
> gluster. I've made the analysis as part of the pgbench performance issue (in 
> particulat the initialization and scaling phase), so I decided to look at 
> READV for this particular workload, but I think the results can be 
> extrapolated to other operations that also have small latency (cached data 
> from FS for example).
>
> Note that measuring latencies introduces some latency. It consists of a call 
> to clock_get_time() for each probe point, so the real latency will be a bit 
> lower, but still proportional to these numbers.
>

[snip]
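The caveat above - that every probe point pays its own clock-read cost - can be estimated with a quick sketch. This is plain Python as a stand-in for the C `clock_gettime()` instrumentation, so the absolute numbers differ, but the method (time a tight loop of clock reads, divide by the count) carries over:

```python
import time

def probe_overhead(samples=100_000):
    """Estimate the cost of one timestamp probe by timing a tight
    loop of clock reads and averaging over the loop count."""
    start = time.perf_counter()
    for _ in range(samples):
        time.perf_counter()   # one probe point
    total = time.perf_counter() - start
    return total / samples    # seconds per probe

per_probe = probe_overhead()
# A measured latency is inflated by roughly two probes (one on entry,
# one on exit), so subtract that when interpreting the raw numbers.
print(f"~{per_probe * 1e9:.0f} ns per probe")
```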


Re: [Gluster-devel] POC- Distributed regression testing framework

2018-10-04 Thread Sankarshan Mukhopadhyay
On Thu, Oct 4, 2018 at 6:10 AM Sanju Rakonde  wrote:
> On Wed, Oct 3, 2018 at 3:26 PM Deepshikha Khandelwal  
> wrote:
>>
>> Hello folks,
>>
>> Distributed-regression job[1] is now a part of Gluster's
>> nightly-master build pipeline. The following are the issues we have
>> resolved since we started working on this:
>>
>> 1) Collecting gluster logs from servers.
>> 2) Tests failed due to infra-related issues have been fixed.
>> 3) Time taken to run regression testing reduced to ~50-60 minutes.
>>
>> To get time down to 40 minutes needs your help!
>>
>> Currently, there is a test that is failing:
>>
>> tests/bugs/glusterd/optimized-basic-testcases-in-cluster.t
>>
>> This needs fixing first.
>
>
> Where can I get the logs of this test case? In
> https://build.gluster.org/job/distributed-regression/264/console I see this
> test case failed and was re-attempted. But I couldn't find the logs.




Re: [Gluster-devel] [Gluster-Maintainers] Glusto Happenings

2018-09-17 Thread Sankarshan Mukhopadhyay
On Tue, Sep 18, 2018 at 6:07 AM Amye Scavarda  wrote:
>
> Adding Maintainers as that's the group that will be more interested in this.
> Our next maintainers meeting is October 1st, want to present on what the 
> current status is there?

I'd like to have some links to tracking mechanisms, e.g. GitHub issues
and such, for the topics that have already been mentioned below, as well
as any Gluster-specific changes that are coming up in versions. The
intent is to allow the maintainers to anticipate and plan for any
changes and discuss those.

> On Mon, Sep 17, 2018 at 12:29 AM Jonathan Holloway  
> wrote:
>>
>> Hi Gluster-devel,
>>
>> It's been a while since we updated gluster-devel on things related to Glusto.
>>
>> The big thing in the works for Glusto is Python3 compatibility.
>> A port is in progress, and the target is October to have a branch ready for 
>> testing. Look for another update here when that is available.
>>
>> Thanks to Vijay Avuthu for testing a change to the Python2 version of 
>> Carteplex (the cartesian product module in Glusto that drives the runs_on 
>> decorator used in Gluster tests). Tests inheriting from GlusterBaseClass 
>> have been using im_func to make calls against the base class setUp method. 
>> This change allows the use of super() as well as im_func.
>>
>> On a related note, the syntax for both im_func and super() changes in 
>> Python3. The "Developer Guide for Tests and Libraries" section of the 
>> glusterfs/glusto-tests docs currently shows 
>> "GlusterBaseClass.setUp.im_func(self)", but will be updated with the 
>> preferred call for Python3.
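
The syntax change described above can be illustrated with a minimal, runnable sketch. The class bodies here are simplified stand-ins, not the real glusto-tests code; only `GlusterBaseClass` and `setUp` are names taken from the thread:

```python
class GlusterBaseClass:
    def setUp(self):
        # Stand-in for the real base-class setup work.
        self.base_ready = True

class MyGlusterTest(GlusterBaseClass):
    def setUp(self):
        # Python 2 (old Glusto style): GlusterBaseClass.setUp.im_func(self)
        # In Python 3, im_func is gone; an unbound method is just a plain
        # function, so calling it through the class works directly:
        GlusterBaseClass.setUp(self)
        # Preferred form in Python 3 (zero-argument super):
        super().setUp()
        self.child_ready = True

t = MyGlusterTest()
t.setUp()
print(t.base_ready, t.child_ready)  # True True
```

The change Vijay tested (carteplex accepting `super()` as well as `im_func`) is what lets test classes move to the second form without breaking Python 2 runs in the interim.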
>>
>> And lastly, you might have seen an issue with tests under Python2 where a 
>> run kicked off via py.test or /usr/bin/glusto would immediately fail with a 
>> message indicating gcc needs to be installed. The problem was specific to a 
>> recent update of PyTest and scandir, and the original workaround was to 
>> install gcc or a previous version of pytest and scandir. The scandir 
>> maintainer fixed the issue upstream with scandir 1.9.0 (available in PyPI).
>>
>> That's all for now.
>>
>> Cheers,
>> Jonathan (loadtheacc)
>>
>>
>> ___
>> Gluster-devel mailing list
>> Gluster-devel@gluster.org
>> https://lists.gluster.org/mailman/listinfo/gluster-devel
>
>
>
> --
> Amye Scavarda | a...@redhat.com | Gluster Community Lead
> ___
> maintainers mailing list
> maintain...@gluster.org
> https://lists.gluster.org/mailman/listinfo/maintainers



-- 
sankarshan mukhopadhyay
<https://about.me/sankarshan.mukhopadhyay>


Re: [Gluster-devel] Python components and test coverage

2018-08-10 Thread Sankarshan Mukhopadhyay
On Fri, Aug 10, 2018 at 5:47 PM, Nigel Babu  wrote:
> Hello folks,
>
> We're currently in a transition to python3. Right now, there's a bug in one
> piece of this transition code. I saw Nithya run into this yesterday. The
> challenge here is, none of our testing for python2/python3 transition
> catches this bug. Both Pylint and the ast-based testing that Kaleb
> recommended does not catch this bug. The bug is trivial and would take 2
> mins to fix, the challenge is that until we exercise almost all of these
> code paths from both Python3 and Python2, we're not going to find out that
> there are subtle breakages like this.
>

Where is this great reveal - what is the above-mentioned bug?
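
The thread never names the bug, but as an illustration of the class of breakage that neither Pylint nor ast-based porting checks catch (the code is syntactically valid in both versions; only runtime behaviour differs), consider the classic `dict.keys()` change. The data here is invented for the example:

```python
d = {"brick1": "up", "brick2": "down"}

keys = d.keys()
# Python 2: keys is a list, so keys[0] just works.
# Python 3: keys is a dict_keys view; indexing raises TypeError.
# Static checkers pass it because nothing is syntactically wrong -
# only exercising the code path from both interpreters reveals it.
try:
    first = keys[0]
except TypeError:
    first = sorted(keys)[0]   # portable fix
print(first)  # brick1
```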

> As far as I know, the three pieces where we use Python are geo-rep,
> glusterfind, and libgfapi-python. My question:
> * Are there more places where we run python?
> * What sort of automated test coverage do we have for these components right
> now?
> * What can the CI team do to help identify problems? We have both Centos7
> and Fedora28 builders, so we can definitely help run tests specific to
> python.
>
> --
> nigelb
>
> ___
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-devel



-- 
sankarshan mukhopadhyay
<https://about.me/sankarshan.mukhopadhyay>


Re: [Gluster-devel] [Gluster-users] Gluster Documentation Hackathon - 7/19 through 7/23

2018-08-06 Thread Sankarshan Mukhopadhyay
Was a round-up/summary about this published to the lists?

On Wed, Jul 18, 2018 at 10:27 PM, Vijay Bellur  wrote:
> Hey All,
>
> We are organizing a hackathon to improve our upstream documentation. More
> details about the hackathon can be found at [1].
>
> Please feel free to let us know if you have any questions.
>
> Thanks,
> Amar & Vijay
>
> [1]
> https://docs.google.com/document/d/11LLGA-bwuamPOrKunxojzAEpHEGQxv8VJ68L3aKdPns/edit?usp=sharing
>



-- 
sankarshan mukhopadhyay
<https://about.me/sankarshan.mukhopadhyay>


Re: [Gluster-devel] [Gluster-Maintainers] Release 5: Master branch health report (Week of 30th July)

2018-08-05 Thread Sankarshan Mukhopadhyay
On Mon, Aug 6, 2018 at 5:17 AM, Amye Scavarda  wrote:
>
>
> On Sun, Aug 5, 2018 at 3:24 PM Shyam Ranganathan 
> wrote:
>>
>> On 07/31/2018 07:16 AM, Shyam Ranganathan wrote:
>> > On 07/30/2018 03:21 PM, Shyam Ranganathan wrote:
>> >> On 07/24/2018 03:12 PM, Shyam Ranganathan wrote:
>> >>> 1) master branch health checks (weekly, till branching)
>> >>>   - Expect every Monday a status update on various tests runs
>> >> See https://build.gluster.org/job/nightly-master/ for a report on
>> >> various nightly and periodic jobs on master.
>> > Thinking aloud, we may have to stop merges to master to get these test
>> > failures addressed at the earliest and to continue maintaining them
>> > GREEN for the health of the branch.
>> >
>> > I would give the above a week, before we lockdown the branch to fix the
>> > failures.
>> >
>> > Let's try and get line-coverage and nightly regression tests addressed
>> > this week (leaving mux-regression open), and if addressed not lock the
>> > branch down.
>> >
>>
>> Health on master as of the last nightly run [4] is still the same.
>>
>> Potential patches that rectify the situation (as in [1]) are bunched in
>> a patch [2] that Atin and myself have put through several regressions
>> (mux, normal and line coverage) and these have also not passed.
>>
>> Till we rectify the situation we are locking down master branch commit
>> rights to the following people, Amar, Atin, Shyam, Vijay.
>>
>> The intention is to stabilize master and not add more patches that may
>> destabilize it.
>>
>> Test cases that are tracked as failures and need action are present here
>> [3].
>>
>> @Nigel, request you to apply the commit rights change as you see this
>> mail and let the list know regarding the same as well.
>>
>> Thanks,
>> Shyam
>>
>> [1] Patches that address regression failures:
>> https://review.gluster.org/#/q/starredby:srangana%2540redhat.com
>>
>> [2] Bunched up patch against which regressions were run:
>> https://review.gluster.org/#/c/20637
>>
>> [3] Failing tests list:
>>
>> https://docs.google.com/spreadsheets/d/1IF9GhpKah4bto19RQLr0y_Kkw26E_-crKALHSaSjZMQ/edit?usp=sharing
>>
>> [4] Nightly run dashboard: https://build.gluster.org/job/nightly-master/

>
> Locking master is fine, this seems like there's been ample notice and
> conversation.
> Do we have test criteria to indicate when we're unlocking master? X amount
> of tests passing, Y amount of bugs?

The "till we rectify" might just include 3 days of the entire set of
tests passing - thinking out loud here.


-- 
sankarshan mukhopadhyay
<https://about.me/sankarshan.mukhopadhyay>


Re: [Gluster-devel] [Gluster-Maintainers] Release 5: Master branch health report (Week of 30th July)

2018-08-02 Thread Sankarshan Mukhopadhyay
On Thu, Aug 2, 2018 at 5:48 PM, Kotresh Hiremath Ravishankar
 wrote:
> I am facing a different issue on the softserve machines. The fuse mount
> itself is failing. I tried the day before yesterday to debug the geo-rep
> failures. I discussed with Raghu, but could not root-cause it. So none of
> the tests were passing. It happened on both machine instances I tried.
>
>

Ugh! The -infra team should have an issue to work with and resolve this.


-- 
sankarshan mukhopadhyay
<https://about.me/sankarshan.mukhopadhyay>


Re: [Gluster-devel] [Gluster-Maintainers] Release 5: Master branch health report (Week of 30th July)

2018-08-01 Thread Sankarshan Mukhopadhyay
On Thu, Aug 2, 2018 at 12:19 AM, Shyam Ranganathan  wrote:
> On 07/31/2018 12:41 PM, Atin Mukherjee wrote:
>> tests/bugs/core/bug-1432542-mpx-restart-crash.t - Times out even after
>> 400 secs. Refer
>> https://fstat.gluster.org/failure/209?state=2_date=2018-06-30_date=2018-07-31=all,
>> specifically the latest report
>> https://build.gluster.org/job/regression-test-burn-in/4051/consoleText .
>> Wasn't timing out as frequently as it was till 12 July. But since 27
>> July, it has timed out twice. Beginning to believe commit
>> 9400b6f2c8aa219a493961e0ab9770b7f12e80d2 has added the delay and now 400
>> secs isn't sufficient enough (Mohit?)
>
> The above test is the one that is causing line coverage to fail as well
> (mostly, say 50% of the time).
>
> I did have this patch up to increase timeouts and also ran a few rounds
> of tests, but results are mixed. It passes when run first, and later
> errors out in other places (although not timing out).
>
> See: https://review.gluster.org/#/c/20568/2 for the changes and test run
> details.
>

If I may ask - why are we always exploring the "increase timeout" part
of this? I understand that some tests may take longer - but 400s is
quite a non-trivial amount of time - what other, more efficient means
could we explore?

> The failure of this test in regression-test-burn-in run#4051 is strange
> again, it looks like the test completed within stipulated time, but
> restarted again post cleanup_func was invoked.
>
> Digging a little further the manner of cleanup_func and traps used in
> this test seem *interesting* and maybe needs a closer look to arrive at
> possible issues here.
>
> @Mohit, request you to take a look at the line coverage failures as
> well, as you handle the failures in this test.


-- 
sankarshan mukhopadhyay
<https://about.me/sankarshan.mukhopadhyay>


Re: [Gluster-devel] [Gluster-Maintainers] Release 5: Master branch health report (Week of 30th July)

2018-07-31 Thread Sankarshan Mukhopadhyay
On Tue, Jul 31, 2018 at 4:46 PM, Shyam Ranganathan  wrote:
> On 07/30/2018 03:21 PM, Shyam Ranganathan wrote:
>> On 07/24/2018 03:12 PM, Shyam Ranganathan wrote:
>>> 1) master branch health checks (weekly, till branching)
>>>   - Expect every Monday a status update on various tests runs
>>
>> See https://build.gluster.org/job/nightly-master/ for a report on
>> various nightly and periodic jobs on master.
>

This doesn't look like how things are expected to be.

> Thinking aloud, we may have to stop merges to master to get these test
> failures addressed at the earliest and to continue maintaining them
> GREEN for the health of the branch.
>
> I would give the above a week, before we lockdown the branch to fix the
> failures.
>

Is 1 week a sufficient estimate to address the issues?

> Let's try and get line-coverage and nightly regression tests addressed
> this week (leaving mux-regression open), and if addressed not lock the
> branch down.
>
>>
>> RED:
>> 1. Nightly regression (3/6 failed)
>> - Tests that reported failure:
>> ./tests/00-geo-rep/georep-basic-dr-rsync.t
>> ./tests/bugs/core/bug-1432542-mpx-restart-crash.t
>> ./tests/bugs/replicate/bug-1586020-mark-dirty-for-entry-txn-on-quorum-failure.t
>> ./tests/bugs/distribute/bug-1122443.t
>>
>> - Tests that needed a retry:
>> ./tests/00-geo-rep/georep-basic-dr-tarssh.t
>> ./tests/bugs/glusterd/quorum-validation.t
>>
>> 2. Regression with multiplex (cores and test failures)
>>
>> 3. line-coverage (cores and test failures)
>> - Tests that failed:
>> ./tests/bugs/core/bug-1432542-mpx-restart-crash.t (patch
>> https://review.gluster.org/20568 does not fix the timeout entirely, as
>> can be seen in this run,
>> https://build.gluster.org/job/line-coverage/401/consoleFull )
>>
>> Calling out to contributors to take a look at various failures, and post
>> the same as bugs AND to the lists (so that duplication is avoided) to
>> get this to a GREEN status.
>>
>> GREEN:
>> 1. cpp-check
>> 2. RPM builds
>>
>> IGNORE (for now):
>> 1. clang scan (@nigel, this job requires clang warnings to be fixed to
>> go green, right?)
>>
>> Shyam
-- 
sankarshan mukhopadhyay
<https://about.me/sankarshan.mukhopadhyay>


Re: [Gluster-devel] How long should metrics collection on a cluster take?

2018-07-25 Thread Sankarshan Mukhopadhyay
On Wed, Jul 25, 2018 at 11:53 PM, Yaniv Kaul  wrote:
>
>
> On Tue, Jul 24, 2018, 7:20 PM Pranith Kumar Karampuri 
> wrote:
>>
>> hi,
>>   Quite a few commands to monitor gluster at the moment take almost a
>> second to give output.
>> Some categories of these commands:
>> 1) Any command that needs to do some sort of mount/glfs_init.
>>  Examples: 1) heal info family of commands 2) statfs to find
>> space-availability etc (On my laptop replica 3 volume with all local bricks,
>> glfs_init takes 0.3 seconds on average)
>> 2) glusterd commands that need to wait for the previous command to unlock.
>> If the previous command is something related to lvm snapshot which takes
>> quite a few seconds, it would be even more time consuming.
>>
>> Nowadays container workloads have hundreds of volumes if not thousands. If
>> we want to serve any monitoring solution at this scale (I have seen
>> customers use upto 600 volumes at a time, it will only get bigger) and lets
>> say collecting metrics per volume takes 2 seconds per volume(Let us take the
>> worst example which has all major features enabled like
>> snapshot/geo-rep/quota etc etc), that will mean that it will take 20 minutes
>> to collect metrics of the cluster with 600 volumes. What are the ways in
>> which we can make this number more manageable? I was initially thinking may
>> be it is possible to get gd2 to execute commands in parallel on different
>> volumes, so potentially we could get this done in ~2 seconds. But quite a
>> few of the metrics need a mount or equivalent of a mount(glfs_init) to
>> collect different information like statfs, number of pending heals, quota
>> usage etc. This may lead to high memory usage as the size of the mounts tend
>> to be high.
>>
>> I wanted to seek suggestions from others on how to come to a conclusion
>> about which path to take and what problems to solve.
>
>
> I would imagine that in gd2 world:
> 1. All stats would be in etcd.
> 2. There will be a single API call for GetALLVolumesStats or something and
> we won't be asking the client to loop, or there will be a similar efficient
> single API to query and deliver stats for some volumes in a batch ('all
> bricks in host X' for example).
>

Single end point for metrics/monitoring was a topic that was not
agreed upon at <https://github.com/gluster/glusterd2/issues/538>

> Worth looking how it's implemented elsewhere in K8S.
>
> In any case, when asking for metrics I assume the latest already available
> would be returned and we are not going to fetch them when queried. This is
> both fragile (imagine an entity that doesn't respond well) and adds latency
> and will be inaccurate anyway a split second later.
>
> Y.
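
The core of the proposal above - amortizing the per-volume cost across parallel workers instead of paying it serially - can be sketched as follows. The per-volume cost and the function body are stand-ins for the real glfs_init/statfs/heal-info work; note also the mail's caveat that each real collection may hold a mount's worth of memory, which caps how wide the pool can safely be:

```python
import time
from concurrent.futures import ThreadPoolExecutor

VOLUMES = [f"vol{i}" for i in range(20)]
PER_VOLUME_COST = 0.05   # stand-in for the ~2s per-volume cost discussed above

def collect_metrics(volume):
    # Stand-in for glfs_init + statfs + pending-heal counts + quota usage.
    time.sleep(PER_VOLUME_COST)
    return volume, {"free_bytes": 0}

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=len(VOLUMES)) as pool:
    stats = dict(pool.map(collect_metrics, VOLUMES))
parallel = time.perf_counter() - start
# Serial cost would be len(VOLUMES) * PER_VOLUME_COST; the parallel run
# takes roughly one per-volume slot instead.
print(f"{len(stats)} volumes in {parallel:.2f}s")
```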



-- 
sankarshan mukhopadhyay
<https://about.me/sankarshan.mukhopadhyay>


Re: [Gluster-devel] How long should metrics collection on a cluster take?

2018-07-24 Thread Sankarshan Mukhopadhyay
On Tue, Jul 24, 2018 at 9:48 PM, Pranith Kumar Karampuri
 wrote:
> hi,
>   Quite a few commands to monitor gluster at the moment take almost a
> second to give output.

Is this at the (most) minimum recommended cluster size?

> Some categories of these commands:
> 1) Any command that needs to do some sort of mount/glfs_init.
>  Examples: 1) heal info family of commands 2) statfs to find
> space-availability etc (On my laptop replica 3 volume with all local bricks,
> glfs_init takes 0.3 seconds on average)
> 2) glusterd commands that need to wait for the previous command to unlock.
> If the previous command is something related to lvm snapshot which takes
> quite a few seconds, it would be even more time consuming.
>
> Nowadays container workloads have hundreds of volumes if not thousands. If
> we want to serve any monitoring solution at this scale (I have seen
> customers use upto 600 volumes at a time, it will only get bigger) and lets
> say collecting metrics per volume takes 2 seconds per volume(Let us take the
> worst example which has all major features enabled like
> snapshot/geo-rep/quota etc etc), that will mean that it will take 20 minutes
> to collect metrics of the cluster with 600 volumes. What are the ways in
> which we can make this number more manageable? I was initially thinking may
> be it is possible to get gd2 to execute commands in parallel on different
> volumes, so potentially we could get this done in ~2 seconds. But quite a
> few of the metrics need a mount or equivalent of a mount(glfs_init) to
> collect different information like statfs, number of pending heals, quota
> usage etc. This may lead to high memory usage as the size of the mounts tend
> to be high.
>

I am not sure that the "worst example" (it certainly is not) is a good
place to start from. That said, for any environment with that number of
disposable volumes, what kinds of metrics actually make any
sense/impact?

> I wanted to seek suggestions from others on how to come to a conclusion
> about which path to take and what problems to solve.
>
> I will be happy to raise github issues based on our conclusions on this mail
> thread.
>
> --
> Pranith
>





-- 
sankarshan mukhopadhyay
<https://about.me/sankarshan.mukhopadhyay>


Re: [Gluster-devel] [Gluster-Maintainers] Release 4.1: LTM release targeted for end of May

2018-03-13 Thread Sankarshan Mukhopadhyay
On Tue, Mar 13, 2018 at 1:05 PM, Pranith Kumar Karampuri
 wrote:
>
>
> On Tue, Mar 13, 2018 at 7:07 AM, Shyam Ranganathan 
> wrote:
>>
>> Hi,
>>
>> As we wind down on 4.0 activities (waiting on docs to hit the site, and
>> packages to be available in CentOS repositories before announcing the
>> release), it is time to start preparing for the 4.1 release.
>>
>> 4.1 is where we have GD2 fully functional and shipping with migration
>> tools to aid Glusterd to GlusterD2 migrations.
>>
>> Other than the above, this is a call out for features that are in the
>> works for 4.1. Please *post* the github issues to the *devel lists* that
>> you would like as a part of 4.1, and also mention the current state of
>> development.
>>
>> Further, as we hit end of March, we would make it mandatory for features
>> to have required spec and doc labels, before the code is merged, so
>> factor in efforts for the same if not already done.
>
>
> Could you explain the point above further? Is it just the label or the
> spec/doc
> that we need merged before the patch is merged?
>

I'll hazard a guess that the intent of the label is to indicate
availability of the doc. "Completeness" of code is being defined as
including specifications and documentation.

That said, I'll wait for Shyam to be more elaborate on this.


[Gluster-devel] Fwd: [CentOS-devel] ansible in CentOS Extras

2017-09-08 Thread Sankarshan Mukhopadhyay
I'm not sure if this has any material impact on Gluster. Would request
the Gluster project members who are also part of the CentOS Storage SIG
to confirm and/or respond to Karanbir.


-- Forwarded message --
From: Karanbir Singh <mail-li...@karan.org>
Date: Fri, Sep 8, 2017 at 4:08 AM
Subject: [CentOS-devel] ansible in CentOS Extras
To: "The CentOS developers mailing list." <centos-de...@centos.org>


hi,

https://git.centos.org/log/rpms!ansible.git/c7-extras

is now in CentOS-Extras/ - this is going to impact every SIG that uses
ansible in and from cbs.centos.org, or needs specific versions pin'd for
their roles ( paas / cloud sig etc )

is there anything the SIG's need to validate before this content hits
mirror.centos.org in a few days time ?

--
Karanbir Singh
+44-207-0999389 | http://www.karan.org/ | twitter.com/kbsingh
GnuPG Key : http://www.karan.org/publickey.asc

-- 
sankarshan mukhopadhyay
<https://about.me/sankarshan.mukhopadhyay>


Re: [Gluster-devel] Release 3.12: Glusto run status

2017-08-29 Thread Sankarshan Mukhopadhyay
On Wed, Aug 30, 2017 at 6:03 AM, Atin Mukherjee  wrote:
>
> On Wed, 30 Aug 2017 at 00:23, Shwetha Panduranga 
> wrote:
>>
>> Hi Shyam, we are already doing it. We wait for the rebalance status to be
>> complete. We loop, and keep checking if the status is complete, for '20'
>> minutes or so.
>
>
> Are you saying that in this test rebalance status was executed multiple times
> till it succeeded? If yes, then the test shouldn't have failed. Can I get
> access to the complete set of logs?

Would you not prefer to look at the specific test under discussion as well?


Re: [Gluster-devel] Glusterd2 - Some anticipated changes to glusterfs source

2017-08-17 Thread Sankarshan Mukhopadhyay
On Thu, Aug 17, 2017 at 12:46 PM, Kaushal M  wrote:
> On Thu, Aug 3, 2017 at 2:12 PM, Milind Changire  wrote:
>>
>>
>> On Thu, Aug 3, 2017 at 12:56 PM, Kaushal M  wrote:
>>>
>>> On Thu, Aug 3, 2017 at 2:14 AM, Niels de Vos  wrote:
>>> > On Wed, Aug 02, 2017 at 05:03:35PM +0530, Prashanth Pai wrote:
>>> >> Hi all,
>>> >>
>>> >> The ongoing work on glusterd2 necessitates following non-breaking and
>>> >> non-exhaustive list of changes to glusterfs source code:
>>> >>
>>> >> Port management
>>> >> - Remove hard-coding of glusterd's port as 24007 in clients and
>>> >> elsewhere.
>>> >>   Glusterd2 can be configured to listen to clients on any port (still
>>> >> defaults to
>>> >>   24007 though)
>>> >> - Let the bricks and daemons choose any available port and if needed
>>> >> report
>>> >>   the port used to glusterd during the "sign in" process. Prasanna has
>>> >> a
>>> >> patch
>>> >>   to do this.
>>> >> - Glusterd <--> brick (or any other local daemon) communication should
>>> >>   always happen over Unix Domain Socket. Currently glusterd and brick
>>> >>   process communicates over UDS and also port 24007. This will allow us
>>> >>   to set better authentication and rules for port 24007 as it shall
>>> >> only be
>>> >> used
>>> >>   by clients.
>>> >
>>> > I prefer this last point to be configurable. At least for debugging we
>>> > should be able to capture network traces and display the communication
>>> > in Wireshark. Defaulting to UNIX Domain Sockets is fine though.
>>>
>>> This is the communication between GD2 and bricks, of which there is
>>> not a lot happening, and not much to capture.
>>> But I agree, it will be nice to have this configurable.
>>>
>>
>> Could glusterd start attempting port binding at 24007 and progress on to
>> higher port numbers until successful and register the bound port number with
>> rpcbind ? This way the setup will be auto-configurable and admins need not
>> scratch their heads to decide upon one port number. Gluster clients could
>> always talk to rpcbind on the nodes to get glusterd service port whenever a
>> reconnect is required.
>
> 24007 has always been used as the GlusterD port. There was a plan to
> have it registered with IANA as well.
> Having a well defined port is useful to allow proper firewall rules to be 
> setup.

I seem to recall asking about this in another thread - is anyone
planning to follow through with the registration?
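
The "let the bricks choose any available port and report it during the sign-in" idea from the quoted proposal can be sketched as follows. This is a hypothetical model, not GD2 code: binding to port 0 lets the kernel pick any free port, and the chosen port is then reported back to a registry standing in for glusterd's port map:

```python
import socket

def start_brick_listener():
    """Bind to port 0 so the kernel assigns any free port, then
    discover which port was actually chosen."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.bind(("127.0.0.1", 0))   # 0 = let the kernel pick an ephemeral port
    sock.listen(1)
    _, port = sock.getsockname()
    return sock, port

def sign_in(registry, brick_id, port):
    # Stand-in for the brick -> glusterd "sign in" RPC that would carry
    # the chosen port; 'registry' models glusterd's port map.
    registry[brick_id] = port

portmap = {}
sock, port = start_brick_listener()
sign_in(portmap, "node1:/bricks/b1", port)
print(portmap)   # maps the brick id to whatever free port was chosen
sock.close()
```

The counterpoint raised in the thread still applies: ephemeral ports make firewall rules harder, which is why a well-known, IANA-registered port for glusterd itself remains attractive.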


Re: [Gluster-devel] Fwd: tendrl-release v1.5.0 (Milestone1) is available

2017-08-05 Thread Sankarshan Mukhopadhyay
On Sat, Aug 5, 2017 at 8:50 AM, Vijay Bellur  wrote:
> Thank you Rohan for letting us know about this release!
>
> I looked for screenshots of the dashboard and found a few at [1]. Are there
> more screenshots that you can share with us?
>

The Metrics page (URL below) was intended to track the metrics
available in the build Rohan announced. This is good feedback and yes,
we can add screenshots to it.

The Tendrl project is going to improve on the install+configure
experience by switching over to an Ansible based model, the current
set of steps will then be substantially reduced.

> [1] https://github.com/Tendrl/documentation/wiki/Metrics


Re: [Gluster-devel] Questions on github

2017-05-09 Thread Sankarshan Mukhopadhyay
On Tue, May 9, 2017 at 8:50 PM, Niels de Vos <nde...@redhat.com> wrote:

[much snipping]

> A but related to this, but is Stack Overflow not *the* place to ask and
> answer questions? There even is a "glusterfs" tag, and questions and
> answers can be marked even better than with GitHub (imho):
>
> https://stackoverflow.com/questions/tagged/glusterfs
>

Alright. The intended outcome (from how I comprehend this conversation
thread) is that developers/maintainers desire to respond more
(quickly/efficiently) to queries from users in the community. That
outcome is achieved when the developers/maintainers/those-who-know
respond more often. The tooling or medium is not the crucial part here.
However, it becomes critical when there are far too many avenues and
too little attention on a regular basis. *That* is a far from ideal
situation.

-- 
sankarshan mukhopadhyay
<https://about.me/sankarshan.mukhopadhyay>


Re: [Gluster-devel] Questions on github

2017-05-09 Thread Sankarshan Mukhopadhyay
On Tue, May 9, 2017 at 8:21 PM, Pranith Kumar Karampuri
<pkara...@redhat.com> wrote:
> People who respond on gluster-users do this already. Github issues is a
> better tool to do this (Again IMHO). It is a bit easier to know what
> questions are still open to be answered with github. Even after multiple
> responses on gluster-users mail you won't be sure if it is answered or not,
> so one has to go through the responses. Where as on github we can close it
> once answered. So these kinds of things are the reason for asking this.

Even with GitHub Issues, Gluster will not see a reduction of traffic
to -users/-devel asking questions about the releases. What will happen
for a (short?) while is that the project will have two inbound avenues
to look for questions in and respond to. Over a period of time, if
extreme focus and diligence are adopted, the traffic on the mailing
lists *might* move over to GitHub Issues. There are obvious advantages
to using Issues. But there are also a small number of must-do things to
manage this approach.


-- 
sankarshan mukhopadhyay
<https://about.me/sankarshan.mukhopadhyay>


Re: [Gluster-devel] Questions on github

2017-05-09 Thread Sankarshan Mukhopadhyay
On Tue, May 9, 2017 at 7:21 PM, Pranith Kumar Karampuri
<pkara...@redhat.com> wrote:
>
>
> On Tue, May 9, 2017 at 7:09 PM, Sankarshan Mukhopadhyay
> <sankarshan.mukhopadh...@gmail.com> wrote:
>>
>> On Tue, May 9, 2017 at 3:18 PM, Amar Tumballi <atumb...@redhat.com> wrote:
>> > I personally prefer github questions than mailing list, because a valid
>> > question can later become a reason for a new feature. Also, as you said,
>> > we
>> > can 'assignee' a question and if we start with bug triage we can also
>> > make
>> > sure at least we respond to questions which is pending.
>>
>> Is the on-going discussion in this thread about using
>> <https://github.com/gluster/glusterfs/issues> as a method to have
>> questions and responses from the community?
>
>
> yeah, this is something users have started doing. It seems better than mail
> (at least to me).
>

There is a trend of projects moving to GitHub Issues as a
medium/platform for responding to queries. As Amar mentions earlier in
the thread and Shyam implies - this requires constant (i.e. daily)
vigil and attention. If those are in place, it is a practical move.


Re: [Gluster-devel] Questions on github

2017-05-09 Thread Sankarshan Mukhopadhyay
On Tue, May 9, 2017 at 3:18 PM, Amar Tumballi <atumb...@redhat.com> wrote:
> I personally prefer github questions than mailing list, because a valid
> question can later become a reason for a new feature. Also, as you said, we
> can 'assignee' a question and if we start with bug triage we can also make
> sure at least we respond to questions which is pending.

Is the on-going discussion in this thread about using
<https://github.com/gluster/glusterfs/issues> as a method to have
questions and responses from the community?



-- 
sankarshan mukhopadhyay
<https://about.me/sankarshan.mukhopadhyay>


Re: [Gluster-devel] Events API new Port requirement

2016-09-08 Thread Sankarshan Mukhopadhyay
On Fri, Sep 9, 2016 at 3:55 AM, Niels de Vos <nde...@redhat.com> wrote:
> On Thu, Sep 08, 2016 at 10:03:03PM +0530, Sankarshan Mukhopadhyay wrote:
>> On Sun, Aug 28, 2016 at 2:13 PM, Niels de Vos <nde...@redhat.com> wrote:
>> > This definitely has my preference too. I've always wanted to try to
>> > register port 24007/8, and maybe the time has come to look into it.
>>
>> Has someone within the project previously undertaken the process of
>> requesting the IANA for assignment of a new service name and port
>> number value?
>
> Not that I know. I think Amar had an interest several years ago, but
> I've never seen any requests or results.
>
> There are some Red Hat colleagues that might have experience with this.
> I can ask and see if they can provide any guidance. But if there is
> someone very interested and eager to request these ports, please go
> ahead (and CC me on the communications).
>

If you can reach out and obtain the necessary information, it would be
a good first step.


-- 
sankarshan mukhopadhyay
<https://about.me/sankarshan.mukhopadhyay>


Re: [Gluster-devel] Events API new Port requirement

2016-09-08 Thread Sankarshan Mukhopadhyay
On Sun, Aug 28, 2016 at 2:13 PM, Niels de Vos <nde...@redhat.com> wrote:
> This definitely has my preference too. I've always wanted to try to
> register port 24007/8, and maybe the time has come to look into it.

Has someone within the project previously undertaken the process of
requesting the IANA for assignment of a new service name and port
number value?


-- 
sankarshan mukhopadhyay
<https://about.me/sankarshan.mukhopadhyay>


Re: [Gluster-devel] Post mortem of features that didn't make it to 3.9.0

2016-09-06 Thread Sankarshan Mukhopadhyay
On Wed, Sep 7, 2016 at 5:10 AM, Pranith Kumar Karampuri
<pkara...@redhat.com> wrote:

>  Do you think it makes sense to do post-mortem of features that didn't
> make it to 3.9.0? We have some features that missed deadlines twice as well,
> i.e. planned for 3.8.0 and didn't make it and planned for 3.9.0 and didn't
> make it. So maybe we are adding features to the roadmap without thinking things
> through? Basically it leads to frustration in the community who are waiting
> for these components and they keep moving to next releases.

Doing a post-mortem to understand which pieces went well (so that we
can continue doing them), which didn't go well (so that we can learn
from them), and which were impediments (so that we can address and
remove them) is a useful exercise.

> Please let me know your thoughts. Goal is to get better at planning and
> deliver the features as planned as much as possible. Native subdirectoy
> mounts is in same situation which I was supposed to deliver.
>
> I have the following questions we need to ask ourselves the following
> questions IMO:

Incident-based post-mortems require a timeline. While that might be
unnecessary here, the questions are perhaps too specific. Also, it
would be good to set out the expectation from the exercise - what
would all the inputs lead to?

> 1) Did we have approved design before we committed the feature upstream for
> 3.9?
> 2) Did we allocate time for execution of this feature upstream?
> 3) Was the execution derailed by any of the customer issues/important work
> in your organization?
> 4) Did developers focus on something that is not of priority which could
> have derailed the feature's delivery?
> 5) Did others in the team suspect the developers are not focusing on things
> that are of priority but didn't communicate?
> 6) Were there any infra issues that delayed delivery of this
> feature(regression failures etc)?
> 7) Were there any big delays in reviews of patches?
>
> Do let us know if you think we should ask more questions here.
>
> --
> Aravinda & Pranith



-- 
sankarshan mukhopadhyay
<https://about.me/sankarshan.mukhopadhyay>


Re: [Gluster-devel] Regular Performance Regression Testing

2016-08-29 Thread Sankarshan Mukhopadhyay
On Mon, Aug 29, 2016 at 9:16 PM, Vijay Bellur <vbel...@redhat.com> wrote:
> I would also recommend running perf-test.sh [1] for regression.
>

Would it be useful to have this script maintained as part of the
Gluster organization? Improvements/changes could perhaps be more
easily tracked.

> [1] https://github.com/avati/perf-test/blob/master/perf-test.sh




-- 
sankarshan mukhopadhyay
<https://about.me/sankarshan.mukhopadhyay>


Re: [Gluster-devel] Backup support for GlusterFS

2016-08-19 Thread Sankarshan Mukhopadhyay
On Fri, Aug 19, 2016 at 2:53 PM, Alok Srivastava <asriv...@redhat.com> wrote:
> the proposed approach is based on NDMP + Snapshots, hence it's not a
> one-size-fits-all approach. However, making use of the snapshots will
> ensure that a point-in-time copy is migrated and the in-flight directories
> are also accessible to the clients connected to the source.

Perhaps this is when the developers of snapshot feature would need to
chime in on the possible sequence of things to be done.


-- 
sankarshan mukhopadhyay
<https://about.me/sankarshan.mukhopadhyay>


Re: [Gluster-devel] [gluster-devel] Documentation Tooling Review

2016-08-12 Thread Sankarshan Mukhopadhyay
On Fri, Aug 12, 2016 at 1:53 AM, Amye Scavarda <a...@redhat.com> wrote:

[snip]

> I'm happy to take comments on this proposal. Over the next week, I'll be
> reviewing the level of effort it would take to migrate to ASCIIdocs and
> ASCIIbinder, with the goal being to have this in place by end of September.

This is a good start.


-- 
sankarshan mukhopadhyay
<https://about.me/sankarshan.mukhopadhyay>


Re: [Gluster-devel] Proposing a framework to leverage existing Python unit test standards for our testing

2016-08-02 Thread Sankarshan Mukhopadhyay
On Sat, Jul 30, 2016 at 5:26 AM, Jonathan Holloway <jhollo...@redhat.com> wrote:

[snip]

> I've posted several videos on YouTube at the following links.
> There are eight sections and then a combined full-length (really full)
> video.
> There is a little bit of Unit Test covered in "3. Using Glusto Overview",
> but the "8. Running Unit Tests" shows more depth (sample PyUnit format w/
> some PyTest, Gluster runs-on and reuse-setup example, filtering test runs,
> etc.).
> If you're looking to skip around, you might start with "1. Intro, 2. Using
> Glusto Overview, and "8. Running Unit Tests"--then pick and choose from
> there.

I finally had a bit of time to go through the videos in sequence. And
I'd like to put out a note of thanks for compiling them for everyone
in the project to take a look at.

-- 
sankarshan mukhopadhyay
<https://about.me/sankarshan.mukhopadhyay>


Re: [Gluster-devel] performance issues Manoj found in EC testing

2016-06-27 Thread Sankarshan Mukhopadhyay
On Mon, Jun 27, 2016 at 2:38 PM, Manoj Pillai <mpil...@redhat.com> wrote:
> Thanks, folks! As a quick update, throughput on a single client test jumped
> from ~180 MB/s to 700+MB/s after enabling client-io-threads. Throughput is
> now more in line with what is expected for this workload based on
> back-of-the-envelope calculations.

Is it possible to provide additional detail about this exercise in
terms of setup; tests executed; data sets generated?
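For readers joining the thread: the client-side option Manoj refers to is
toggled per volume via the `gluster` CLI. A hedged sketch (the volume name
`r2` is a placeholder; confirm the exact option name on your release with
`gluster volume set help`):

```
# Enable client-side io-threads for volume "r2", then re-run the workload:
gluster volume set r2 performance.client-io-threads on

# The option should now appear under "Options Reconfigured":
gluster volume info r2
```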

-- 
sankarshan mukhopadhyay
<https://about.me/sankarshan.mukhopadhyay>


Re: [Gluster-devel] [Gluster-Maintainers] Release Management Process change - proposal

2016-05-29 Thread Sankarshan Mukhopadhyay
On Mon, May 30, 2016 at 7:07 AM, Vijay Bellur <vbel...@redhat.com> wrote:
> Since we do not have any objections to this proposal, let us do the
> following for 3.7.12:
>
> 1. Treat June 1st as the cut-off for patch acceptance in release-3.7.
> 2. I will tag 3.7.12rc1 on June 2nd.
> 3. All maintainers to ack content and stability of components by June 9th.
> 4. Release 3.7.12 around June 9th after we have all acks in place.

Would 3 days be sufficient to get the content for the website and a
bit of PR written up?

-- 
sankarshan mukhopadhyay
<https://about.me/sankarshan.mukhopadhyay>


Re: [Gluster-devel] Idea: Alternate Release process

2016-05-29 Thread Sankarshan Mukhopadhyay
>>>>>> Do you like the idea? Let me know what you guys think.
>>>>>>
>>>>> This reduces the number of versions that we need to maintain,
>>>>> which I like.
>>>>> Having official test (beta) releases should help get features
>>>>> out to
>>>>> testers hand faster,
>>>>> and get quicker feedback.
>>>>>
>>>>> One thing that's still not quite clear to is the issue of backwards
>>>>> compatibility.
>>>>> I'm still thinking it thorough and don't have a proper answer to
>>>>> this yet.
>>>>> Would a new release be backwards compatible with the previous
>>>>> release?
>>>>> Should we be maintaining compatibility with LTS releases with the
>>>>> latest release?
>>>>
>>>> Each LTS release will have a separate list of features to be
>>>> enabled. If we make any breaking changes (which are not backward
>>>> compatible) then it will affect LTS releases as you mentioned.
>>>> But we should not break compatibility unless it is a major version
>>>> change like 4.0. I have to work out how we can handle backward
>>>> incompatible changes.
>>>>
>>>>> With our current strategy, we at least have a long term release
>>>>> branch,
>>>>> so we get some guarantees of compatibility with releases on the
>>>>> same branch.
>>>>>
>>>>> As I understand the proposed approach, we'd be replacing a stable
>>>>> branch with the beta branch.
>>>>> So we don't have a long-term release branch (apart from LTS).
>>>>
>>>> Stable branch is common for LTS releases also. Builds will be
>>>> different using different list of features.
>>>>
>>>> Below example shows stable release once in 6 weeks, and two LTS
>>>> releases in 6 months gap(3.8 and 3.12)
>>>>
>>>> LTS 1 : 3.8    3.8.1   3.8.2   3.8.3   3.8.4   3.8.5 ...
>>>> LTS 2 :                                3.12    3.12.1 ...
>>>> Stable: 3.8    3.9     3.10    3.11    3.12    3.13 ...
>>>>>
>>>>> A user would be upgrading from one branch to another for every
>>>>> release.
>>>>> Can we sketch out how compatibility would work in this case?
>>>>
>>>> Users will not upgrade from one branch to another branch. If a user is
>>>> interested in the stable channel then they upgrade once in 6 weeks. (Same
>>>> as a minor update in the current release style)
>>>>>
>>>>>
>>>>> This approach work well for projects like Chromium and Firefox,
>>>>> single
>>>>> system apps
>>>>>   which generally don't need to be compatible with the previous
>>>>> release.
>>>>> I don't understand how the Rust  project uses this (I am yet to
>>>>> read
>>>>> the linked blog post),
>>>>> as it requires some sort of backwards compatibility. But it too
>>>>> is a
>>>>> single system app,
>>>>> and doesn't have the compatibility problems we face.
>>>>>
>>>>> Gluster is a distributed system, that can involve multiple
>>>>> different
>>>>> versions interacting with each other.
>>>>> This is something we need to think about.
>>>>
>>>> I need to think about compatibility. What new problems with
>>>> compatibility does this approach bring compared to our existing
>>>> release plan?
>>>>>
>>>>>
>>>>> We could work out some sort of a solution for this though.
>>>>> It might be something very obvious I'm missing right now.
>>>>>
>>>>> ~kaushal
>>>>>
>>>>>> --
>>>>>> regards
>>>>>> Aravinda

-- 
sankarshan mukhopadhyay
<https://about.me/sankarshan.mukhopadhyay>


Re: [Gluster-devel] Suggest a feature redirect

2016-05-24 Thread Sankarshan Mukhopadhyay
On Wed, May 18, 2016 at 2:43 PM, Amye Scavarda <a...@redhat.com> wrote:

> Said another way, if you wanted to be able to have someone contribute a
> feature idea, what would be the best way?
> Bugzilla? A google form? An email into the -devel list?
>

Whether it is a new BZ/RFE, a line item on a spreadsheet (via a form),
or an email to the list - the incoming bits would need to be reviewed
and decided upon. A path which enables the Gluster community to
showcase, in a public manner, the received ideas and how they have
enabled features in the product would be the best thing to do.

The individual contributing the feature idea needs to know that
something was done with it and that it didn't just go to /dev/null.


-- 
sankarshan mukhopadhyay
<https://about.me/sankarshan.mukhopadhyay>


Re: [Gluster-devel] Reducing the size of the glusterdocs git repository

2016-05-12 Thread Sankarshan Mukhopadhyay
On Thu, May 12, 2016 at 3:55 PM, Kaushal M <kshlms...@gmail.com> wrote:
> If required we could just host the presentations on download.gluster.org.
> I've seen it being used to host resources for tutorials previously
> (like disk images),

I'd put forward the notion that download.gluster.org should ideally
remain for binaries (install-ready objects) rather than presentations.
There are specialized web properties for presentations and video which
can be better ways to do this.


-- 
sankarshan mukhopadhyay
<https://about.me/sankarshan.mukhopadhyay>


[Gluster-devel] Weekly Community Meeting [was:Re: [Gluster-users] Show and Tell sessions for Gluster 4.0]

2016-05-10 Thread Sankarshan Mukhopadhyay
On Tue, May 10, 2016 at 3:22 PM, Niels de Vos <nde...@redhat.com> wrote:
> The weekly community meetins are visited well, if the discussions there
> are not productive/important enough to have every week, I would suggest
> to use that slot.

I changed the subject to avoid topics being enmeshed. The weekly
meetings on Wednesdays do have good participation (better than the bug
triage ones on Tuesdays). I was wondering if that strength of
participation can be used to think over the topics/focus of the
meeting. Often the meetings on Wednesdays (the "community meeting")
bring higher weightage to action items and status updates thereof. To
an extent it is like a weekly standup of all those involved in a
release. Given the recent set of emails from those who are helping us
test new features (sharding etc.) or from specific operators (I borrow
the phrase from OpenStack), it could be a good idea to create space
for their participation - whether in the form of thinking over the
agenda or of times which suit such participants.


-- 
sankarshan mukhopadhyay
<https://about.me/sankarshan.mukhopadhyay>


Re: [Gluster-devel] Show and Tell sessions for Gluster 4.0

2016-05-09 Thread Sankarshan Mukhopadhyay
On Mon, May 9, 2016 at 11:55 AM, Atin Mukherjee <amukh...@redhat.com> wrote:
> 1. We use the weekly community meeting slot once in a month
> or 2. Have a dedicated separate slot (probably @ 12:00 UTC, last
> Thursday of each month)

[2] would be nicer to set up a calendar for.


-- 
sankarshan mukhopadhyay
<https://about.me/sankarshan.mukhopadhyay>


Re: [Gluster-devel] Google Summer of Code Application opens Feb 8th - Call For Mentors

2016-02-13 Thread Sankarshan Mukhopadhyay
On Thu, Feb 11, 2016 at 10:38 PM, Kaushal M <kshlms...@gmail.com> wrote:
> On Thu, Feb 11, 2016 at 6:41 PM, André Bauer <aba...@magix.net> wrote:
>> I already asked myself why this doesn't exist.
>>
>> so... +1 from me.
>>
>> Regards
>> André
>>
>> On 04.02.2016 at 16:43, Niels de Vos wrote:
>>>
>>> How about a project to write a fuse client based on libgfapi and
>>> libfuse instead? That would reduce the need for our fuse-bridge
>>> over time. libgfapi should be ready for this, Samba, NFS-Ganesha
>>> and others already use it heavily.
>>>
>
> +1 from me too. But we need a mentor for this. Niels, would you be a
> willing mentor?

If this is the only project idea that exists, the organization's GSoC
application would be somewhat weak. Organizations are usually expected
to come up with a number (more than one) of ideas, each of which is
also attached to (at least) one mentor. More at
<https://flossmanuals.net/GSoCMentoring/making-your-ideas-page/> (as a
manner of guidance).


-- 
sankarshan mukhopadhyay
<https://about.me/sankarshan.mukhopadhyay>

Re: [Gluster-devel] removing "Target Release" from GlusterFS bug reports.

2015-11-24 Thread Sankarshan Mukhopadhyay
On Tue, Nov 24, 2015 at 6:16 PM, Kaleb KEITHLEY <kkeit...@redhat.com> wrote:
> As discussed during today's Bug Triage meeting it is proposed to remove
> the Target Release from all GlusterFS bug reports.
>
> This field is apparently not used by anyone, and it's not described in
> the Bug Triage process at
> http://gluster.readthedocs.org/en/latest/Contributors-Guide/Bug-Triage/
>
> Any objections?

Has it ever been used to add context to a filed bug?


-- 
sankarshan mukhopadhyay
<https://about.me/sankarshan.mukhopadhyay>


Re: [Gluster-devel] Troubleshooting and Diagnostic tools for Gluster

2015-10-27 Thread Sankarshan Mukhopadhyay
On Mon, Oct 26, 2015 at 7:04 PM, Shyam <srang...@redhat.com> wrote:
> Older idea on this was, to consume the logs and filter based on the message
> IDs for those situations that can be remedied. The logs are hence the point
> where the event for consumption is generated.
>
> Also, as the higher-level abstraction uses these logs, it can *watch* based
> on message-ID filters that are of interest to it, rather than parse the log
> message entirely to gain insight into the issue.

Are all situations usually atomic? Is it expected to have specific
mapping between an event recorded in a log from one part of an
installed system to a possible symptom? Or, do a collection of events
lead up to an observed failure (which, in turn, is recorded as a
series of events on the logs)?
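To make the message-ID idea in this thread concrete: a watcher only needs to
pick out the structured ID, not parse the whole line. A minimal sketch - the
`[MSGID: ...]` token mirrors the log format of contemporary GlusterFS
releases, and the sample log lines are synthetic, not taken from a real run:

```python
import re

# GlusterFS log lines embed a structured message ID, e.g.
#   ... E [MSGID: 108006] [afr-common.c:...] 0-vol-replicate-0: ...
# (format assumed here for illustration).
MSGID_RE = re.compile(r"\[MSGID:\s*(\d+)\]")

def watch(lines, interesting):
    """Yield (msgid, line) for lines whose message ID is in `interesting`,
    without parsing the rest of the message."""
    for line in lines:
        match = MSGID_RE.search(line)
        if match and int(match.group(1)) in interesting:
            yield int(match.group(1)), line

# Two synthetic log lines to exercise the filter.
sample_log = [
    "[2015-10-26 13:00:01.0] I [MSGID: 100030] [glusterfsd.c:2035:main] "
    "0-glusterfs: started",
    "[2015-10-26 13:00:07.1] E [MSGID: 108006] [afr-common.c:4163:afr_notify] "
    "0-vol-replicate-0: all subvolumes are down",
]
hits = list(watch(sample_log, interesting={108006}))
```

Only the second line survives the filter; a remediation layer could map a
known ID to a runbook entry without ever touching the free-form message text.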


-- 
sankarshan mukhopadhyay
<https://about.me/sankarshan.mukhopadhyay>


Re: [Gluster-devel] Troubleshooting and Diagnostic tools for Gluster

2015-10-23 Thread Sankarshan Mukhopadhyay
On Fri, Oct 23, 2015 at 4:16 PM, Aravinda <avish...@redhat.com> wrote:
> Initial idea for Tools Framework:
> -
> A Shell/Python script which looks for the tool in the plugins sub-directory
> and, if it exists, passes all the arguments on and calls that script.
>
> `glustertool help` triggers a Python script plugins/help.py which reads the
> plugins.yml file to get the list of tools and the help messages associated
> with them.
>
> No restrictions on the choice of programming language to create
> tool. It can be bash, Python, Go, Rust, awk, sed etc.
>
> Challenges:
> - Each plugin may have different dependencies, installing all tools
> may install all the dependencies.
> - Multiple programming languages, may be difficult to maintain/build.
> - Maintenance of Third party tools.
> - Creating Plugins registry to discover tools created by other developers.

Diagnostics and remediation become important when a higher-level
abstraction (e.g. a management construct for Gluster deployments) is
involved. What are your thoughts on such frameworks being able to
consume the logs, identify the possible issues and recommend a
fix/solution? Does this proposal anticipate such a progression?



-- 
sankarshan mukhopadhyay
<https://about.me/sankarshan.mukhopadhyay>


Re: [Gluster-devel] gluster readdocs unaccessible

2015-10-14 Thread Sankarshan Mukhopadhyay
On Wed, Oct 14, 2015 at 2:03 PM, Avra Sengupta <aseng...@redhat.com> wrote:
> I am unable to access gluster.readdocs.org . Is anyone else facing the same
> issue.

<https://gluster.readthedocs.org/en/latest/> ?


-- 
sankarshan mukhopadhyay
<https://about.me/sankarshan.mukhopadhyay>


Re: [Gluster-devel] question about how to handle bugs filed against End-Of-Life versions of glusterfs

2015-09-16 Thread Sankarshan Mukhopadhyay
[for reference]

On Wed, Sep 16, 2015 at 6:35 PM, Kaleb S. KEITHLEY <kkeit...@redhat.com> wrote:
> As an example, Fedora simply closes any remaining open bugs when the
> version reaches EOL. It's incumbent on the person who filed the bug to
> reopen it if it still exists in newer versions.

<https://fedoraproject.org/wiki/BugZappers/HouseKeeping> - All bugs
for EOL releases are automatically closed on the EOL date after
providing a warning in the bug comments, 30 days prior to EOL.

<https://fedoraproject.org/wiki/BugZappers/StockBugzillaResponses#End_of_Life_.28EOL.29_product>
- The bug is reported against a version of Fedora that is no longer
maintained. Thank you for your bug report. We are sorry, but the Fedora
Project is no longer releasing bug fixes or any other updates for this
version of Fedora. This bug will be set to CLOSED:WONTFIX to reflect
this, but please reopen it if the problem persists after upgrading to
the latest version of Fedora, which is available from:
http://fedoraproject.org/get-fedora





-- 
sankarshan mukhopadhyay
<https://about.me/sankarshan.mukhopadhyay>


[Gluster-devel] [Cross-posted] Re: Gentle Reminder.. (Was: GlusterFS Documentation Improvements - An Update)

2015-06-23 Thread Sankarshan Mukhopadhyay
On Mon, Jun 22, 2015 at 5:24 PM, Shravan Chandrashekar
<schan...@redhat.com> wrote:
> We would like to finalize on the documentation contribution workflow by
> 26th June 2015. As we have not yet received any comments/suggestions, we
> will confirm the recommended workflow after 26th June.
>
> Kindly provide your suggestion on how we can improve this workflow.

There are a couple of aspects which need to be quickly looked through.

(a) a write-up of somewhat detail providing an overview of the new
workflow; how contributors can participate; reviewing mechanism for
patches against documentation; merge and release paths/cadence

(b) at http://review.gluster.org/#/c/11129/ Niels has a comment
about the design of structures used in the code, and how he thinks
that it is appropriate if it stays part of the sources and does not
move out.

He also says: "For example, I would like to document some of the memory
layout structures and functions, but this documentation will include
source-code comments and a .txt or .md file with additional details.
Splitting that makes it more difficult to keep in sync."

In this particular example, I'd probably say that it would be better
that such documentation is also part of the docs repo. It lends itself
to re-use as and when required (this particular example seems re-use
friendly).

I'd request that this switch-over to the new workflow and repositories
go ahead with the "absolute" documentation content. Examples/cases
like the one Niels mentions above can be resolved via discussion and
should probably not block the switch.

> Currently, MediaWiki is read-only. We have ported most of the documents
> from MediaWiki to the new repository [1]. If you find any document which
> is not ported, feel free to raise this by opening an issue in [2], or, if
> you would like to port your documents, send a pull request.



> [1] https://github.com/gluster/glusterdocs
> [2] https://github.com/gluster/glusterdocs/issues




-- 
sankarshan mukhopadhyay
https://about.me/sankarshan.mukhopadhyay


Re: [Gluster-devel] GlusterFS Documentation Improvements - An Update

2015-06-03 Thread Sankarshan Mukhopadhyay
On Wed, May 27, 2015 at 7:48 PM, Humble Devassy Chirammal
<humble.deva...@gmail.com> wrote:
> Whats changing for community members?
>
> A very simplified contribution workflow.
>
> - How to Contribute?
>
> Contributing to the documentation requires a github account. To edit on
> github, fork the repository (see top-right of the screen, under your
> username). You will then be able to make changes easily. Once done, you can
> create a pull request and get the changes reviewed and merged into the
> official repository.
> With this simplified workflow, the documentation is no longer maintained in
> the gluster/glusterfs/docs directory but has a new, elevated status in the
> form of a new project: gluster/glusterdocs
> (https://github.com/gluster/glusterdocs), and currently this project is
> being maintained by Anjana Sriram, Shravan and Humble.

Thanks for writing this up. Can this workflow/sequence also be reached
from the docs repo itself? It would perhaps be required to put this up
as a "How to contribute" page.

> - What to Contribute
>
> Really, anything that you think has value to the GlusterFS developer
> community. While reading the docs you might find something incorrect or
> outdated. Fix it! Or maybe you have an idea for a tutorial, or for a topic
> that isn’t covered to your satisfaction. Create a new page and write it up!

+ FAQs and Common Issues/Known Issues etc

> Whats Next?
>
> Since the GlusterFS documentation has a new face-lift, MediaWiki will no
> longer be editable but will only be in READ ONLY view mode. Hence, all the
> work-in-progress design notes which were maintained on MediaWiki will be
> ported to the GitHub repository and placed in the Feature Plans folder. So,
> when you want to upload your work-in-progress documents you must do a pull
> request after the changes are made. This outlines the change in workflow as
> compared to MediaWiki.

It would be of benefit to have the read-only mode turned on. This
would help avoid 'split-brains' and also create a single incoming path
for content that needs to be maintained as part of the documentation
community.

> A proposal:
>
> Another way is to maintain work-in-progress documents in Google Docs (or
> any other collaborative editing tool) and link them as index entries in the
> Feature Plans page on GitHub. This can be an excellent way to track a
> document through multiple rounds of collaborative editing in real time.

This is a sound idea as well :)


-- 
sankarshan mukhopadhyay
https://about.me/sankarshan.mukhopadhyay