Meeting date: 05/02/2018 (May 2nd, 2018), 19:30 IST, 14:00 UTC, 10:00 EDT
BJ Link
* Bridge: https://bluejeans.com/205933580
* Download:
Attendance
* Raghavendra M (Raghavendra Bhat), Kaleb, Atin, Amar, Nithya, Rafi, Shyam
Agenda
* Commitment (GPLv2 Cure)
Most of my responses seem to indicate that we should stick to the
current logging mechanism, so let me clarify :)
1) Having the message ID as one of the KV pairs of the message helps keep this
as an invariant, and also helps similarly worded messages disambiguate
themselves using the message ID
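As a rough illustration of point 1 (a standalone sketch, not Gluster's actual
logging API; the log_msg function and the message-ID constants are
hypothetical), the message ID travels as just another KV pair in the log line,
so two similarly worded messages stay distinguishable by ID alone:

    #include <stdarg.h>
    #include <stdio.h>
    #include <time.h>

    /* Hypothetical message-ID constants; real IDs would be allocated from
     * reserved, per-component ranges. */
    #define MSGID_LAYOUT_MISMATCH 109005
    #define MSGID_LAYOUT_MISSING  109006

    /* Sketch of a structured logger: the message ID is emitted as one more
     * KV pair ("msgid=..."), keeping it an invariant of every message. */
    static void log_msg(int msgid, const char *level, const char *fmt, ...)
    {
        va_list ap;

        fprintf(stderr, "[%ld] level=%s msgid=%d msg=\"",
                (long)time(NULL), level, msgid);
        va_start(ap, fmt);
        vfprintf(stderr, fmt, ap);
        va_end(ap);
        fprintf(stderr, "\"\n");
    }

    int main(void)
    {
        /* Two similarly worded messages; the msgid KV pair disambiguates them. */
        log_msg(MSGID_LAYOUT_MISMATCH, "WARNING", "layout problem on %s", "/brick1");
        log_msg(MSGID_LAYOUT_MISSING,  "WARNING", "layout problem on %s", "/brick2");
        return 0;
    }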
Hi Niels,
1) One of the ways to get this done would be to _somehow_ have a query that
lists patches where the maintainer's component files are touched. This would
catch all changes pertaining to the component.
2) Other than this, there could be some repercussions to a component due to
other comm
Hi,
One of the recent problems posted on gluster-devel, regarding the ability to
view core dumps that occur during regression testing for patch acceptance, was
discussed here [1].
Towards this, the core collection is now modified to collect the system libraries
that were used by the executable that c
> From: "Xavier Hernandez"
> On Monday 30 June 2014 16:18:09 Shyamsundar Ranganathan wrote:
> > > Will this "rebalance on access" feature be enabled always or only during
> > > a brick addition/removal to move files that do not go to
> From: "Xavier Hernandez"
>
> Hi Shyam,
>
> On Thursday 26 June 2014 14:41:13 Shyamsundar Ranganathan wrote:
> > It also touches upon a rebalance-on-access-like mechanism where we could
> > potentially move data out of existing bricks to a newer brick fast
Wanted to add a different angle to the thought process around data-classified
volumes.
One of the reasons for classifying data (be it tiering or others, like mapping
high-profile users to high-profile storage backends) is to deal with its (i.e.,
the data's) protection differently.
With th
Hi,
A feature page for improved rebalance performance has been put up here:
http://www.gluster.org/community/documentation/index.php/Features/improve_rebalance_performance
This mail is a request for comments and further ideas in this regard.
In short, this aims at improving the existing rebalance perf
Anders,
Please find the modified patch, to be applied on master, for the SGID bit
propagation issue: https://bugzilla.redhat.com/show_bug.cgi?id=1110262
Other comments inline.
> > DHT winds a call to mkdir as a part of the dht_selfheal_directory (in
> > dht_selfheal_dir_mkdir where it winds a cal
> > > For the short term, wouldn't it be OK to disallow adding a number of
> > > bricks that is not a multiple of the group-size?
> >
> > In the *very* short term, yes. However, I think that will quickly
> > become an issue for users who try to deploy erasure coding because those
> > group sizes will be quite la
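A minimal sketch of that short-term check (hypothetical names, not the actual
glusterd add-brick validation): reject an add-brick request whose brick count
is not a whole multiple of the volume's group size.

    #include <stdio.h>

    /* Hypothetical validation: only allow adding bricks in whole multiples
     * of the group size (e.g. the erasure-coding group width). */
    static int validate_add_brick(int group_size, int new_brick_count)
    {
        if (group_size <= 0 || new_brick_count <= 0)
            return -1;
        if (new_brick_count % group_size != 0) {
            fprintf(stderr, "add-brick rejected: %d brick(s) is not a multiple "
                    "of the group size %d\n", new_brick_count, group_size);
            return -1;
        }
        return 0;
    }

    int main(void)
    {
        validate_add_brick(6, 4);  /* rejected: 4 is not a multiple of 6 */
        validate_add_brick(6, 12); /* accepted */
        return 0;
    }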
Hi Anders,
There are multiple problems that I see in the test provided; here is an answer
to one of them and the reason why it occurs. It does get into the code and
functions a bit, but the bottom line is that, on one code path, the setattr that
DHT does misses setting the SGID bit, causing the problem
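To illustrate the behaviour being described (a standalone sketch using plain
POSIX calls, not the DHT code path itself): a subdirectory created under an
SGID parent inherits the bit, and a later setattr-style chmod that writes only
the plain permission bits silently drops it.

    #include <stdio.h>
    #include <sys/stat.h>
    #include <sys/types.h>
    #include <unistd.h>

    int main(void)
    {
        const char *parent = "/tmp/sgid_parent";
        const char *child  = "/tmp/sgid_parent/child";
        struct stat st;

        mkdir(parent, 0755);
        chmod(parent, 02755);            /* parent carries the SGID bit */

        /* On Linux, a directory created under an SGID parent inherits the
         * parent's group and the SGID bit. */
        mkdir(child, 0755);

        /* A chmod that carries only the 0777 permission bits (analogous to a
         * setattr that forgets the SGID bit) clears S_ISGID on the child. */
        stat(child, &st);
        chmod(child, st.st_mode & 0777); /* should have been st.st_mode & 07777 */

        stat(child, &st);
        printf("child SGID bit after chmod: %s\n",
               (st.st_mode & S_ISGID) ? "set" : "lost");

        rmdir(child);
        rmdir(parent);
        return 0;
    }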
- Original Message -
> From: "Anders Blomdell"
> To: "Niels de Vos"
> Cc: "Shyamsundar Ranganathan", "Gluster Devel", "Susant Palai"
>
> Sent: Tuesday, June 24, 2014 4:09:52 AM
> Subject: Re: [Gluste
KP,
One way to view relevant information from the core would be as follows:
1) Before loading the core, you need to tell gdb to look for shared objects in a
different path,
- using 'set solib-search-path'
- For us this translates to 'set solib-search-path
./lib:./lib/glusterfs:./lib/glusterfs/3.
You may be looking at the problem being fixed here [1].
On a lookup, attribute mismatches were not being healed across directories, and
this patch attempts to address that. Currently, the version of the patch
does not heal the S_ISUID and S_ISGID bits, which is work in progress (but easy
enough
Awesome! Congratulations folks.
Shyam
On Mon, May 5, 2014 at 7:15 AM, Vijay Bellur wrote:
> On 04/30/2014 09:28 AM, Vijay Bellur wrote:
>
>>
>> We plan to update gerrit to provide access to sub-maintainers by
>> end of this week (i.e. 4th May). If you have any objections, concerns or
>>