Re: [Gluster-devel] Release 3.10 feature proposal : Gluster Block Storage CLI Integration

2016-12-13 Thread Prasanna Kumar Kalever
On 16-12-14 07:43:05, Niels de Vos wrote:
> On Fri, Dec 09, 2016 at 11:28:52AM +0530, Prasanna Kalever wrote:
> > Hi all,
> > 
> > As we know, gluster block storage creation and maintenance is not simple
> > today, as it involves all the manual steps mentioned at [1].
> > To make these basic operations simple, we would like to integrate the
> > block story with the gluster CLI.
> > 
> > As part of it, we would like to introduce the following commands:
> > 
> > # gluster block create 
> > # gluster block modify   
> > # gluster block list
> > # gluster block delete 
> 
> I am not sure why this needs to be done through the Gluster CLI.
> Creating a file on a (how to select?) volume, and then export that as a
> block device through tcmu-runner (iSCSI) seems more like a task similar
> to what libvirt does with VM images.

Maybe not exactly, but similar.

> 
> Would it not be more suitable to make this part of whatever tcmu admin
> tools are available? I assume tcmu needs to address this, with similar
> configuration options for LVM and other backends too. Building on top of
> that may give users of tcmu a better experience.

s/tcmu/tcmu-runner/

I don't think there are separate tools/utils for tcmu-runner as of now.
Also, currently we are using tcmu-runner to export the file in the
gluster volume as an iSCSI block device; in the future we may move to
qemu-tcmu (which does the same job as tcmu-runner, except that it uses
the qemu gluster driver) for benefits like snapshots?

Also, configuring and running tcmu-runner on each node in the cluster
for multipathing is not easy (take the case where we have more than a
dozen nodes). If we can do this via the gluster CLI, then with one
simple command from any node we can configure and run tcmu-runner on
all the nodes.
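
For illustration only, a rough Python sketch of the kind of per-node fan-out
such a CLI command would hide is below. It is not the proposed implementation;
the node names, the ssh-based run_on_node() helper and the per-node setup
command are placeholders invented for the example.

    # Illustrative sketch only: fan out a per-node setup step, the way a single
    # "gluster block create" could do it behind the scenes. NODES and SETUP_CMD
    # are placeholders, not the proposed design.
    import subprocess

    NODES = ["node1", "node2", "node3"]          # nodes that should export the LUN
    SETUP_CMD = "systemctl start tcmu-runner"    # stand-in for the real per-node steps

    def run_on_node(node, command):
        # Run a shell command on a remote node over ssh; return its exit status.
        return subprocess.call(["ssh", node, command])

    def configure_all_nodes():
        # Collect the nodes where the setup step failed.
        return [node for node in NODES if run_on_node(node, SETUP_CMD) != 0]

    if __name__ == "__main__":
        bad = configure_all_nodes()
        if bad:
            print("setup failed on: " + ", ".join(bad))
        else:
            print("all %d nodes configured" % len(NODES))

The point is simply that this orchestration is mechanical, which is exactly
what a single CLI entry point could take care of.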

> 
> If you can add such a consideration in the feature page, I'd appreciate
> it. Maybe other approaches have been discussed earlier as well? In that
> case, those approaches should probably be added too.

Sure!

--
Prasanna

> 
> Thanks,
> Niels
> 
> 
> > 
> > 
> > [1]  
> > https://pkalever.wordpress.com/2016/06/23/gluster-solution-for-non-shared-persistent-storage-in-docker-container/
> > 
> > 
> > Thanks,
> > --
> > Prasanna
> > ___
> > Gluster-devel mailing list
> > Gluster-devel@gluster.org
> > http://www.gluster.org/mailman/listinfo/gluster-devel


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Release 3.10 feature proposal : Gluster Block Storage CLI Integration

2016-12-13 Thread Niels de Vos
On Fri, Dec 09, 2016 at 11:28:52AM +0530, Prasanna Kalever wrote:
> Hi all,
> 
> As we know, gluster block storage creation and maintenance is not simple
> today, as it involves all the manual steps mentioned at [1].
> To make these basic operations simple, we would like to integrate the
> block story with the gluster CLI.
> 
> As part of it, we would like to introduce the following commands:
> 
> # gluster block create 
> # gluster block modify   
> # gluster block list
> # gluster block delete 

I am not sure why this needs to be done through the Gluster CLI.
Creating a file on a (how to select?) volume, and then export that as a
block device through tcmu-runner (iSCSI) seems more like a task similar
to what libvirt does with VM images.

Would it not be more suitable to make this part of whatever tcmu admin
tools are available? I assume tcmu needs to address this, with similar
configuration options for LVM and other backends too. Building on top of
that may give users of tcmu a better experience.

If you can add such a consideration in the feature page, I'd appreciate
it. Maybe other approaches have been discussed earlier as well? In that
case, those approaches should probably be added too.

Thanks,
Niels


> 
> 
> [1]  
> https://pkalever.wordpress.com/2016/06/23/gluster-solution-for-non-shared-persistent-storage-in-docker-container/
> 
> 
> Thanks,
> --
> Prasanna
> ___
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-devel


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] Release 3.10 feature proposal : Volume expansion on tiered volumes.

2016-12-13 Thread Hari Gowtham
Hi,

As per the discussion, we will have tier as a service in 3.10, and the rest
of the work, such as add-brick, rebalance support and remove-brick, will be
continued in mainline.

Each will be implemented one after the other.

- Original Message -
> From: "Dan Lambright" 
> To: "Shyam" 
> Cc: "Hari Gowtham" , "gluster-devel" 
> 
> Sent: Thursday, December 8, 2016 11:54:33 PM
> Subject: Re: [Gluster-devel] Release 3.10 feature proposal : Volume expansion 
> on tiered volumes.
> 
> 
> 
> - Original Message -
> > From: "Shyam" 
> > To: "Hari Gowtham" , "gluster-devel"
> > 
> > Sent: Thursday, December 8, 2016 7:35:27 AM
> > Subject: Re: [Gluster-devel] Release 3.10 feature proposal : Volume
> > expansion on tiered volumes.
> > 
> > Hi Hari,
> > 
> > Thanks for posting this issue to be considered part of 3.10.
> > 
> > I have a few questions inline.
> > 
> > Shyam
> > 
> > On 12/08/2016 01:23 AM, Hari Gowtham wrote:
> > > Hi,
> > >
> > > To support add/remove brick on tiered volumes we are planning to separate
> > > the tier into a separate process in the service framework and add the
> > > add/remove brick support. Later, users will be able to spawn rebalance
> > > on tiered volumes (which is not possible today).
> > 
> > I assume tier as a separate process is from the rebalance daemon
> > perspective, right? Or, is it about separating the xlator code from DHT?
> > 
> > Also, Dan would like your comments as Tier maintainer, on the maturity
> > of the below proposal for 3.10 inclusion? Could you also add the
> > required labels [2] to the issue as you see fit, and if this passes your
> > inspection, then let us know and I can mark it for 3.10 milestone in
> > github.
> 
> The first part of this project "tier as a service" can probably get into
> 3.10. I will discuss a bit more with Hari and the glusterd team to confirm
> the entire feature will make it.
> 
> > 
> > >
> > > The following are the steps planed to be performed:
> > >
> > > *) tier as a service (final stages of code review)
> > 
> > Can we get links to the code, and also the design spec if available, for
> > the above (and possibly as a whole)
> > 
> > > *) we are separating the attach tier from add brick and detach from
> > >remove brick.
> > > *) infra to support add/remove brick.
> > > *) rebalance process on a tiered volume.
> > > *) a few patches to take care of the issues that will arise,
> > >eg: while adding a brick on a tiered volume, the tier process has to
> > >be stopped as the graph switch occurs, and other issues like this.
> > >
> > > The whole volume expansion will be in an experimental state, while the
> > > separation of tier into a separate service framework and the attach/detach
> > > tier separation from add/remove brick should be back to a stable state
> > > before the release of 3.10.
> > 
> > What is the mitigation plan in case this does not get stable? Would you
> > have all commits in ready but not merged state till it is stable?
> > 
> > This looks like a big change, and also something that has been going on
> > for some time now, based on your comments above.
> > 
> > >
> > > [1] https://github.com/gluster/glusterfs/issues/54
> > [2] https://github.com/gluster/glusterfs/labels
> > ___
> > Gluster-devel mailing list
> > Gluster-devel@gluster.org
> > http://www.gluster.org/mailman/listinfo/gluster-devel
> > 
> 

-- 
Regards, 
Hari. 

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Release 3.10 feature proposal : Gluster Block Storage CLI Integration

2016-12-13 Thread Prasanna Kalever
On Mon, Dec 12, 2016 at 11:57 PM, Shyam  wrote:
> Prasanna,
>
> When can the design be ready for review? I ask this as feature completion
> for 3.10 is slated around 17th Jan, 2017.

Shyam, I am currently working on a libvirt bug in its gluster driver.
Hopefully I will wind that up by the end of the week, and hence will be
able to update the design doc sometime early next week.

>
> Based on the above, it would be good to close the design reviews by end of
> Dec (or very early Jan), so that we just deal with code later.

Sure. I'll definitely align to the plan.

Thanks,
--
Prasanna

>
> Let us know your plans, and what help you may need.
>
> Thanks,
> Shyam
>
>
> On 12/09/2016 03:28 AM, Prasanna Kalever wrote:
>>
>> On Fri, Dec 9, 2016 at 1:28 PM, Atin Mukherjee 
>> wrote:
>>>
>>> I'd like to see more details on the feature page about it. Currently it
>>> merely talks about the CLI semantics and nothing else.
>>
>>
>> Sorry, but please do not expect much in the way of design details now, as
>> the design phase is still not complete. Once the block storage team
>> settles on the design, one of us will definitely update all the
>> required information.
>>
>> --
>> Prasanna
>>
>>>
>>> On Fri, Dec 9, 2016 at 12:38 PM, Prasanna Kalever 
>>> wrote:


 Feature Page at https://github.com/gluster/glusterfs-specs/pull/10

 --
 Prasanna

 On Fri, Dec 9, 2016 at 11:28 AM, Prasanna Kalever 
 wrote:
>
> Hi all,
>
> As we know, gluster block storage creation and maintenance is not simple
> today, as it involves all the manual steps mentioned at [1].
> To make these basic operations simple, we would like to integrate the
> block story with the gluster CLI.
>
> As part of it, we would like to introduce the following commands:
>
> # gluster block create 
> # gluster block modify   
> # gluster block list
> # gluster block delete 
>
>
> [1]
>
> https://pkalever.wordpress.com/2016/06/23/gluster-solution-for-non-shared-persistent-storage-in-docker-container/
>
>
> Thanks,
> --
> Prasanna

 ___
 Gluster-devel mailing list
 Gluster-devel@gluster.org
 http://www.gluster.org/mailman/listinfo/gluster-devel
>>>
>>>
>>>
>>>
>>>
>>> --
>>>
>>> ~ Atin (atinm)
>>
>> ___
>> Gluster-devel mailing list
>> Gluster-devel@gluster.org
>> http://www.gluster.org/mailman/listinfo/gluster-devel
>>
>
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Release 3.10 feature proposal:: Statedump for libgfapi

2016-12-13 Thread Rajesh Joseph
On Mon, Dec 12, 2016 at 11:34 PM, Shyam  wrote:
> On 12/12/2016 12:26 AM, Niels de Vos wrote:
>>
>> On Fri, Dec 09, 2016 at 06:20:22PM +0530, Rajesh Joseph wrote:
>>>
>>> Gluster should have some provision to take statedump of gfapi
>>> applications.
>>>
>>> https://bugzilla.redhat.com/show_bug.cgi?id=1169302
>>
>>
>> A part of this feature should be to find out how applications that use
>> libgfapi expect to trigger debugging like this. Doing a statedump from
>> the gluster-cli should not be the main/only option. I agree that it
>> helps developers that work on Gluster, but we can not expect users to
>> trigger statedumps like that.
>>
>> I think there would be a huge benefit in having an option to communicate
>> with libgfapi through some minimal form of local IPC. It will allow
>> doing statedumps, and maybe even set/get configuration options for
>> applications that do not offer these in their usage (yet).
>>
>> The communication should be as simple and stable as possible. This could
>> be the only working interface towards getting something done inside
>> gfapi (worst case scenario). There is no need to have this a full
>> featured interface, possibly a named pipe (fifo) where libgfapi is the
>> reader is sufficient. A simple (text) command written to it can create
>> statedumps and eventually other files on request.
>>
>> Enabling/disabling or even selecting the possibilities for debugging
>> could be configured through new functions in libgfapi, and even
>> environment variables.
>>
>> What do others think? Would this be useful?
>
>
> Would this be useful, yes.
>
> Would this work in cases like a container deployment, where such debugging
> may be sought at scale? Maybe not. I prefer options here, and also the
> ability to drive this from the storage admin perspective, i.e. the
> server/brick end of things, identifying the client/connection against which
> we need the statedump and getting that information over.
>

We were thinking of something along the same lines, where statedumps can be
initiated through glusterd by an admin. The option mentioned by Niels is
also helpful, but it means we would either have to provide some tool, or
the application would have to make some amount of changes to use this
feature.
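
For illustration, below is a minimal sketch of the named-pipe idea described
above. It is generic application-side Python, not a libgfapi interface; the
pipe path and the dump_state() helper are assumptions made for the example.

    # Minimal sketch of a FIFO-based debug trigger, as suggested above.
    # Generic illustration only; PIPE_PATH and dump_state() are assumptions
    # and nothing here is a real libgfapi interface.
    import os
    import threading

    PIPE_PATH = "/tmp/myapp-debug.fifo"     # assumed location of the control pipe

    def dump_state():
        # Stand-in for "write a statedump": dump whatever internal state the
        # application wants into a file.
        with open("/tmp/myapp-statedump.txt", "w") as f:
            f.write("statedump requested\n")

    def debug_listener():
        if not os.path.exists(PIPE_PATH):
            os.mkfifo(PIPE_PATH)
        while True:
            # Opening a FIFO for reading blocks until a writer shows up.
            with open(PIPE_PATH) as pipe:
                for line in pipe:
                    if line.strip() == "statedump":
                        dump_state()

    if __name__ == "__main__":
        t = threading.Thread(target=debug_listener, daemon=True)
        t.start()
        t.join()    # a real application would do its normal work here instead

    # An admin could then trigger a dump with:
    #   echo statedump > /tmp/myapp-debug.fifo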

> I guess the answer here is, this should not be the only option, but we
> can/should have other options as you describe above.
>

Makes sense.

Thanks & Regards,
Rajesh
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] 1402538 : Assertion failure during rebalance of symbolic links

2016-12-13 Thread Raghavendra Gowdappa


- Original Message -
> From: "Pranith Kumar Karampuri" 
> To: "Ashish Pandey" 
> Cc: "Gluster Devel" , "Shyam Ranganathan" 
> , "Nithya Balachandran"
> , "Xavier Hernandez" , 
> "Raghavendra Gowdappa" 
> Sent: Tuesday, December 13, 2016 9:29:46 PM
> Subject: Re: 1402538 : Assertion failure during rebalance of symbolic links
> 
> On Tue, Dec 13, 2016 at 2:45 PM, Ashish Pandey  wrote:
> 
> > Hi All,
> >
> > We have been seeing an issue where rebalancing symbolic links leads to an
> > assertion failure in an EC volume.
> >
> > The root cause is that while migrating symbolic links to another
> > subvolume, it creates a link file (with attributes .T).
> > This file is a regular file.
> > Now, during migration a setattr comes to this link and, because of a possible
> > race, posix_stat returns the stats of this "T" file.
> > In ec_manager_setattr, we receive callbacks and check the type of the entry. If
> > it is a regular file we try to get its size, and if it is not there, we raise an
> > assert.
> > So, basically we are checking the size of the link (which will not have a
> > size) which has been returned as a regular file, and we end up asserting when
> > this condition becomes TRUE.
> >
> > Now, this looks like a problem with rebalance and is difficult to fix at
> > this point (as per the discussion).
> > We have an alternative to fix it in EC, but that would be more of a hack
> > than an actual fix. We should not modify EC
> > to deal with an individual issue which is in another translator.

I am afraid DHT doesn't have a better way of handling this. While DHT 
maintains the abstraction (of a symbolic link) to the layers above, the layers 
below it cannot be shielded from seeing details like a linkto file etc. If the 
concern really is that the file changes its type within the span of a single 
fop, we can probably explore the option of locking (or other synchronization 
mechanisms) to prevent migration from taking place while a fop is in progress. 
But I assume there will be performance penalties for that too.

> >
> > Now the question is how to proceed with this? Any suggestions?
> >
> 
> Raghavendra/Nithya,
>  Could one of you explain the difficulties in fixing this issue in
> DHT so that Xavi will also be caught up with why we should add this change
> in EC in the short term.
> 
> 
> >
> > Details on this bug can be found here -
> > https://bugzilla.redhat.com/show_bug.cgi?id=1402538
> >
> > 
> > Ashish
> >
> >
> >
> >
> 
> 
> --
> Pranith
> 
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Release 3.10 feature proposal: gfapi fix memory leak during graph switch

2016-12-13 Thread Rajesh Joseph
On Mon, Dec 12, 2016 at 11:42 PM, Shyam  wrote:
> On 12/09/2016 08:06 AM, Rajesh Joseph wrote:
>>
>> During a graph switch we do not perform much cleanup on the
>> old graph, leading to memory leaks.
>>
>> + Inode table of old graph needs cleanup.
>>- Fix inode leaks
>>- Fix forget of each xl to free inode ctx properly
>> + The xl objects itself (xlator_t)
>> + The mem_accnt structure in every xl object.
>>   - Fix all the leaks so that the ref count of mem_accnt structure is
>> 0
>> + Implement fini() in every xlator
>>
>> We would need support from all the other xlator owners to bring down the
>> memory leaks in each xlator.
>
>
> Rajesh, I have marked this for 3.10 tentatively. As you call out a per
> xlator task, could you elaborate what is needed from others?


Basically, we need a proper fini() implementation in all xlators. That means
every xlator must free up all the resources it allocated between init() and fini().
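
As a rough analogy of that contract (illustrative Python only; real xlators
are C code and the Translator class below is a made-up stand-in): whatever
init() acquires, fini() must release.

    # Analogy only: the init()/fini() contract described above, i.e. fini()
    # must undo every allocation init() made. Translator is a stand-in class,
    # not real xlator code.
    class Translator(object):
        def init(self):
            self.resources = []
            self.resources.append(open("/dev/null", "w"))   # stands in for fds, tables
            self.resources.append(bytearray(4096))          # stands in for memory

        def fini(self):
            # Release everything init() acquired, in reverse order, and drop
            # the references so nothing survives a graph switch.
            while self.resources:
                res = self.resources.pop()
                if hasattr(res, "close"):
                    res.close()

    xl = Translator()
    xl.init()
    xl.fini()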

>
> We need to possibly get ack's from the various component maintainers so that
> this can happen for 3.10. So if you could share a list of things that need
> to be done/checked for each component and drive the same, in terms of
> timelines, we could get this feature complete.
>

We have most of the big-ticket items, but each xlator must come up with its
own list of leaks. Maybe for this release we can target only the most
frequently used xlators. I will update the list on the GitHub project
planning card.

Thanks & Regards,
Rajesh
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Question about EC locking

2016-12-13 Thread jayakrishnan mm
Thanks Xavier, for making it clear.
Regards
JK

On Dec 13, 2016 3:52 PM, "Xavier Hernandez"  wrote:

Hi JK,


On 12/13/2016 08:34 AM, jayakrishnan mm wrote:

> Dear Xavi,
>
> How do I test the locks, for example the locks for a write fop? I have two
> clients (independent), both trying to write to the same file.
>
>
> 1. According to my understanding, both can successfully write if the
> offsets don't overlap. I mean, the WRITE FOP takes a chunk lock on the
> file. As long as the clients don't try to write to the same chunk, it
> should be OK. If no locks are present, it can lead to inconsistency.
>

With locks all writes will be fine as defined by posix (i.e. the final
result will be equivalent to the sequential execution of both operations,
though in an undefined order), even if they overlap. Without locks, there
are chances that some bricks execute the operations in one order and the
remaining bricks execute the same operations in the reverse order, causing
data corruption.
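
A tiny illustration of that last point (plain Python, no Gluster code
involved): if two bricks apply the same pair of overlapping writes in
opposite orders, their contents diverge.

    # Two "bricks" apply the same two overlapping writes, but in opposite
    # orders; without a lock to enforce a single order they end up different.
    def apply_write(data, offset, payload):
        buf = bytearray(data)
        buf[offset:offset + len(payload)] = payload
        return bytes(buf)

    initial = b"AAAAAAAA"
    w1 = (2, b"xxxx")    # write 1 at offset 2
    w2 = (4, b"yyyy")    # write 2 at offset 4, overlapping write 1

    brick1 = apply_write(apply_write(initial, *w1), *w2)   # order: w1 then w2
    brick2 = apply_write(apply_write(initial, *w2), *w1)   # order: w2 then w1

    print(brick1)                # b'AAxxyyyy'
    print(brick2)                # b'AAxxxxyy'
    assert brick1 != brick2      # the copies no longer agree: data corruption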



>
> 2. Different FOPs can always run simultaneously (for example a WRITE and a
> READ FOP, or two READ FOPs).
>

All fops can be executed concurrently. If there's any chance that two
operations could interfere, locks are taken in the appropriate places. For
example, reads cannot be merged with overlapping writes. Otherwise they
could return inconsistent data.



> 3. A WRITE and some metadata FOP (like setattr) together. They cannot happen
> together with locks, even though the chances are very low.
>

As in 2, if there's any possible interference, the appropriate locks will
be taken.

You can look at the code to see which locks are taken for each fop. See the
corresponding ec_manager_() function, in the EC_STATE_LOCK switch
case. There you will see calls to ec_lock_prepare_xxx() for each taken lock.

Xavi


> Pls. clarify.
>
> Best regards
> JK
>
>
>
> On Wed, Nov 30, 2016 at 5:49 PM, jayakrishnan mm
> <jayakrishnan...@gmail.com> wrote:
>
> Hi Xavier,
>
> Thank you very much for your explanation. This helped  me to
> understand  more  about  locking in EC.
>
> Best Regards
> JK
>
>
> On Mon, Nov 28, 2016 at 4:17 PM, Xavier Hernandez
> <xhernan...@datalab.es> wrote:
>
> Hi,
>
> On 11/28/2016 02:59 AM, jayakrishnan mm wrote:
>
> Hi Xavier,
>
> Notice that the EC xlator uses blocking locks. Any specific
> reason for this?
>
>
> In a distributed filesystem like gluster a synchronization
> mechanism is a must to avoid data corruption.
>
>
> Do you think this will affect the performance?
>
>
> Of course the need for locks has a performance impact, and we
> cannot avoid them to guarantee data integrity. However some
> optimizations have been applied, specially the eager locking
> which allows a lock to be reused without unlocking/locking again.
>
>
> (In comparison, AFR first tries non-blocking locks and, if not
> successful, then tries blocking locks.)
>
>
> EC also tries a non-blocking lock first.
>
>
> Also, why are two locks needed per FOP? One for normal
> I/O and
> another for self-healing?
>
>
> The only fop that currently needs two locks is 'rename', and
> only when source and destination directories are different. All
> other fops only take one lock at most.
>
> Best regards,
>
> Xavi
>
>
> Best regards
> JK
>
>
> ___
> Gluster-devel mailing list
> Gluster-devel@gluster.org 
> http://www.gluster.org/mailman/listinfo/gluster-devel
> 
>
>
>
>
>
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

[Gluster-devel] semantics about OPEN in NFS

2016-12-13 Thread jin deng
Hello,
After reading the "nfs" xlator, I am doing some statistics work with
"debug/io-stats" and found that only open/create/mkdir call
ios_inode_ctx_set. That means other operations, such as a readv on an inode
not created during the lifetime of this glusterfs process, will not be
counted towards the throughput etc., so what is the purpose of only
setting the ctx in open/create/mkdir?
By the way, regarding "open": this is not a semantic
defined by the NFS RFC, yet I see that many xlators do
define such an operation, while the "nfs" xlator does not call any "open"
operation on its subvolumes. I am wondering when and
where open will be called?
Thanks.


-- 
Sincerely,
DengJin
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] 1402538 : Assertion failure during rebalance of symbolic links

2016-12-13 Thread Pranith Kumar Karampuri
On Tue, Dec 13, 2016 at 2:45 PM, Ashish Pandey  wrote:

> Hi All,
>
> We have been seeing an issue where rebalancing symbolic links leads to an
> assertion failure in an EC volume.
>
> The root cause is that while migrating symbolic links to another
> subvolume, it creates a link file (with attributes .T).
> This file is a regular file.
> Now, during migration a setattr comes to this link and, because of a possible
> race, posix_stat returns the stats of this "T" file.
> In ec_manager_setattr, we receive callbacks and check the type of the entry. If
> it is a regular file we try to get its size, and if it is not there, we raise an
> assert.
> So, basically we are checking the size of the link (which will not have a
> size) which has been returned as a regular file, and we end up asserting when
> this condition becomes TRUE.
>
> Now, this looks like a problem with rebalance and is difficult to fix at
> this point (as per the discussion).
> We have an alternative to fix it in EC, but that would be more of a hack
> than an actual fix. We should not modify EC
> to deal with an individual issue which is in another translator.
>
> Now the question is how to proceed with this? Any suggestions?
>

Raghavendra/Nithya,
 Could one of you explain the difficulties in fixing this issue in
DHT so that Xavi will also be caught up with why we should add this change
in EC in the short term.


>
> Details on this bug can be found here -
> https://bugzilla.redhat.com/show_bug.cgi?id=1402538
>
> 
> Ashish
>
>
>
>


-- 
Pranith
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] Feature proposal for 3.10 release: Support to retrieve maximum supported op-version

2016-12-13 Thread Samikshan Bairagya



On 12/12/2016 11:52 PM, Shyam wrote:

Samikshan,

Request that a spec page be opened for the same, so that reviews and
discussions can happen against that.



Hi. I have opened a related spec page here: 
http://review.gluster.org/#/c/16118/. Reviews and feedback are welcome.


Thanks,
Samikshan


Thanks,
Shyam

On 12/08/2016 12:22 PM, Samikshan Bairagya wrote:

Hi,

Currently there is no way to know the maximum op-version that is
supported in a heterogeneous cluster. If this were made possible, it would
help users know the maximum op-version to which the cluster could be
bumped up.

The minimum of the maximum op-versions supported by each node gives
the maximum op-version the cluster can support. The idea is to
use the gluster volume get interface as follows to retrieve this value:

# gluster volume get all cluster.max-op-version
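
For illustration, the value being asked for is just the minimum across nodes;
a toy Python version follows (the per-node numbers are made up):

    # Toy illustration: the cluster-wide maximum op-version is the minimum of
    # the maximum op-versions supported by each node. Values are made up.
    node_max_op_versions = {
        "node1": 31000,    # e.g. a 3.10 node
        "node2": 30900,    # e.g. a 3.9 node still in the cluster
        "node3": 31000,
    }

    cluster_max_op_version = min(node_max_op_versions.values())
    print(cluster_max_op_version)    # 30900: the cluster cannot be bumped past this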

I feel this could be a useful feature to include in the 3.10 release,
and a related issue [1] is open on GitHub for the same.

[1] https://github.com/gluster/glusterfs/issues/56

Thanks and Regards,

Samikshan
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] MINUTES: Gluster Community Bug Triage meeting Dec 12th

2016-12-13 Thread Hari Gowtham
 Roll call  (hgowtham, 12:00:08)
  * agenda: https://public.pad.fsfe.org/p/gluster-bug-triage  (hgowtham,
12:00:15)

* Next week’s meeting host  (hgowtham, 12:03:10)
  * ankitraj to host the bug triage on 20th December  (hgowtham,
12:07:25)

* Action items  (hgowtham, 12:07:53)

* jiffin will try to add an error for bug ownership to check-bugs.py
  (hgowtham, 12:08:07)
  * ACTION: ndevos need to decide on how to provide/use debug builds
(hgowtham, 12:08:29)

* Group Triage  (hgowtham, 12:11:08)
  * http://www.gluster.org/community/documentation/index.php/Bug_triage
(hgowtham, 12:11:26)
  * ACTION: jiffin will try to add an error for bug ownership to
check-bugs.py  (hgowtham, 12:11:41)

* Open Floor  (hgowtham, 12:21:22)

Meeting ended at 12:23:44 UTC.




Action Items

* ndevos need to decide on how to provide/use debug builds
* jiffin will try to add an error for bug ownership to check-bugs.py




Action Items, by person
---
* jiffin
  * jiffin will try to add an error for bug ownership to check-bugs.py
* **UNASSIGNED**
  * ndevos need to decide on how to provide/use debug builds




People Present (lines said)
---
* hgowtham (35)
* jiffin (5)
* ankitraj (4)
* kkeithley (4)
* zodbot (3)
* Saravanakmr (3)
* sanoj (1)



- Forwarded Message -
> From: "Hari Gowtham" 
> To: gluster-devel@gluster.org
> Sent: Tuesday, December 13, 2016 4:31:22 PM
> Subject: REMINDER: Gluster Community Bug Triage meeting (Today)
> 
> Hi all,
> 
> The weekly Gluster bug triage is about to take place in one hour.
> 
> Meeting details:
> - location: #gluster-meeting on Freenode IRC
> ( https://webchat.freenode.net/?channels=gluster-meeting )
> - date: every Tuesday
> - time: 12:00 UTC
> (in your terminal, run: date -d "12:00 UTC")
> - agenda: https://public.pad.fsfe.org/p/gluster-bug-triage
> 
> Currently the following items are listed:
> * Roll Call
> * Status of last week's action items
> * Group Triage
> * Open Floor
> 
> Appreciate your participation.
> 
> --
> Regards,
> Hari.
> 
> 

-- 
Regards, 
Hari. 

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] Release 3.10 feature proposal: Disable creation of trash directory by default

2016-12-13 Thread Shyam

On 12/13/2016 04:30 AM, Anoop C S wrote:

On Mon, 2016-12-12 at 11:44 -0500, Shyam wrote:

Anoop,

I have summarily marked this for 3.10, but have a few requests here,

1) Can we open a spec for this?


We do have an accepted design spec under glusterfs-specs, namely 'Trash
Improvements' [2], in which this particular proposal is already listed and
detailed. Keeping that in mind, we thought of not extracting part of it to
bring up another design doc. Would that be enough, or do we really need an
exclusive spec file?


No, reusing the one below is sufficient.



[2] 
https://github.com/gluster/glusterfs-specs/blob/master/accepted/Trash-Improvements.md


2) We possibly need to understand backward compatibility issues/concerns,
if any. I.e., existing volumes would already have created .trashcan etc.,
and how does this change impact those volumes?


It is backward compatible in the sense that it does not break the promise
given to consumers of the trash feature before upgrading to a newer version.
Moreover, we are making it easier for those users who don't require this
feature on existing volumes by giving them an option to delete the trash
directory if it is not needed. And of course, for new volumes we no longer
create the trash directory by default, which makes it more convenient for
end users.

That being said, we will have to document this change of behavior in the
release notes for sure.



Thanks,
Shyam

On 12/10/2016 10:02 AM, Anoop C S wrote:

Hi all,

As per the current design, the trash directory, namely .trashcan, is created
at the root when the bricks associated with a volume come online, and there
is a restriction against deleting this directory from the volume even when
the trash feature is disabled.

This proposal is targeted in such a way that the creation of, and subsequent
enforcement on, the trash directory happen only when the feature is enabled
for that volume.

Issue opened at [1].

[1] https://github.com/gluster/glusterfs/issues/65

Thanks,
--Anoop C S


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] REMINDER: Gluster Community Bug Triage meeting (Today)

2016-12-13 Thread Hari Gowtham
Hi all,

The weekly Gluster bug triage is about to take place in one hour.

Meeting details:
- location: #gluster-meeting on Freenode IRC
( https://webchat.freenode.net/?channels=gluster-meeting )
- date: every Tuesday
- time: 12:00 UTC  
(in your terminal, run: date -d "12:00 UTC")
- agenda: https://public.pad.fsfe.org/p/gluster-bug-triage

Currently the following items are listed:
* Roll Call
* Status of last week's action items
* Group Triage
* Open Floor

Appreciate your participation.

-- 
Regards, 
Hari. 

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] Review request for 3.9 patches

2016-12-13 Thread Poornima Gurusiddaiah
Hi, 

Below are some of the backported patches that are important for 3.9, please 
review the same: 

http://review.gluster.org/#/c/15890/ (afr,dht,ec: Replace 
GF_EVENT_CHILD_MODIFIED with event SOME_DESCENDENT_DOWN/UP) 
http://review.gluster.org/#/c/15933/ , http://review.gluster.org/#/c/15935/ 
(libglusterfs: Fix a read hang) 
http://review.gluster.org/#/c/15959/ (afr: Fix the EIO that can occur in 
afr_inode_refresh as a result) 
http://review.gluster.org/#/c/15960/ (tests: Fix one of the md-cache test 
cases) 
http://review.gluster.org/#/c/16022/ (dht/md-cache: Filter invalidate if the 
file is made a linkto file) 

Thank You. 

Regards, 
Poornima 
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] Release 3.10 feature proposal: Disable creation of trash directory by default

2016-12-13 Thread Anoop C S
On Mon, 2016-12-12 at 11:44 -0500, Shyam wrote:
> Anoop,
> 
> I have summarily marked this for 3.10, but have a few requests here,
> 
> 1) Can we open a spec for this?

We do have an accepted design spec under glusterfs-specs, namely 'Trash
Improvements' [2], in which this particular proposal is already listed and
detailed. Keeping that in mind, we thought of not extracting part of it to
bring up another design doc. Would that be enough, or do we really need an
exclusive spec file?

[2] 
https://github.com/gluster/glusterfs-specs/blob/master/accepted/Trash-Improvements.md

> 2) We possibly need to understand backward compatibility issues/concerns,
> if any. I.e., existing volumes would already have created .trashcan etc.,
> and how does this change impact those volumes?

It is backward compatible in the sense that it does not break the promise
given to consumers of the trash feature before upgrading to a newer version.
Moreover, we are making it easier for those users who don't require this
feature on existing volumes by giving them an option to delete the trash
directory if it is not needed. And of course, for new volumes we no longer
create the trash directory by default, which makes it more convenient for
end users.

That being said, we will have to document this change of behavior in the
release notes for sure.

> 
> Thanks,
> Shyam
> 
> On 12/10/2016 10:02 AM, Anoop C S wrote:
> > Hi all,
> > 
> > As per the current design, the trash directory, namely .trashcan, is created
> > at the root when the bricks associated with a volume come online, and there
> > is a restriction against deleting this directory from the volume even when
> > the trash feature is disabled.
> > 
> > This proposal is targeted in such a way that the creation of, and subsequent
> > enforcement on, the trash directory happen only when the feature is enabled
> > for that volume.
> > 
> > Issue opened at [1].
> > 
> > [1] https://github.com/gluster/glusterfs/issues/65
> > 
> > Thanks,
> > --Anoop C S
> > 
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

[Gluster-devel] 1402538 : Assertion failure during rebalance of symbolic links

2016-12-13 Thread Ashish Pandey
Hi All, 

We have been seeing an issue where rebalancing symbolic links leads to an
assertion failure in an EC volume.

The root cause is that while migrating symbolic links to another
subvolume, it creates a link file (with attributes .T).
This file is a regular file.
Now, during migration a setattr comes to this link and, because of a possible
race, posix_stat returns the stats of this "T" file.
In ec_manager_setattr, we receive callbacks and check the type of the entry. If it
is a regular file we try to get its size, and if it is not there, we raise an
assert.
So, basically we are checking the size of the link (which will not have a size)
which has been returned as a regular file, and we end up asserting when this
condition becomes TRUE.
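
To make the failing check easier to follow, here is a small illustration
(plain Python, not EC code; the stat dictionaries and the helper are made up):

    # Illustration of the failing assumption, not EC code: a check that expects
    # "regular file implies a known size" trips over a stat that actually
    # describes the temporary linkto ("T") file created during migration.
    REGULAR, SYMLINK = "regular", "symlink"

    def validate_setattr_reply(stat):
        if stat["type"] == REGULAR:
            # Mirrors the spirit of the assert described above.
            assert stat["size"] is not None, "regular file without a size"

    # Stat of the real symlink: no size expected, nothing asserts.
    validate_setattr_reply({"type": SYMLINK, "size": None})

    # Stat taken in the race window: it describes the linkto file (a regular
    # file), but the entry really is a symlink, so no size is available.
    try:
        validate_setattr_reply({"type": REGULAR, "size": None})
    except AssertionError as err:
        print("assert fired: %s" % err)    # this is the failure seen in the bug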

Now, this looks like a problem with rebalance and is difficult to fix at this
point (as per the discussion).
We have an alternative to fix it in EC, but that would be more of a hack than
an actual fix. We should not modify EC
to deal with an individual issue which is in another translator.

Now the question is how to proceed with this? Any suggestions? 

Details on this bug can be found here - 
https://bugzilla.redhat.com/show_bug.cgi?id=1402538 

 
Ashish 



___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel