not for non-debug setups, but if a problem is
reproducible this could be a quick way to check who is not releasing the
refs, or to keep a history of the refs and unrefs to dig deeper into the code.
Shyam
___
Gluster-devel mailing list
Gluster-devel
: Ic22c76fe76f0d91300daff36e755a18a8db58852
Signed-off-by: Jeff Darcy jda...@redhat.com
Reviewed-on: http://review.gluster.org/
Tested-by: Gluster Build System jenk...@build.gluster.com
Reviewed-by: Vijay Bellur vbel...@redhat.com
Shyam
On 10/13/2014 10:08 AM, Pranith Kumar Karampuri wrote:
On 10/13/2014 07:27 PM, Shyam wrote:
On 10/13/2014 08:01 AM, Pranith Kumar Karampuri wrote:
hi,
Why are we moving away from this coding style?:
if (x) {
/*code*/
} else {
/* code */
}
This patch (in master) introduces the same
- 1284 = 147 references to the
other (non-kernel?) style.
Just tried removing the 'wc -l' pipe and saw the same. So currently, based
on this method, the metric says } else { should be the preferred one
(just stating), or I am doing something wrong at my end :)
Shyam
find -name '*.c' | xargs grep else | grep -v '}' | wc -l
1733
Shyam
On 10/13/2014 04:52 PM, Joe Julian wrote:
Not taking sides, though if I were I would support the kernel style
because I, personally, find it easier to read. Just to clarify the point:
$ find -name '*.c' | xargs grep '} else {' | wc -l
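For a reproducible comparison, the two counts could be wrapped in a small function (a sketch only, assuming GNU find/grep; `count_brace_styles` is an illustrative name, not anything in the tree):

```shell
# Sketch: count cuddled "} else {" occurrences vs. else-on-its-own-line
# occurrences under a source directory. Assumes GNU find/grep/xargs;
# the function name is illustrative only.
count_brace_styles() {
    dir=$1
    cuddled=$(find "$dir" -name '*.c' -print0 | xargs -0 -r grep -h '} else {' | wc -l)
    separate=$(find "$dir" -name '*.c' -print0 | xargs -0 -r grep -h else | grep -cv '}')
    echo "cuddled=$cuddled separate=$separate"
}
```

Run from the top of the source tree as `count_brace_styles .` to get both numbers in one shot.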
and register the NFS service. (this is basically the showmount
only, but better to add it to the script)
Or IOW,
Add EXPECT_WITHIN $NFS_EXPORT_TIMEOUT 1 is_nfs_export_available
before the mount, just to be sure.
Shyam
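For reference, EXPECT_WITHIN in the regression test framework essentially polls a check function until it prints the expected value or a timeout expires. A simplified stand-in (names and structure are illustrative, not the framework's actual code):

```shell
# Simplified stand-in for the test framework's EXPECT_WITHIN: poll a
# check function until it prints the expected value or the timeout
# (in seconds) expires. Illustrative only, not the framework source.
expect_within() {
    timeout=$1 expected=$2 check_fn=$3
    shift 3
    deadline=$(( $(date +%s) + timeout ))
    while [ "$(date +%s)" -le "$deadline" ]; do
        if [ "$($check_fn "$@")" = "$expected" ]; then
            return 0
        fi
        sleep 1
    done
    return 1
}

# Usage mirroring the suggestion above: wait for the NFS export to
# register (is_nfs_export_available would wrap showmount -e) before
# attempting the mount:
# expect_within "$NFS_EXPORT_TIMEOUT" 1 is_nfs_export_available "$H0"
```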
.
There was a mention of writing a feature page for this enhancement, I
would suggest doing that, even if premature, so that details are better
elaborated and understood (by me at least).
HTH,
Shyam
request? Meaning, you want
/myfile, /dir1/myfile and /dir2/dir3/myfile to fall onto the same
bricks/subvolumes and that perchance is what you are looking for?
Shyam
On 11/10/2014 09:59 AM, Xavier Hernandez wrote:
Hi Shyam,
On 11/10/2014 03:36 PM, Shyam wrote:
Hi Xavi,
I think you are referring to the problem as I see it, and have posted
here, http://review.gluster.org/#/c/6459/1/libglusterfs/src/timer.c
If so, there is nothing clean to handle the case
On 11/11/2014 06:40 AM, Rudra Siva wrote:
Responses inline ... (removed some of the older parts of my post).
On Mon, Nov 10, 2014 at 2:11 PM, Shyam srang...@redhat.com wrote:
On 11/01/2014 10:20 AM, Rudra Siva wrote:
The response below is based on, reading into this mail and the other mail
. Is
this a test run from an NFS mount point?
Shyam
On 11/17/2014 10:50 AM, Xavier Hernandez wrote:
Hi Shyam,
On 11/17/2014 03:50 PM, Shyam wrote:
On 11/17/2014 07:20 AM, Emmanuel Dreyfus wrote:
Hello
I have an almost reliable test that fails on NetBSD:
./tests/basic/ec/quota.t (Wstat: 0 Tests: 22 Failed: 3)
Failed tests
among other things)
Shyam
is this functionality needed?
Shyam
in the code to enable what you seek at present.
Why is this needed?
Shyam
On 11/25/2014 05:03 PM, Anand Avati wrote:
On Tue Nov 25 2014 at 1:28:59 PM Shyam srang...@redhat.com
mailto:srang...@redhat.com wrote:
On 11/12/2014 01:55 AM, Anand Avati wrote:
On Tue, Nov 11, 2014 at 1:56 PM, Jeff Darcy jda...@redhat.com
mailto:jda...@redhat.com
On 11/26/2014 08:19 AM, Mohammed Rafi K C wrote:
Hi All,
We are planning to change the volume status command to show RDMA port for
tcp,rdma volumes. We have four output designs in mind , those are,
1) Modify the Port column as TCP,RDMA Ports
Eg:
Status of volume: xcube
Gluster process
and
compared to its predecessor.
Any other thoughts/suggestions?
Maybe a brief example of how this works would help clarify some thoughts.
Thanks,
Shyam
the state
(or at least a bulk get/set for notification requests).
Shyam
On 12/12/2014 02:17 AM, Soumya Koduri wrote:
Hi,
This framework has been designed to maintain state in the glusterfsd
process, for each the files being accessed (including the clients info
accessing those files) and send
and also NFS head failover.
- Tolerant to seekdir() to arbitrary locations.
But, would provide a more reliable readdir offset for use (when valid
and not evicted, say).
How would NFS adapt to this? Does Ganesha need a better scheme when
doing multi-head NFS fail over?
Thoughts?
Shyam
[1] http
On 12/16/2014 03:21 PM, J. Bruce Fields wrote:
On Tue, Dec 16, 2014 at 11:46:46AM -0500, Shyam wrote:
On 12/15/2014 09:06 PM, Anand Avati wrote:
Replies inline
On Mon Dec 15 2014 at 12:46:41 PM Shyam srang...@redhat.com
mailto:srang...@redhat.com wrote:
With the changes present in [1
(2014)).
Xavi
On 12/16/2014 03:06 AM, Anand Avati wrote:
Replies inline
On Mon Dec 15 2014 at 12:46:41 PM Shyam srang...@redhat.com
mailto:srang...@redhat.com wrote:
With the changes present in [1] and [2],
A short explanation of the change would be, we encode the subvol
ID
On 12/17/2014 02:15 AM, Raghavendra G wrote:
On Wed, Dec 17, 2014 at 1:25 AM, Shyam srang...@redhat.com
mailto:srang...@redhat.com wrote:
This mail intends to present the lock migration across subvolumes
problem and seek solutions/thoughts around the same, so any
feedback
light on this and on the current status of epoll on
NetBSD.
Thanks,
Shyam
the
ability to load balance readdir requests across its subvols better, than
have a static subvol to send to for a longer duration.
Thoughts/comments?
Shyam
[1] https://www.mail-archive.com/gluster-devel@gluster.org/msg02834.html
[2] review.gluster.org/#/c/8201/4/xlators/cluster/afr/src/afr-dir
On 01/23/2015 03:12 PM, Emmanuel Dreyfus wrote:
Shyam srang...@redhat.com wrote:
Patch: http://review.gluster.org/#/c/3842/
Manu,
I was not able to find the NetBSD job mentioned in the last review
comment provided by you, pointers to that would help.
Yes, sorry, both regression tests hang
Updated DHT2 with a couple of links in the Status section:
http://www.gluster.org/community/documentation/index.php/Features/dht-scalability#Status
Thanks,
Shyam
On 01/26/2015 07:53 AM, Jeff Darcy wrote:
Ahead of Wednesday's community meeting, I'd like to get as much 4.0
status together
values post a graph switch and in this case it would be a fresh
opendir and then seeking to the d_off provided (with all the subvol ID
decoding etc.).
So in short, we are not immune to this.
Shyam
take
some time. Would it be better to remove the test in the meantime ?
I am checking if this is reproducible on my machine, so that I can
possibly see what is going wrong.
Shyam
.
Shyam
On 02/17/2015 04:50 PM, David F. Robinson wrote:
Any updates on this issue? Thanks in advance...
David
-- Original Message --
From: Shyam srang...@redhat.com
To: David F. Robinson david.robin...@corvidtec.com; Justin Clift
jus...@gluster.org
Cc: Gluster Devel gluster-devel
the
exact scenario.
I've been doing some tests with Shyam and it seems that the root cause is the
edge-triggered epoll introduced in the multi-threaded epoll patch. It has a
side effect that makes the outstanding-rpc-limit option nearly useless, and
gluster gets flooded with requests, causing
this is allowed to be
configured (among others) etc. Request reviewers to take a look at this.
Thanks,
Shyam
An update: inline
On 01/27/2015 10:14 AM, Shyam wrote:
Hi,
Here is the current state of multi-thread epoll patch,
1) Patches:
epoll patch: http://review.gluster.org/#/c/3842/
epoll configuration patch: http://review.gluster.org/#/c/9488/
2) Failures:
- epoll patch
- In 2 runs
and submitted the patch, let's see how this one
fares. Ran it 20 times on NetBSD and on Linux with poll and poll/epoll
respectively, and as the issue was reproducible consistently, I assume
this patch should get things right on NetBSD.
Shyam
basename to a child GFID.
Non-directories exist only on one subvolume - the one selected by
consistent hashing on its GFID at the time it was created or rebalanced...
/sneak peek
Thanks,
Shyam
will step up
the priority on this so that we have a clean fix that can be used to
prevent this in the future.
Shyam
On 02/12/2015 11:18 AM, David F. Robinson wrote:
Shyam,
You asked me to stop/start the slow volume to see if it fixed the timing
issue. I stopped/started homegfs_backup (the production volume with 40+
TB) and it didn't make it faster. I didn't stop/start the fast volume
to see if it made
On 03/31/2015 08:33 AM, Justin Clift wrote:
Hi all,
Ran 20 x regression test jobs on (severely resource
constrained) 1GB Rackspace VM's last night (in addition to the
20x normal VM's ones also run).
The 1GB VM's have much much slower disk, only one virtual CPU,
and 1/2 the RAM of our standard
. i.e same as
https://bugzilla.redhat.com/show_bug.cgi?id=1195415
Shyam
On 02/23/2015 01:58 PM, Justin Clift wrote:
Short version:
75% of the Jenkins regression tests we run in Rackspace (on
glusterfs master branch) fail from spurious errors.
This is why we're having capacity problems with our Jenkins
slave nodes... we need to run our tests 4x for each CR just
to
On 02/23/2015 11:20 PM, Justin Clift wrote:
On 23 Feb 2015, at 19:32, Shyam srang...@redhat.com wrote:
snip
4 of the regression runs also created coredumps. Uploaded the
archived_builds and logs here:
http://mirror.salasaga.org/gluster/
(are those useful?)
Yes, these are useful
is for the rebalance process to migrate the locks,
with some additional coordination with the locks/lease/upcall xlators.
The problem however is _mapping_ all of the lock information across the
2 different storage node brick processes. (i.e client_t information)
Shyam
On 04/20/2015 02:48 PM
usage would need better scrutiny from
component maintainers in general.
HTH,
Shyam
[1] http://www.gluster.org/pipermail/gluster-devel/2013-December/038077.html
[2]
http://www.gluster.org/community/documentation/index.php/Features/better-logging
On 04/29/2015 08:57 AM, Benjamin Turner wrote:
Doh my mistake, I thought it was merged. I was just running with the
upstream 3.7 daily. Can I use this run as my baseline and then I can
run next time on the patch to show the % improvement? I'll wipe
everything and try on the patch, any idea
On 04/28/2015 01:10 PM, Niels de Vos wrote:
On Tue, Apr 28, 2015 at 05:56:56PM +0100, Justin Clift wrote:
This sounds like it might be useful for us:
https://gerrit-documentation.storage.googleapis.com/Documentation/2.9.4/config-labels.html#label_copyAllScoresOnTrivialRebase
and rebalance
phase 2, which requires a _wait/sleep_ in this state to be injected in
the rebalance daemon.
Shyam
On 05/08/2015 08:16 AM, Jeff Darcy wrote:
Here are some of the things that I can think of: 0) Maintainers
should also maintain tests that are in their component.
It is not possible for me as glusterd co-maintainer to 'maintain'
tests that are added under tests/bugs/glusterd. Most of them don't
On 05/18/2015 03:49 PM, Shyam wrote:
On 05/18/2015 10:33 AM, Vijay Bellur wrote:
The etherpad did not call out, ./tests/bugs/distribute/bug-1161156.t
which did not have an owner, and so I took a stab at it and below are
the results.
I also think failure in ./tests/bugs/quota/bug-1038598.t
On 05/18/2015 07:05 PM, Shyam wrote:
On 05/18/2015 03:49 PM, Shyam wrote:
On 05/18/2015 10:33 AM, Vijay Bellur wrote:
The etherpad did not call out, ./tests/bugs/distribute/bug-1161156.t
which did not have an owner, and so I took a stab at it and below are
the results.
I also think failure
On 04/01/2015 02:47 PM, Jeff Darcy wrote:
When doing an initial burn in test (regression run on master head
of GlusterFS git), it coredumped on the new slave23.cloud.gluster.org VM.
(yeah, I'm reusing VM names)
http://build.gluster.org/job/regression-test-burn-in/16/console
Does anyone have
writes are
indeed the culprits. We are trying to reproduce the issue locally.
@Shyam, it would be helpful if you can confirm the hypothesis :).
Ummm... I thought we acknowledge that quota checks are done during the
WIND and updated during UNWIND, and we have io threads doing in flight
IOs
KP,
On 06/06/2015 08:44 AM, Krishnan Parthasarathi wrote:
Has anyone see this test fail for them? This tests passes on
my laptop which leads me to believe that the test case or the underlying
issue could be non-deterministic.
I had posted this earlier (4th June to this list, just so you can
http://review.gluster.org/#/c/10967/ request is the one that has these
changes.
Doing a final review and merging the same.
Shyam
On 06/04/2015 12:22 PM, Kaleb KEITHLEY wrote:
Recent commits to xlators/cluster/dht/src/dht-common.c calls functions
which are not defined.
Was a file
On 06/04/2015 12:26 PM, Shyam wrote:
http://review.gluster.org/#/c/10967/ request is the one that has these
changes.
This is now merged and the compile issue should be resolved.
Patches affected by this would need to be rebased.
(list that I see that have already failed)
- http
On 06/19/2015 09:33 AM, Shyam wrote:
On 06/19/2015 12:35 AM, Atin Mukherjee wrote:
One of my patch failed NetBSD regression [1] in the test case mentioned
in $Subject.
[1]
http://build.gluster.org/job/rackspace-netbsd7-regression-triggered/7025/consoleFull
The failure is as follows,
[06:37
trips to improve performance (next).
Foot note: None of this is ready yet, and would take time, this is just
a *possible* direction that gluster core is going ahead with to address
various problems at scale.
Shyam
[1]
http://www.gluster.org/community/documentation/index.php/Features/caching
resolving this at all, as this is even before
creating the gluster volume, where the failure is noted.
Digging through to see what else I can glean from other logs available
in the failed test cases.
@Manu, any thoughts? TC failing: tests/features/weighted-rebalance.t
Shyam
On 05/29/2015 12:51 PM, Niels de Vos wrote:
Hi all,
today we had a discussion about how to get the status of reported bugs
more correct and up to date. It is something that has come up several
times already, but now we have a BIG solution as Pranith calls it.
The goal is rather simple, but is
.
Will leave others to chime in with the same.
Shyam
On 07/02/2015 09:34 AM, Pranith Kumar Karampuri wrote:
hi,
When glusterfs mount process is coming up all cluster xlators wait
for at least one event from all the children before propagating the
status upwards. Sometimes client xlator takes upto 2
On 05/21/2015 04:04 AM, Vijaikumar M wrote:
On Tuesday 19 May 2015 09:50 PM, Shyam wrote:
On 05/19/2015 11:23 AM, Vijaikumar M wrote:
Did that (in the attached script that I sent) and it still failed.
Please note:
- This dd command passes (or fails with EDQUOT)
- dd if=/dev/zero of=$N0
, but this is the first
time I am noticing this test fail, hence pinging the list.
Thanks,
Shyam
On 05/21/2015 09:33 AM, Vijaikumar M wrote:
On Thursday 21 May 2015 06:48 PM, Shyam wrote:
On 05/21/2015 04:04 AM, Vijaikumar M wrote:
On Tuesday 19 May 2015 09:50 PM, Shyam wrote:
On 05/19/2015 11:23 AM, Vijaikumar M wrote:
Did that (in the attached script that I sent) and it still failed
On 04/26/2015 11:34 AM, Vijay Bellur wrote:
On 04/25/2015 12:35 AM, Shyam wrote:
On 04/24/2015 11:06 AM, Anusha Rao wrote:
I had a few doubts regarding the conversion from gf_log to gf_msg:
1) Is it necessary to convert all gf_log messages to gf_msg ?
Yes, once we get to the point
On 05/19/2015 11:23 AM, Vijaikumar M wrote:
On Tuesday 19 May 2015 08:36 PM, Shyam wrote:
On 05/19/2015 08:10 AM, Raghavendra G wrote:
After discussion with Vijaykumar mallikarjuna and other inputs in this
thread, we are proposing all quota tests to comply to following
criteria:
* use dd
On 08/17/2015 01:19 AM, Raghavendra Gowdappa wrote:
- Original Message -
From: Raghavendra Gowdappa rgowd...@redhat.com
To: Gluster Devel gluster-devel@gluster.org
Cc: Sakshi Bansal saban...@redhat.com
Sent: Monday, 17 August, 2015 10:39:38 AM
Subject: [Gluster-devel] Serialization of
On 08/04/2015 08:57 PM, Joe Julian wrote:
On 08/04/2015 05:53 PM, Shyam wrote:
On 08/04/2015 12:55 PM, Hafeez Bana wrote:
All,
We've been evaluating glusterfs 3.2.7 on ubuntu 14.04 LTS. All tests
were run with event-thread matching cpu-cores and lookup-unhashed
turned off
I think you
project.
Shyam
[1] https://review.gerrithub.io/#/admin/projects/ShyamsundarR/glusterfs
[2] https://github.com/ShyamsundarR/glusterfs/tree/gl40_dht_playground
[3] http://www.gluster.org/pipermail/gluster-devel/2015-June/045449.html
[4] http://www.gluster.org/pipermail/gluster-devel/2015-June/045591
/2015 09:20 AM, Soumya Koduri wrote:
On 07/21/2015 02:49 PM, Poornima Gurusiddaiah wrote:
Hi Shyam,
Please find my reply inline.
Regards,
Poornima
- Original Message -
From: Ira Cooper icoo...@redhat.com
To: Shyam srang...@redhat.com
Cc: Gluster Devel gluster-devel@gluster.org
Sent
On 10/23/2015 06:46 AM, Aravinda wrote:
Hi Gluster developers,
In this mail I am proposing troubleshooting documentation and
Gluster Tools infrastructure.
Tool to search in documentation
===
We recently added message Ids to each error messages in Gluster. Some
of
On 10/29/2015 02:06 AM, Pranith Kumar Karampuri wrote:
hi,
I want to understand how are you guys planning to integrate
NSR volumes to the existing CLIs. Here are some thoughts I had, wanted
to know your thoughts:
At the heart of both the replication/ec schemes we have
1)
On 10/27/2015 11:12 PM, Sankarshan Mukhopadhyay wrote:
On Mon, Oct 26, 2015 at 7:04 PM, Shyam <srang...@redhat.com> wrote:
Older idea on this was, to consume the logs and filter based on the message
IDs for those situations that can be remedied. The logs are hence the point
where the
rocess? I am punting this
to Vijay and Jeff :)
On 10/28/2015 08:32 PM, Shyam wrote:
Sending this along again.
- Are we decided on *experimental*?
- If so, what else needs attention in the patch below?
- (Re)views please... (views as in "What are your views on this?", not
"Have you
Sending this along again.
- Are we decided on *experimental*?
- If so, what else needs attention in the patch below?
- (Re)views please... (views as in "What are your views on this?", not
"Have you viewed this?" ;) )
Shyam
On 10/12/2015 02:18 PM, Shyam wrote:
In an effor
On 10/29/2015 07:29 PM, Jeff Darcy wrote:
On October 29, 2015 at 9:12:46 AM, Shyam (srang...@redhat.com) wrote:
Will code that NSR puts up in master be ready to ship when 3.8 is
branched?
Do we know when 3.8 will be branched?
This is an example not absolute, what I mean is when the next
On 10/23/2015 09:31 PM, Sankarshan Mukhopadhyay wrote:
On Fri, Oct 23, 2015 at 4:16 PM, Aravinda wrote:
Initial idea for Tools Framework:
-
A Shell/Python script which looks for the tool in plugins sub directory, if
exists pass all the
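A minimal sketch of such a dispatcher (all names here are hypothetical; this is a possible shape, not the proposed implementation):

```shell
# Hypothetical sketch of a tools dispatcher: look up the named tool in
# a plugins directory and, if it exists and is executable, invoke it
# with the remaining arguments. Directory and function names are
# illustrative only.
run_gluster_tool() {
    plugin_dir=$1 tool=$2
    shift 2
    if [ -x "$plugin_dir/$tool" ]; then
        "$plugin_dir/$tool" "$@"
    else
        echo "unknown tool: $tool" >&2
        return 1
    fi
}
```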
giving us a short dentry tree creation
ability of pGFID->name(GFID).
This of course changes the gluster RPC wire protocol, as we need to
encode/send pGFID as well in some cases (or this could be done by adding it to
the xdata payload).
Shyam
[1] http://nfs.sourceforge.net/#faq_c7
[2] ht
On 11/10/2015 01:36 AM, Kaushal M wrote:
On Tue, Nov 10, 2015 at 12:05 PM, Kaushal M <kshlms...@gmail.com> wrote:
On Fri, Nov 6, 2015 at 10:45 PM, Atin Mukherjee
<atin.mukherje...@gmail.com> wrote:
-Atin
Sent from one plus one
On Nov 6, 2015 7:50 PM, "Shyam" <s
On 10/29/2015 10:26 PM, Jeff Darcy wrote:
On October 29, 2015 at 8:42:50 PM, Shyam (srang...@redhat.com) wrote:
I assume this is about infra changes (as the first 2 points are for
some reason squashed in my reader). I think what you state is infra
(or other non-experimental) code impact due
to replay the demo, building from sources.
(Nov_5_2015_StatusUpdateDemo.log)
Thank you,
Shyam/Venky
On 11/03/2015 09:27 PM, Shyam wrote:
Hi,
Coming Thursday, i.e Nov-05-2015, @ 6:30 AM Eastern we are having a
short call to present DHT2 status (ics attached)
To join the meeting on a computer
On 11/06/2015 06:58 AM, Atin Mukherjee wrote:
On 11/06/2015 01:30 PM, Aravinda wrote:
regards
Aravinda
http://aravindavk.in
On 11/06/2015 12:28 PM, Avra Sengupta wrote:
Hi,
As almost all the components targeted for Gluster 4.0 have moved from
design phase to implementation phase on some
.
Shyam
[1]
https://github.com/ShyamsundarR/glusterfs/commit/663eeb98f6a51384c8745b8882e7c6c4f7b58a7c
[2] http://review.gluster.org/#/c/12321/1
On 10/11/2015 06:09 PM, Niels de Vos wrote:
On Fri, Oct 09, 2015 at 10:40:15AM -0400, Shyam wrote:
On 10/09/2015 12:07 AM, Atin Mukherjee wrote:
First of all my apologies for not going through the meeting blog before
sending my thoughts on how we plan to maintain GlusterD 2.0 [1
In an effort to push this along, I have updated the change with the suggested
edits and comments so far; requesting a review and further comments so that we
can make this official at some (sooner) point in time.
http://review.gluster.org/#/c/12321/2
en as well?
What do others think?
This is just my thought and I would like to get a clarity on this.
Thanks,
Atin
[1] http://www.gluster.org/pipermail/gluster-devel/2015-October/046872.html
On 10/08/2015 11:35 PM, Shyam wrote:
Hi,
On checking yesterday's gluster meeting AIs and (late
On 10/09/2015 11:26 AM, Jeff Darcy wrote:
My position is that we should maximize visibility for other developers by doing
all work on review.gluster.org. If it doesn't affect existing tests, it should
even go on master. This includes:
* Minor changes (e.g. list.h or syncop.* in
On 08/31/2015 03:17 AM, Aravinda wrote:
Following Changes/ideas identified to improve the Geo-replication
Performance. Please add your ideas/issues to the list
1. Entry stime and Data/Meta stime
--
Now we use only one xattr to maintain the state of sync, called
On 09/03/2015 02:43 AM, Krutika Dhananjay wrote:
*From: *"Shyam" <srang...@redhat.com>
*To: *"Krutika Dhananjay" <kdhan...@redhat.com>
*Cc: *"Aravinda" <avish
On 09/02/2015 10:47 AM, Krutika Dhananjay wrote:
*From: *"Shyam" <srang...@redhat.com>
*To: *"Aravinda" <avish...@redhat.com>, "Gluster Devel"
<gluster-devel@gl
On 09/02/2015 03:12 AM, Aravinda wrote:
Geo-replication and Sharding Team today discussed about the approach
to make Sharding aware Geo-replication. Details are as below
Participants: Aravinda, Kotresh, Krutika, Rahul Hinduja, Vijay Bellur
- Both Master and Slave Volumes should be Sharded
On 12/09/2015 09:32 AM, Jeff Darcy wrote:
On December 9, 2015 at 7:07:06 AM, Ira Cooper (i...@redhat.com) wrote:
A simple "abort on failure" and let the higher levels clean it up is
probably right for the type of compounding I propose. It is what SMB2
does. So, if you get an error return
On 12/09/2015 12:52 AM, Pranith Kumar Karampuri wrote:
On 12/09/2015 10:39 AM, Prashanth Pai wrote:
However, I’d be even more comfortable with an even simpler approach that
avoids the need to solve what the database folks (who have dealt with
complex transactions for years) would tell us is a
much clarity about how to handle
errors in
that model. Encoding N sub-operations’ arguments in a linear structure
as Shyam proposes seems a bit cleaner that way. If I were to continue
down that route I’d suggest just having start_compound and end-compound
fops, plus an extra field (or by-conventio
On 12/08/2015 04:32 AM, Vijaikumar Mallikarjuna wrote:
Hi All,
Below is the design for '*GlusterFS User and Group Quotas', *please
provide your feedback on the same.
*_Developers:_*
Vijaikumar.M and Manikandan.S
*_Introduction:_*
User and Group quotas is to limit the amount of disk space
Sakshi,
In the doc there is a reference to the fact that when a client fixes a
layout, it assigns the same dircommit hash to the layout, which is
equivalent to the vol commit hash. I think this assumption is incorrect;
when a client heals the layout, the commit hash is set to 1
On 12/18/2015 04:00 AM, Vijaikumar Mallikarjuna wrote:
Hi All,
Here is the summary of discussion we had today on Quota-v2
Project, user and group quotas will use the same logic of accounting the
usage
Quota-v2 should be compatible with both DHT-v1 and DHT-v2
Project quotas will use gfid of
On 12/20/2015 10:57 PM, Vijaikumar Mallikarjuna wrote:
On Fri, Dec 18, 2015 at 8:28 PM, Shyam <srang...@redhat.com
<mailto:srang...@redhat.com>> wrote:
On 12/18/2015 04:00 AM, Vijaikumar Mallikarjuna wrote:
Hi All,
Here is the summary of discussion
On 12/19/2015 01:13 AM, Venky Shankar wrote:
Shyam wrote:
On 12/18/2015 04:00 AM, Vijaikumar Mallikarjuna wrote:
Hi All,
Here is the summary of discussion we had today on Quota-v2
Project, user and group quotas will use the same logic of accounting the
usage
Quota-v2 should
approaches.
[4] slide 13 onwards talks about how cephfs does this. (see cephfs inode
backtraces)
Aravinda, could you put up a design for the same, and how and where this
is information is added etc. Would help review it from other xlators
perspective (like existing DHT).
Shyam
[3] http
On 05/24/2016 07:04 AM, Sankarshan Mukhopadhyay wrote:
On Wed, May 18, 2016 at 2:43 PM, Amye Scavarda wrote:
Said another way, if you wanted to be able to have someone contribute a
feature idea, what would be the best way?
Bugzilla? A google form? An email into the -devel
On 05/29/2016 10:08 PM, Sankarshan Mukhopadhyay wrote:
On Fri, May 20, 2016 at 7:30 PM, Shyam <srang...@redhat.com> wrote:
On 05/19/2016 10:25 PM, Pranith Kumar Karampuri wrote:
Once every 3 months i.e. option 3 sounds good to me.
+1 from my end.
Every 2 months seems to be a bit to