On Wed, 2016-01-27 at 15:25 +0530, PankaJ Singh wrote:
>
> Hi,
>
> We are using gluster 3.7.6 on ubuntu 14.04. We are facing an issue
> with trashcan feature.
> Our scenario is as follow:
>
> 1. 2 node server (ubuntu 14.04 with glusterfs 3.7.6)
> 2. 1 client node (ubuntu 14.04)
> 3. I have
Sakshi has a fix for this:
Regards,
Nithya
- Original Message -
> From: "Pranith Kumar Karampuri"
> To: "Venky Shankar" , "Gluster Devel"
>
> Cc: "Vijay Bellur" , "Raghavendra Gowdappa"
>
Sorry - forgot to provide the link:
http://review.gluster.org/#/c/13262/
Regards,
Nithya
- Original Message -
> From: "Nithya Balachandran"
> To: "Pranith Kumar Karampuri"
> Cc: "Gluster Devel"
> Sent: Thursday, 28
- Original Message -
> From: "Pranith Kumar Karampuri"
> To: "Venky Shankar" , "Gluster Devel"
>
> Cc: "Vijay Bellur" , "Raghavendra Gowdappa"
> , "Nithya Balachandran"
>
+ Sakshi
- Original Message -
> From: "Raghavendra Gowdappa"
> To: "Pranith Kumar Karampuri"
> Cc: "Venky Shankar" , "Gluster Devel"
> , "Vijay Bellur"
> , "Nithya
On 01/28/2016 12:50 PM, Venky Shankar wrote:
Yes, that should be good. Better to have just one version of the routine.
Also, I think Ravi found a bug in brick_up_status() [or the _1 version?].
http://review.gluster.org/12913 fixed it upstream already. It wasn't
sent to 3.7.
I think the patch
On 01/28/2016 02:59 PM, baul jianguo wrote:
http://pastebin.centos.org/38941/
client statedump; only PIDs 27419, 168030 and 208655 hang. You can search
for these PIDs in the statedump file.
Could you take one more statedump please?
Pranith
On Wed, Jan 27, 2016 at 4:35 PM, Pranith Kumar Karampuri
Just checking if the changes made to brick_up_status by patch #12913 are
required in 3.7 as well (since it is not backported)
- Original Message -
From: "Ravishankar N"
To: "Venky Shankar" , "Sakshi Bansal"
Cc:
On Thu, Jan 28, 2016 at 12:17:58PM +0530, Raghavendra Talur wrote:
> Where do I find config in NetBSD which decides which location to dump core
> in?
sysctl kern.defcorename for the default location and name. It can be
overriden per process using sysctl proc.$$.corename
> Any particular reason
On Thu, Jan 28, 2016 at 12:17:58PM +0530, Raghavendra Talur wrote:
> Where do I find config in NetBSD which decides which location to dump core
> in?
I crafted the patch below, but it is probably much simpler to just
set kern.defcorename to /%n-%p.core on all VM slaves. I will do it.
diff
On Thu, Jan 28, 2016 at 12:00:41PM +0530, Raghavendra Talur wrote:
> Ok, RCA:
>
> In NetBSD cores are being generated in /d/backends/*/*.core
> run-tests.sh looks only for "/core*" when looking for cores.
>
> So, at the end of test run when regression.sh looks for core everywhere, it
> finds one
http://pastebin.centos.org/38941/
client statedump; only PIDs 27419, 168030 and 208655 hang. You can search
for these PIDs in the statedump file.
On Wed, Jan 27, 2016 at 4:35 PM, Pranith Kumar Karampuri
wrote:
> Hi,
> If the hang appears on enabling client side io-threads then
On Thu, Jan 28, 2016 at 12:10:49PM +0530, Atin Mukherjee wrote:
> So does that mean we never analyzed any core reported by NetBSD
> regression failure? That's strange.
We got the cores from / but not from /d/backends/*/ as I understand.
I am glad someone figured out the mystery.
--
Emmanuel
The client glusterfs gdb info: the main thread ID is 70800.
In the top output, thread 70800 has CPU time 1263:30 and thread 70810 has
1321:10; the other threads' times are much smaller.
(gdb) thread apply all bt
Thread 9 (Thread 0x7fc21acaf700 (LWP 70801)):
#0 0x7fc21cc0c535 in sigwait () from /lib64/libpthread.so.0
Hi All,
The minutes of today's meeting:
Meeting summary
---
* agenda https://public.pad.fsfe.org/p/gluster-bug-triage (hgowtham_,
12:01:59)
* roll call (hgowtham_, 12:02:10)
* Group Triage (hgowtham_, 12:06:22)
* LINK: https://public.pad.fsfe.org/p/gluster-bugs-to-triage
With baul jianguo's help I am able to see that FLUSH fops are hanging
for some reason.
pk1@localhost - ~/Downloads
17:02:13 :) ⚡ grep "unique=" client-dump1.txt
unique=3160758373
unique=2073075682
unique=1455047665
unique=0
pk1@localhost - ~/Downloads
17:02:21 :) ⚡ grep "unique="
Hi all,
The weekly bug triage is about to take place in ~100 minutes.
Meeting details:
- location: #gluster-meeting on Freenode IRC
( https://webchat.freenode.net/?channels=gluster-meeting )
- date: every Tuesday
- time: 12:00 UTC
(in your terminal, run: date -d "12:00
Hi,
I have downloaded glusterfs source rpm from :
http://download.gluster.org/pub/gluster/glusterfs/LATEST/Fedora/fedora-21/SRPMS/
I extracted the source and tried compiling and installing it. While
running "make install" I started getting errors.
Here are the steps performed:
1)
Sorry for the late reply.
Pranith Kumar Karampuri wrote on 2016/01/25 17:48:06:
> From: Pranith Kumar Karampuri
> To: li.ping...@zte.com.cn,
> Cc: li.y...@zte.com.cn, zhou.shigan...@zte.com.cn,
> liu.jianj...@zte.com.cn, yang.bi...@zte.com.cn
> Date:
So the way our throttling works is (intentionally) very simplistic.
(1) When someone mounts an NFS share, we tag the frame with a 32 bit hash of
the export name they were authorized to mount.
(2) io-stats keeps track of the "current rate" of fops we're seeing for that
particular mount, using a
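The two steps described above can be sketched roughly as follows. This is an illustrative sketch only: the choice of crc32 as the 32-bit hash, the one-second window, and the `RateTracker` class are all assumptions, not the actual io-stats implementation.

```python
import time
import zlib

def mount_tag(export_name: str) -> int:
    """Step (1): tag a frame with a 32-bit hash of the export name.
    crc32 is an assumed stand-in for whatever hash is actually used."""
    return zlib.crc32(export_name.encode()) & 0xFFFFFFFF

class RateTracker:
    """Step (2): track the current fop rate per mount tag
    (here, fops counted over an assumed one-second window)."""
    def __init__(self):
        self.counts = {}  # tag -> (window_start, fops_in_window)

    def record_fop(self, tag: int, now=None):
        now = time.monotonic() if now is None else now
        start, n = self.counts.get(tag, (now, 0))
        if now - start >= 1.0:  # window expired: start a new one
            start, n = now, 0
        self.counts[tag] = (start, n + 1)

    def current_rate(self, tag: int) -> int:
        return self.counts.get(tag, (0, 0))[1]
```

Keying the counters on the 32-bit tag rather than the export string keeps the per-fop bookkeeping cheap, which fits the "intentionally very simplistic" framing above.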
Hey folks,
I just merged patch #13302 (and its 3.7 equivalent), which fixes a scrubber
crash.
This was causing other patches to fail regression.
Requesting a rebase of patches (especially 3.7 pending) that were blocked due to
this.
Thanks,
Venky
> TBF isn't complicated at all - it's widely used for traffic shaping, cgroups,
> UML to rate limit disk I/O.
It's not complicated and it's widely used, but that doesn't mean it's
the right fit for our needs. Token buckets are good to create a
*ceiling* on resource utilization, but what if you
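The "ceiling" point is easy to see in a minimal token-bucket sketch: requests pass only while tokens remain, so the bucket caps the rate but guarantees nothing like a minimum share for anyone. The class and parameters below are illustrative, not any real TBF implementation.

```python
class TokenBucket:
    """Minimal token bucket: `rate` tokens/sec refill, `capacity` caps bursts."""
    def __init__(self, rate: float, capacity: float):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity  # start full
        self.last = 0.0

    def allow(self, now: float, cost: float = 1.0) -> bool:
        # Refill proportionally to elapsed time, never exceeding capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= cost:
            self.tokens -= cost
            return True
        return False  # over the ceiling: request is held back
```

Once the bucket is empty, everything is refused equally until tokens refill; nothing in this mechanism prioritizes one consumer over another, which is exactly the limitation being raised above.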
On 01/28/2016 07:05 PM, Venky Shankar wrote:
Hey folks,
I just merged patch #13302 (and its 3.7 equivalent), which fixes a scrubber
crash.
This was causing other patches to fail regression.
Requesting a rebase of patches (especially 3.7 pending) that were blocked due to
this.
Thanks a lot
+Anoop, Jiffin
On 01/27/2016 03:25 PM, PankaJ Singh wrote:
Hi,
We are using gluster 3.7.6 on ubuntu 14.04. We are facing an issue
with trashcan feature.
Our scenario is as follow:
1. 2 node server (ubuntu 14.04 with glusterfs 3.7.6)
2. 1 client node (ubuntu 14.04)
3. I have created one
Unless the patches fix data loss or crashes, I will not take any more, other
than the ones which help make regressions consistent:
Final set:
http://review.gluster.org/#/c/12768/
http://review.gluster.org/#/c/13305/  << a user asked for this on
gluster-users.
http://review.gluster.org/#/c/13119/