Re: [Gluster-devel] [Gluster-users] Crash in glusterfs!!!

2018-09-24 Thread Pranith Kumar Karampuri
On Mon, Sep 24, 2018 at 5:16 PM ABHISHEK PALIWAL wrote:

> Hi Pranith,
>
> As we know this problem is getting triggered at startup of the glusterd
> process when it received the SIGTERM.
>
> I think there is a problem in the glusterfs code: if someone sends SIGTERM
> at startup, the exit handler should not crash; instead it should exit with
> some information.
>
> Could you please let me know whether it is possible to fix this from the glusterfs side?
>

I am not as confident as you about the RC you provided. If you could give
the steps to re-create, I will be happy to confirm that the RC is correct
and then I will send out the fix.


>
> Regards,
> Abhishek
>
> On Mon, Sep 24, 2018 at 3:12 PM Pranith Kumar Karampuri <
> pkara...@redhat.com> wrote:
>
>>
>>
>> On Mon, Sep 24, 2018 at 2:09 PM ABHISHEK PALIWAL wrote:
>>
>>> Could you please let me know about the bug in libc which you are talking about.
>>>
>>
>> No, I mean, if you give the steps to reproduce, we will be able to
>> pinpoint whether the issue is with libc or glusterfs.
>>
>>
>>>
>>> On Mon, Sep 24, 2018 at 2:01 PM Pranith Kumar Karampuri <
>>> pkara...@redhat.com> wrote:
>>>


 On Mon, Sep 24, 2018 at 1:57 PM ABHISHEK PALIWAL <
 abhishpali...@gmail.com> wrote:

> If you look at the source code, in cleanup_and_exit() we are getting the
> SIGSEGV crash when 'exit(0)' is triggered.
>

 yes, that is what I was mentioning earlier. It is crashing in libc. So
 either there is a bug in libc (glusterfs actually found one bug in libc
 so far, so I wouldn't rule out that possibility) or there is something
 happening in glusterfs which is leading to the problem.
 Valgrind/address-sanitizer would help find where the problem could be in
 some cases, so before reaching out to libc developers, it is better to figure
 out where the problem is. Do you have steps to recreate it?


>
> On Mon, Sep 24, 2018 at 1:41 PM Pranith Kumar Karampuri <
> pkara...@redhat.com> wrote:
>
>>
>>
>> On Mon, Sep 24, 2018 at 1:36 PM ABHISHEK PALIWAL <
>> abhishpali...@gmail.com> wrote:
>>
>>> Hi Sanju,
>>>
>>> Do you have any update on this?
>>>
>>
>> This seems to happen while the process is dying, in libc. I am not
>> completely sure if there is anything glusterfs is contributing to it from
>> the bt at the moment. Do you have any steps to re-create this problem? It
>> is probably better to run the steps with valgrind/address-sanitizer and see
>> if it points to the problem in glusterfs.
>>
>>
>>>
>>> Regards,
>>> Abhishek
>>>
>>> On Fri, Sep 21, 2018 at 4:07 PM ABHISHEK PALIWAL <
>>> abhishpali...@gmail.com> wrote:
>>>
 Hi Sanju,

 Output of 't a a bt full'

 (gdb) t a a bt full

 Thread 7 (LWP 1743):
 #0  0x3fffa3ea7e88 in __lll_lock_wait (futex=0x0, private=0) at lowlevellock.c:43
         r4 = 128
         r7 = 0
         arg2 = 128
         r5 = 2
         r8 = 1
         r0 = 221
         r3 = 0
         r6 = 0
         arg1 = 0
         __err = 221
         __ret = 0
 #1  0x3fffa3e9ef64 in __GI___pthread_mutex_lock (mutex=0x100272a8) at ../nptl/pthread_mutex_lock.c:81
         __futex = 0x100272a8
         __PRETTY_FUNCTION__ = "__pthread_mutex_lock"
         type = <optimized out>
         id = <optimized out>
 #2  0x3fffa3f6ce8c in _gf_msg (domain=0x3fff98006c90 "c_glusterfs-client-0",
     file=0x3fff9fb34de0 "client.c", function=0x3fff9fb34cd8 <__FUNCTION__.18849> "notify",
     line=<optimized out>, level=<optimized out>, errnum=<optimized out>,
     trace=<optimized out>, msgid=114020,
     fmt=0x3fff9fb35350 "parent translators are ready, attempting connect on transport")
     at logging.c:2058
         ret = <optimized out>
         msgstr = <optimized out>
         ap = <optimized out>
         this = 0x3fff980061f0
         ctx = 0x10027010
         callstr = '\000'
         passcallstr = 0
         log_inited = 0
         __PRETTY_FUNCTION__ = "_gf_msg"
 #3  0x3fff9fb084ac in notify (this=0x3fff980061f0, event=<optimized out>,
     data=0x3fff98008c50) at client.c:2116
         conf = 0x3fff98056dd0
         __FUNCTION__ = "notify"
 #4  0x3fffa3f68ca0 in xlator_notify (xl=0x3fff980061f0, event=<optimized out>,
     data=<optimized out>) at xlator.c:491
         old_THIS = 0x3fff98008c50
         ret = 0
 #5  0x3fffa3f87700 in 

Re: [Gluster-devel] Glusto Happenings

2018-09-24 Thread Amar Tumballi
Planning to discuss this in next week's Gluster Maintainers' meeting.
Please make sure we have the Glusto maintainers in the meeting, so we can have
a good sync-up!

-Amar

On Wed, Sep 19, 2018 at 10:15 PM, Jonathan Holloway wrote:

> Sounds good. I'll talk to Vijay and Akarsha about providing updates on
> some of their activities with the test repo too.
>
> Cheers,
> Jonathan
>
> On Mon, Sep 17, 2018 at 7:37 PM Amye Scavarda  wrote:
>
>> Adding Maintainers, as that's the group that will be most interested in
>> this.
>> Our next maintainers meeting is October 1st; do you want to present the
>> current status there?
>> - amye
>>
>> On Mon, Sep 17, 2018 at 12:29 AM Jonathan Holloway wrote:
>>
>>> Hi Gluster-devel,
>>>
>>> It's been a while since we updated gluster-devel on things related to
>>> Glusto.
>>>
>>> The big thing in the works for Glusto is Python3 compatibility.
>>> A port is in progress, and the target is to have a branch ready for
>>> testing in October. Look for another update here when that is available.
>>>
>>> Thanks to Vijay Avuthu for testing a change to the Python2 version of
>>> Carteplex (the cartesian product module in Glusto that drives the runs_on
>>> decorator used in Gluster tests). Tests inheriting from GlusterBaseClass
>>> have been using im_func to make calls against the base class setUp method.
>>> This change allows the use of super() as well as im_func.
>>>
>>> On a related note, the syntax for both im_func and super() changes in
>>> Python3. The "Developer Guide for Tests and Libraries" section of the
>>> glusterfs/glusto-tests docs currently shows 
>>> "GlusterBaseClass.setUp.im_func(self)",
>>> but will be updated with the preferred call for Python3.
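>>>
>>> For illustration, here is a minimal sketch of the two call styles (the
>>> class and test names below are stand-ins, not actual glusto-tests code):
>>>
>>>     class GlusterBaseClass(object):     # stand-in for the real base class
>>>         def setUp(self):
>>>             print("base setUp")
>>>
>>>     class TestExample(GlusterBaseClass):
>>>         def setUp(self):
>>>             # Python 2 only: im_func pulls the plain function out of the
>>>             # unbound method so it can be called with an explicit self:
>>>             #   GlusterBaseClass.setUp.im_func(self)
>>>             # Works on both Python 2 and Python 3 (Python 3 also allows
>>>             # the zero-argument form super().setUp()):
>>>             super(TestExample, self).setUp()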
>>>
>>> And lastly, you might have seen an issue with tests under Python2 where
>>> a run kicked off via py.test or /usr/bin/glusto would immediately fail with
>>> a message indicating gcc needs to be installed. The problem was specific to
>>> a recent update of PyTest and scandir, and the original workaround was to
>>> install gcc or a previous version of pytest and scandir. The scandir
>>> maintainer fixed the issue upstream with scandir 1.9.0 (available in PyPI).
>>>
>>> That's all for now.
>>>
>>> Cheers,
>>> Jonathan (loadtheacc)
>>>
>>>
>>
>>
>>
>> --
>> Amye Scavarda | a...@redhat.com | Gluster Community Lead
>>
>
>



-- 
Amar Tumballi (amarts)
___
Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel

[Gluster-devel] gluster-gnfs missing in CentOS repos

2018-09-24 Thread Sahina Bose
Hi all,

gluster-gnfs rpms are missing in 4.0/4.1 repos in CentOS storage. Is this
intended?

thanks
sahina
___
Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] On making performance.parallel-readdir as a default option

2018-09-24 Thread Soumya Koduri

Please find my comments inline.

On 9/22/18 8:56 AM, Raghavendra Gowdappa wrote:



On Fri, Sep 21, 2018 at 11:25 PM Raghavendra Gowdappa <rgowd...@redhat.com> wrote:


Hi all,

We have a feature, performance.parallel-readdir [1], that is known to
improve performance of readdir operations [2][3][4]. The option is
especially useful when the distribute scale is relatively large (>10),
and it is known to improve performance of readdir operations even at a
distribute count of 1 [4].

However, this option is not enabled by default. I am proposing here to
make it a default feature.

But there are some important things to be addressed in
readdir-ahead (which is a core part of parallel-readdir) before we
can do so.

To summarize the issues with readdir-ahead:
* There seems to be one prominent problem of missing dentries with
parallel-readdir. One such problem was discussed on the tech list just
yesterday, and I've heard about this recurrently earlier too. Not sure
whether this is caused by the missing unlink/rmdir/create etc. fops
(see below) in readdir-ahead. ATM, no RCA.


IMHO, this is a must-fix before enabling this option by default.


* Fixes to maintain stat consistency in pre-fetched dentries have
not made it into downstream yet (though merged upstream [5]).
* readdir-ahead doesn't implement directory modification fops like
rmdir/create/symlink/link/unlink/rename. This means the cache won't be
updated with newer content, even on a single mount, till it is consumed
by the application or purged.


As you had explained, since this affects cache consistency, this too
needs to be addressed.
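
For illustration only, a conceptual sketch (plain Python, not xlator code)
of the staleness described above: a listing that is pre-fetched once and
never invalidated keeps serving deleted entries until the cache is purged
or consumed.

    import os

    class NaiveReaddirCache(object):
        """Pre-fetches a directory listing once and never invalidates it."""

        def __init__(self, path):
            self.path = path
            self.entries = None

        def readdir(self):
            if self.entries is None:               # pre-fetch on first call
                self.entries = sorted(os.listdir(self.path))
            return self.entries                    # may be stale by now

        def unlink(self, name):
            os.unlink(os.path.join(self.path, name))
            # Missing step: the cached listing is neither updated nor purged
            # here, so later readdir() calls still return the deleted name --
            # analogous to readdir-ahead not implementing the directory
            # modification fops listed above.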



* dht linkto-files should store relative positions of subvolumes
instead of absolute subvolume names, so that changes to the immediate
children won't render them stale.


FWIU from your explanation, this may affect performance for a brief
moment when the option is turned on, but as such doesn't produce
incorrect results. So, considering that these options are usually
configured at the beginning of the volume configuration and not toggled
often, this may not be a blocker.




* Features that parallel-readdir depends on should be enabled
automatically when parallel-readdir is enabled, even if they were
turned off earlier [6].


Since readdir-ahead is one such option which has not been turned on (by
default) till now, and most of the above-mentioned issues are with
readdir-ahead, would it be helpful if we enable only readdir-ahead for a
few releases, get enough testing done, and then consider parallel-readdir?



Thanks,
Soumya


I've listed the important known issues above. But we can discuss which
of them are blockers for making this feature a default.

Thoughts?

[1] http://review.gluster.org/#/c/16090/
[2]

https://events.static.linuxfound.org/sites/events/files/slides/Gluster_DirPerf_Vault2017_0.pdf
(sections on small directory)
[3] https://bugzilla.redhat.com/show_bug.cgi?id=1628807#c35

[4] https://www.spinics.net/lists/gluster-users/msg34956.html
[5] http://review.gluster.org/#/c/glusterfs/+/20639/
[6] https://bugzilla.redhat.com/show_bug.cgi?id=1631406

regards,
Raghavendra



___
Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel